1701. Differences in Diagnostic Performance of β-d-Glucan Testing in Patients with Varying Degrees of Susceptibility to Invasive Fungal Infections

Abstract. Background. The usefulness of β-d-glucan (BDG) testing for the diagnosis of invasive fungal infection (IFI) in high-risk patients has been well demonstrated. However, data on its usefulness in patients without risk factors are limited. We evaluated differences in the diagnostic performance of BDG testing in patients with varying degrees of susceptibility to IFI. Methods. From April 2017 to May 2018, all consecutive patients (≥18 years old) who underwent BDG testing (Beijing Gold Mountainriver Tech) were enrolled. Patients were classified into three groups: Group A, patients with host factors defined by the 2008 European Organization for Research and Treatment of Cancer/Mycoses Study Group (EORTC-MSG) diagnostic criteria; Group B, patients with malignancy who had received chemotherapy within the previous month but had no host factors; and Group C, all others. Cases of proven and probable IFI defined by the EORTC-MSG criteria, Pneumocystis pneumonia, and all fungemia were considered true IFIs. Sensitivity, specificity, and positive and negative predictive values (PPV and NPV) were calculated with a cut-off value for positivity of ≥80 pg/mL. Results. Among 473 eligible patients, 190, 142, and 141 were classified into groups A, B, and C, respectively. Rates of true IFI differed significantly among the groups (57/190, 19/142, and 10/141, respectively; P < 0.001). Sensitivities were 0.83, 0.68, and 0.70, and specificities were 0.62, 0.59, and 0.63 in groups A, B, and C, respectively. Predictive values differed considerably among the three groups (PPV 0.48, 0.20, and 0.12; NPV 0.89, 0.92, and 0.97, respectively). Conclusion. The BDG test is a useful assay for IFI diagnosis; however, its clinical interpretation should differ according to patient risk. Whereas BDG testing can serve as a tool for predicting IFI in high-risk patients, in patients without risk factors it can serve only as a tool for excluding IFI. Disclosures. All authors: No reported disclosures.

Prevalence and Risk Factors for Endogenous Fungal Endophthalmitis in Patients with Candidemia

Methods. Adult patients (≥19 years) with candidemia who underwent ophthalmological examination after the diagnosis of candidemia at a tertiary care hospital in South Korea from 2006 to 2018 were enrolled, and clinical data were collected. Results. A total of 152 adult patients with candidemia underwent an ophthalmological examination. Endogenous fungal endophthalmitis was found in 29 patients (19.1%). Patients were categorized into two groups: non-endophthalmitis (NE) and endophthalmitis (E). There was no significant difference between the two groups in age, sex, or underlying comorbidities. Nor was any difference noted in clinical conditions at the diagnosis of candidemia, including concomitant bacteremia, presence of septic shock, receipt of recent surgery, presence of neutropenia, total parenteral nutrition, central venous catheter, urinary catheter, ventilator, dialysis, use of antibiotics, and Candida spp. colonization. However, the rate of abnormal alanine aminotransferase (ALT) was higher in the E group (35.7%) than in the NE group (14.8%), P = 0.008. Moreover, the proportion of C. albicans candidemia was higher in the E group (65.5%) than in the NE group (35.8%), P = 0.003. In contrast, C. parapsilosis candidemia was more common in the NE group (27.6%) than in the E group (6.9%), P = 0.018. Although mortality trended higher in the E group (51.7%) than in the NE group (35.0%), the difference was not statistically significant, P = 0.095. Multivariate logistic analysis showed that C. albicans candidemia (odds ratio [OR] 4.122, 95% confidence interval [CI] 1.653-10.280, P = 0.002) and abnormal ALT (OR 3.839, 95% CI 1.427-10.333, P = 0.008) were significantly associated with endophthalmitis. Conclusion. Endogenous fungal endophthalmitis occurred in 19% of adult patients with candidemia. C. albicans candidemia and abnormal ALT were significantly associated with endophthalmitis. Adult patients with candidemia caused by C. albicans or with abnormal ALT should be closely monitored for the possibility of endophthalmitis. Disclosures. All authors: No reported disclosures.

Bacterial or Fungal Co-Infection in Patients with Mucormycosis

Background. There is growing concern about infection with multiple organisms, including fungi, in patients with mucormycosis. However, limited data are available on co-infection in these patients. Methods. Patients with proven mucormycosis were retrospectively enrolled at a tertiary hospital from July 2009 to January 2019. Proven mucormycosis was defined as a positive fungal culture for mucormycosis from a sterile biopsy specimen and/or histologic evidence of tissue invasion by hyphae with a positive mucormycosis immunohistochemistry result.
We reviewed other pathogens isolated from sterile or non-sterile sites within 7 days before or after the biopsy of the infected tissue that suggested invasive fungal infection.
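The spread in PPV across the three risk groups in the BDG abstract above is a direct consequence of Bayes' rule: with sensitivity and specificity roughly constant, PPV falls with prevalence while NPV rises. A minimal sketch in Python, using the group sizes and performance figures reported above; the reconstructed 2x2 tables are approximate and for illustration only:

```python
# Predictive values from sensitivity, specificity, and prevalence (Bayes' rule).
# Counts are taken from the abstract; the reconstructed tables are approximate.
groups = {
    "A (EORTC-MSG host factors)": dict(n=190, n_ifi=57, sens=0.83, spec=0.62),
    "B (recent chemotherapy)":    dict(n=142, n_ifi=19, sens=0.68, spec=0.59),
    "C (no risk factors)":        dict(n=141, n_ifi=10, sens=0.70, spec=0.63),
}

for name, g in groups.items():
    tp = g["sens"] * g["n_ifi"]                # true positives
    fn = g["n_ifi"] - tp                       # false negatives
    tn = g["spec"] * (g["n"] - g["n_ifi"])     # true negatives
    fp = (g["n"] - g["n_ifi"]) - tn            # false positives
    print(f"{name}: PPV={tp / (tp + fp):.2f}, NPV={tn / (tn + fn):.2f}")
```

Running this reproduces the reported pattern (PPV ≈ 0.48, 0.20, 0.13; NPV ≈ 0.89, 0.92, 0.97), which is why the abstract recommends BDG as a rule-in tool in high-risk patients but only a rule-out tool in patients without risk factors.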
Incidence of catheter-related complications in patients with central venous or hemodialysis catheters: a health care claims database analysis Background Central venous catheter (CVC) and hemodialysis (HD) catheter usage are associated with complications that occur during catheter insertion, dwell period, and removal. This study aims to identify and describe the incidence rates of catheter-related complications in a large patient population in a United States-based health care claims database after CVC or HD catheter placement. Methods Patients in the i3 InVision DataMart® health care claims database with at least 1 CVC or HD catheter insertion claim were categorized into CVC or HD cohorts using diagnostic and procedural codes from the US Renal Data System, American College of Surgeons, and American Medical Association's Physician Performance Measures. Catheter-related complications were identified using published diagnostic and procedural codes. Incidence rates (IRs)/1000 catheter-days were calculated for complications including catheter-related bloodstream infections (CRBSIs), thrombosis, embolism, intracranial hemorrhage (ICH), major bleeding (MB), and mechanical catheter-related complications (MCRCs). Results Thirty percent of the CVC cohort and 54% of the HD cohort had catheter placements lasting <90 days. Catheter-related complications occurred most often during the first 90 days of catheter placement. IRs were highest for CRBSIs in both cohorts (4.0 [95% CI, 3.7-4.3] and 5.1 [95% CI, 4.7-5.6], respectively). Other IRs in the CVC and HD cohorts, respectively, were thrombosis, 1.3 and 0.8; MCRCs, 0.6 and 0.7; embolism, 0.4 and 0.5; MB, 0.1 and 0.3; and ICH, 0.1 in both cohorts. Patients with cancer at baseline had significantly higher IRs for CRBSIs and thrombosis than non-cancer patients. CVC or HD catheter-related complications were most frequently seen in patients 16 years or younger. Conclusions The risk of catheter-related complications is highest during the first 90 days of catheter placement in patients with CVCs and HD catheters and in younger patients (≤16 years of age) with HD catheters. Data provided in this study can be applied toward improving patient care. Background Central venous catheters (CVCs) are vascular access devices indicated for the prolonged administration of intravenous medications, fluids, or total parenteral nutrition, for repeated blood sampling, and for hemodialysis (HD) [1,2]. Annual CVC exposure in hospital intensive care units has been estimated to total 15 million days in the United States [3]. Use of CVCs for HD (hereafter referred to as HD catheters) has increased in recent years, comprising approximately 25% of prevalent HD patients in the United States [4]; this is despite the recommendation by the National Kidney Foundation that tunneled, cuffed catheters for HD access be limited to <10% of prevalent dialysis patients due to the greater risk of morbidity and mortality [5]. Long-term dialysis using tunneled, cuffed catheters increases a patient's risk of death 2- to 3-fold and of serious infection 5- to 10-fold compared with dialysis using arteriovenous fistulas [1]. Additionally, compared with the general population, dialysis patients have a 100-fold greater risk of sepsis-related death, with infection-related and all-cause mortality highest in those with catheters [6]. CVC and HD catheter usage are associated with complications that occur during catheter insertion, throughout the catheter dwell period, and at the time of removal.
Identification and prevention of catheter-related complications are critical to improving patient care [7,8]. Common complications include catheter misplacement or breakage, catheter occlusion due to local or systemic infection, and thrombosis [7][8][9][10][11]. Reported incidence rates (IRs) of catheter-related complications vary widely depending on the terminology and definition of complications, patient population, units of measurement, duration of catheterization and follow-up, catheter location, placement and care procedures, and diagnostic methods [9]. Patients undergoing HD may have different complication rates than non-HD patients. The US Renal Data System provides guidelines for the coding of HD catheter procedures [10][11][12]. Therefore, investigating complication rates in patients with and without HD catheter procedures is possible and important for understanding IRs in different patient groups. The objective of this study was to obtain unadjusted IRs of first complications after catheter insertion in a real-world setting. Previous studies have reported complication rates in patients with catheters; however, to our knowledge no studies have described incident complications of CVCs, including HD catheters, in both adults and children at different time periods following insertion. The current study uses a consistent methodologic approach to identify and describe the IRs of select first complication events after catheter insertion in a large, geographically diverse US patient cohort. This is important for improving patient care. Study design and data source A retrospective cohort analysis of the i3 InVision DataMart® administrative claims database was conducted to determine the IRs of select complications in patients with CVC or HD catheter placement followed by removal or replacement. The i3 InVision database is a proprietary sample of individuals receiving health insurance benefits from a large health plan, comprising discounted fee-for-service independent practice association plans throughout the United States. The database includes medical, pharmacy, and limited laboratory claims for more than 39 million patients. Of these, over 24 million have been continuously enrolled for 12 or more months during the study period, representing approximately 8.2% of the 2000-2007 general US population [27]. Patients in the database between May 2000 and January 2007, with at least 1 claim for CVC or HD catheter insertion and a record of a catheter removal or replacement procedure, were included in the initial data cut. Among these patients, those who were continuously enrolled in the health plan for 180 days prior to the first CVC or HD catheter placement and who had no complication events during this 180-day period were included in the study. Requiring patients to be free of defined complications prior to catheter placement, and to have both catheter placement and removal or replacement claims, allowed us to more accurately attribute study outcome events to catheter procedures and to ensure a meaningful interpretation of catheter-related complications from claims data. Patients were followed from the date of their first qualifying claim for insertion of either type of device (index date) until the onset date of each distinct CVC or HD catheter-related complication event or another censoring date, such as the date of catheter removal or replacement, health plan termination, or end of study, whichever occurred first.
In patients with multiple catheter insertion and removal or replacement claims, only the patient's first qualifying placement claim was counted as the index date for the IR calculation of study outcomes. In patients who were censored at catheter removal or replacement, only the patient's first such claim was counted if multiple removal or replacement claims were on record. Catheter placements, as well as removals and replacements, were identified using combinations of codes from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) [28], Current Procedural Terminology (CPT), and the Healthcare Common Procedure Coding System (HCPCS), based on categories and groupings proposed by the US Renal Data System [10][11][12], the American College of Surgeons [29], and the Physician Consortium for Performance Improvement [30]. Patients were categorized into 2 mutually exclusive cohorts (CVC or HD catheter, with HD catheter placement taking precedence given the greater specificity of claim codes consistent with HD catheter insertion). Patients with an HD catheter placement claim who also had at least 1 claim for chemotherapy or parenteral nutrition in the 30-day period prior to or following their HD catheter insertion claim were excluded. Patients in the CVC cohort were further classified as those with or without cancer at baseline (i.e., prior to CVC placement) using ICD-9-CM diagnosis codes for malignant neoplasms. Codes and definitions used to identify and assign patients into cohorts are listed in Additional file 1. Outcome events Outcome events included first (incident) complications after catheter insertion that occurred during at least 1 overnight hospitalization or a hospital emergency room visit that did or did not result in hospitalization; each was identified using ICD-9-CM diagnostic or CPT procedure codes (Additional file 1) consistent with catheter-related bloodstream infections, thrombosis, embolism, intracranial hemorrhage, major bleeding events, and mechanical catheter-related complications (MCRCs). The first or only occurrence of any of the above complications was also defined as an outcome and used for the IR calculation of the "Any complication" category. In patients with only 1 or multiple complications of the same type, the first or only complication occurrence was included in the IR calculation for specific types of events. In patients with multiple complication events of different types, the first occurrence of each distinct complication was used for the IR calculation of specific event types. Statistical analyses Patient characteristics, including gender and age (with categories of <2, 2-16, 17-64, and ≥65 years) and duration of catheter placement (with time periods of 1 to <90, 90 to <180, 180 to <365, and ≥365 days), were reported for the HD and CVC cohorts as well as for the CVC patients by cancer-at-baseline status; the HD cohort did not include any patients younger than 2 years of age. IRs were calculated as the total number of complication events that occurred within the defined catheter placement period divided by the sum of the catheter-days at risk during the period of catheter placement and were expressed per 1000 catheter-days. The 95% confidence interval (CI) of the IR was calculated based on a normal approximation, under the assumption that the number of events follows a Poisson distribution. The 95% CI was expressed as exp{ln(IR) ± 1.96·√(1/number of events)}.
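To make the rate construction above concrete, here is a minimal sketch of the IR and 95% CI computation as described; the event and catheter-day counts below are hypothetical placeholders chosen to match the reported CVC CRBSI rate (the raw counts are not given in the text):

```python
import math

def ir_per_1000(events: int, catheter_days: float):
    """Incidence rate per 1000 catheter-days with a 95% CI under the
    normal/Poisson approximation: exp{ln(IR) +/- 1.96*sqrt(1/events)}."""
    ir = 1000.0 * events / catheter_days
    half_width = 1.96 * math.sqrt(1.0 / events)
    return ir, ir * math.exp(-half_width), ir * math.exp(half_width)

# Hypothetical counts: 800 first CRBSIs over 200,000 catheter-days.
print(ir_per_1000(events=800, catheter_days=200_000))
# -> (4.0, ~3.7, ~4.3), matching the reported CVC CRBSI rate and CI
```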
Catheter-days at risk were defined as the number of days between catheter insertion and event onset date (i.e., first outcome event) or the earliest date of catheter removal or replacement, health plan termination, or end of study. Catheter-days differed for each complication event and within each event catheter placement duration period because patients were censored at variable event dates. Fisher's exact test was used to determine statistical significance of differences in IRs between CVC patients with or without cancer at baseline. A sensitivity analysis was also conducted to determine if eliminating the requirement for having a catheter removal or replacement claim had an effect on complication rates. We calculated the IRs of complications in the first 90 days of catheter placement by using less stringent censoring criteria and eliminating the requirement to have a catheter removal or replacement claim. Patient disposition and characterization Patient assignment to the CVC and HD catheter cohorts is represented in Figure 1. Included in the study were 16,721 and 5,984 patients who underwent removal or replacement of a CVC or HD catheter, respectively. Patient characteristics, including duration of catheter placement are summarized in Table 1. Incidence of catheter-related complications The IRs of any catheter-related complications by duration of catheter placement are depicted in Figure 2, categorized by patient cohort (HD catheter or CVC), as well as cancer-at-baseline status (CVC cohort only). CVC patients with cancer at baseline had statistically significantly higher rates of any complication compared with CVC patients without cancer during the first 90 days of catheter placement (P = .0001). For all catheter placement periods after 90 days, patients with cancer at baseline had statistically significantly lower complication rates than patients with no reported cancer in the same catheter placement period ( Figure 2). Table 2 presents the IRs of specific and any CVC and HD catheter-related complications among patients with catheter placement lasting less than 90 days. For specific complications, CRBSIs had the highest IR, 4.0 for CVCs (95% CI, 3.7-4.3) and 5.1 for HD catheters (95% CI, 4.7-5.6), followed by thrombosis, 1.3 for CVCs (95% CI, 1.1-1.4) and 0.8 for HD catheters (95% CI, 0.7-1.0). Other IRs for CVCs and HD catheters were lower, ranging from 0.09 (MB in CVCs) to 0.68 (MCRCs in HD catheters). The IRs for ICH were identical (0.10) in both cohorts. The incidences of CRBSIs, thromboses, and any complication were significantly higher in patients with cancer at baseline than in those with no reported cancer (P < .05 for each type of complication). The IRs of any catheter-related complications occurring in the first 90 days of catheter placement are shown by patient cohort, baseline cancer status, and patient age in Figure 3. Among CVC patients with baseline cancer, younger patients (in combined age groups <2 and 2-16 years, data not shown) had statistically significantly higher rates of any complication compared with older age groups (P = .0005). Complication rates did not differ between younger and older age groups in CVC patients without cancer. HD patients in age group 2-16 years had statistically significantly higher rates of complications versus older age groups (P = .0019). 
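The cancer versus non-cancer comparisons above rely on Fisher's exact test. A minimal sketch with scipy on a 2x2 table of patients with and without a first complication; the cell counts are hypothetical placeholders, since the underlying tables are not reproduced in the text:

```python
from scipy.stats import fisher_exact

# Rows: cancer at baseline, no cancer; columns: complication, no complication.
# Hypothetical counts for illustration only.
table = [[120, 880],
         [150, 2850]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR={odds_ratio:.2f}, p={p_value:.2g}")
```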
Temporal trends for the most frequent complications (i.e., CRBSI and thrombosis) were examined in both the CVC and HD catheter cohorts over the 2001-2006 study period, and there was no consistent evidence suggesting substantive changes in the incidence of either CRBSIs or catheter-related thrombosis during this period (Figure 4). In the sensitivity analysis, during the first 90 days of catheter placement, the IRs for the most common complications decreased, while rates for embolism, ICH, and MB increased slightly. The decrease in IR for CRBSI (from 4.0 to 1.6) was more substantial than for thrombosis (from 1.3 to 0.8) among patients in the CVC cohort (Additional file 2). Discussion This study analyzed the incidence of catheter-related complications occurring in a real-world setting using a single, large population sample and consistent methods to identify and calculate unadjusted IRs for these different complications. The results demonstrate that the highest rates of first complications after CVC and HD catheterization occur during the first 90 days of catheter placement. Rates of complications are also higher in cancer patients with CVC placements than in non-cancer patients during the first 90 days after catheter insertion. Complication rates in children with HD catheters, and in children with CVCs who had cancer at baseline, are higher than in the respective catheter cohorts of adults and elderly patients. Earlier reports of IRs of catheter-related complications were characterized by great variability due to inconsistencies in terminology, study design and methodology, small sample sizes, and heterogeneity of patient populations. For example, some studies did not differentiate embolism from thrombosis [31], and others did not clearly define the specific types of "catheter malfunction" being studied [14,32,33]. This study utilized diagnostic and procedural codes from a large administrative claims database to extract data describing the most common complications following CVC or HD catheter insertion. The finding that catheter-related complications were considerably more common during the first 90 days following catheter placement may reflect the fact that some catheter-related complications eventually lead to catheter removal (the censoring event in the present study). Recurrent catheter insertions and complications occurring after repeated catheter placements were not evaluated. In this study, the most frequent complications associated with catheter placement were CRBSIs and catheter-related thrombosis. The incidence of CVC-related thrombosis was low overall (4.4% of catheters, or 1.3/1000 catheter-days). In previous reports, the incidence of CVC-related thrombosis varied widely (0.6%-33% of catheters) because of differences in catheter type, study design and population, and sensitivity of the examination procedures [9]. There is growing evidence that catheter-related infection and thrombosis are closely related because both involve fibrin sheath formation, and the risk of clinically apparent thrombosis is markedly increased after an infection episode [9]. In part, this may account for these 2 types of complications having the highest IRs in both the CVC and HD catheter cohorts of our study. As noted, a sensitivity analysis was conducted to determine whether eliminating the requirement for a catheter removal or replacement claim had an effect on complication rates.
Comparison of findings from the first 90 days of catheter placement (i.e., higher CRBSI IR with the removal/replacement requirement) demonstrates that some complications, such as CRBSI, are more likely to result in catheter removal. While CRBSI typically presents with visible symptoms, up to two-thirds of patients with catheter-related thrombosis are asymptomatic [9,34,35]. In most instances, catheters causing CRBSI must be removed to resolve the problem [34]. In contrast, patients identified as having thrombosis can be treated with thrombolytic therapy [35]. While patient age is considered a risk factor for CRBSI (CRBSI is more common in children than in adults [34]), the age distributions were similar in the main and sensitivity analyses. In the sensitivity analyses, IRs for relatively rare complications (e.g., embolism and ICH) increased slightly. These findings could be related to delayed effects of treatment of occluded catheters rather than to catheter placement. Further studies are needed to investigate these findings. Consistent with earlier reports, our study showed a higher frequency of catheter-related complications in patients with cancer than in non-cancer patients during the first 90 days of catheter placement, possibly because of the use of immunosuppressive cancer therapies. The risk of infection is known to be particularly high in neutropenic patients and in patients undergoing chemotherapy prior to stem cell transplantation [36]. More than one insertion attempt (OR, 5.5), ovarian cancer (OR, 4.8), and previous CVC insertion (OR, 3.8) are also significant baseline risk factors for catheter-related thrombosis in cancer patients [37]. In particular, the risk of developing CVC-related thrombosis is significantly higher in patients with cancer who also carry the factor V Leiden mutation (reported relative risk from 2.6 to 7.7) [38,39] or have hyperhomocysteinemia (reported ORs from 3.8 [95% CI, 1.3-11.3] [40] to as high as 33.9 [95% CI, 1.53-751.33] [41]), compared with patients without these conditions. Further analyses based on these and other risk factors are warranted and may yield new insights into the management of cancer patients with indwelling catheters. Although the incidence of catheter-related complications was significantly higher in CVC patients with cancer at baseline compared with non-cancer patients during the first 90 days of catheter placement, non-cancer patients (i.e., patients in the HD catheter or CVC non-cancer groups) had greater incidences of any type of complication in catheter placement periods lasting 90 days or longer. One possible explanation for this finding is that, over time, cancer patients (presumably the more severely ill group and inherently different from non-cancer patients) had poorer survival than the non-cancer patients, resulting in their reduced contribution to lengthier catheter time periods. It is also plausible that the severity of complications in these cancer patients necessitated earlier catheter removal (within the first 90 days), whereas in the non-cancer group complications occurred and catheters were removed later. Because of limitations of the claims database, comorbid conditions and other factors possibly contributing to outcome events were not assessed in any of the groups. This analysis was based on automated medical and prescription claims.
While claims data are extremely valuable for the efficient and effective examination of health care outcomes, all claims databases have certain inherent limitations because the claims are collected primarily for the purpose of reimbursement for health services and not for research. The presence of a diagnosis code on a medical claim does not always indicate the positive presence of disease, because claims data are subject to errors in coding and inaccurate disease classification, or may reflect a 'rule out' diagnostic workup rather than actual disease. To increase specificity in outcome identification, we required at least 1 overnight hospitalization or a hospital emergency room visit and used validated diagnostic and procedural codes identifying these conditions from the published literature [10-12,29,30]. Although we attempted to utilize precise codes to identify catheter type, cancer status, and outcome events, coding errors may have misclassified some patients. An additional potential confounder is the inability to distinguish from claims data whether catheterization was the cause of the defined outcome event or the indication for the outcome event. However, we imposed patient selection requirements (i.e., 180 days of outcome-free enrollment prior to index and a record of catheter insertion and removal or replacement claims) in order to correctly ascertain the temporal relationship between the exposure and the outcome event. There is also a potential for misclassification regarding patient ascertainment to either the HD catheter or CVC group. However, to more accurately categorize patients in these 2 groups, we used coding recommended by the US Renal Data System and further required patients in the HD cohort not to have claims for chemotherapy or parenteral nutrition within 30 days of HD catheterization. Patients without HD procedure codes were allocated to the CVC cohort. Another limitation of our study was that patients in the database we utilized were commercially insured and might not be completely generalizable to the general US population. There is limited ability in claims data to determine whether demographic characteristics of the study population are similar to those of the general population. However, the age distribution of the study population was similar to that of the general US population during the same time period. Previous studies showed that the site of catheter insertion can influence the incidence of certain catheter-related adverse events such as infectious and thrombotic complications [22,42,43]. Limitations inherent to claims data did not allow us to determine the type of catheter device used (tunneled vs. non-tunneled devices), the site of the catheter insertion, or whether the catheter that was removed during the 90-day period was the same catheter that was placed. The extent to which "good" catheter placement and care recommendations are practiced can also influence complication rates; however, the claims data source did not permit us to determine whether techniques including sterilization methods (e.g., ethanol locks) were utilized at the clinical level. It was also not possible to determine whether lag time between the actual catheter removal or replacement and claims processing dates may have biased our results. Future analyses examining the correlation of claims for infection and thrombosis may provide more insight into the possible simultaneous occurrence of these complications.
While patients who require long-term vascular access are generally very sick, patients who require catheters for HD may differ in important ways from those who require central venous access devices for medical conditions other than hemodialysis. An understanding of these differences could provide further guidance for improving the care of patients who are severely or chronically ill. Conclusion This study provides a new body of data on the risk of catheter-related complications derived from a large patient population. The risk of catheter-related complications is highest during the first 90 days of catheter placement in patients with CVCs and HD catheters and in younger patients (≤16 years of age) with HD catheters. In younger patients with CVCs (<2 years and 2-16 years of age, combined), the risk of complications is higher than in patients over 16 years of age only among patients with cancer at baseline. This information provides valuable additional context for the development of strategies for the management of dysfunctional CVC and HD catheters and for improved patient care.
The Behavior of MDPC-23 Cells Modulated by DSPP Phosphophoryn (PP) and Dentin Sialoprotein (DSP) are two of the most abundant dentin matrix non-collagenous proteins. These two proteins are derived from cleavage of the Dentin Sialophosphoprotein (DSPP) precursor protein by the BMP1 protease. PP is well established as a nucleator that initiates dentin mineralization. Expression of the DSPP precursor protein has been reported to be required for normal odontoblast lineage differentiation. DSPP protein has also been reported to modulate cell migration, proliferation and differentiation, leading to dentin formation. To further understand the role of DSPP in the odontoblast lineage, we used MDPC23 cells to examine the mineralization process and the effects of DSPP on this process. The MDPC23 cell line consists of odontoblast-like cells, which exhibit unique features of dental pulp stem cells such as high alkaline phosphatase activity and expression of the odontoblast markers DSP and PP. MDPC23 cells were cultured in Mineralization Medium (MM) containing mineral-inducing molecules (i.e., 100 μg/ml ascorbic acid, 10 mM β-glycerol phosphate and 10 nM dexamethasone) to examine MM effects on cell growth and mineral deposition. Recombinant DSPP protein was used to test its effects on dental pulp cell growth and mineralization. We found that MDPC23 cells cultured in mineralization medium proliferated slowly and spread widely at 4d. However, MDPC23 cells in the MM group shrank and gradually detached from the well surface at 6d. Strong mineral deposition was detected with Alizarin Red staining at 7d. In the presence of recombinant DSPP protein (recDSPP), MDPC23 cells cultured in mineralization medium also proliferated slowly and spread widely, but remained vital from 4d through 20d (4d, 6d, 7d, 8d, 9d, 10d, 12d, 18d and 20d). Strong mineral deposition was detected with Alizarin Red staining at 18d and 20d. Thus, under the combined MM and recDSPP treatment, MDPC23 cells maintained cell vitality and the mineralization process was delayed. This study lends support to the idea that DSPP is required for maintaining the normal odontoblast lineage. Introduction DSP and PP are the most dominant noncollagenous proteins in dentin. These two proteins are derived from the DSPP mRNA, which is translated into a DSPP precursor protein that is further processed by the BMP1 protease to generate the DSP and PP proteins [1][2][3]. Since DSP protein, PP protein and DSPP mRNA are mainly expressed in polarized odontoblasts, DSP and PP were postulated to be associated with dentin mineralization [4]. PP has been well established as a nucleator for dentin mineralization [5][6][7]. Mutations in the DSPP gene have been linked to dentinogenesis imperfecta types II and III [8,9]. DSPP knockout mice displayed symptoms similar to those of patients with dentinogenesis imperfecta III [10]. Originally, the role of DSP was controversial. For example, DSP was reported to be a weak inhibitor of mineralization [11]. Recently, DSP was reported to regulate dental pulp cell differentiation into odontoblasts [12,13]. To better understand the role of DSPP in odontoblast lineage differentiation during tooth development, we previously systematically examined teeth from wild-type (wt) and DSPP knockout (KO) C57BL/6 mice between the ages of post-natal day 1 and 3 months. We found that DSPP KO mice have hypomineralized teeth, thin dentin and a large dental pulp chamber, similar to those of patients with dentinogenesis imperfecta III, as reported by Kulkarni et al. [10].
Furthermore, we found developmental abnormalities not previously reported, such as circular dentin formation within dental pulp cells and altered odontoblast differentiation in DSPP KO mice, even as early as one day after birth. Surprisingly, we also identified chondrocyte-like cells in the dental pulp of KO mouse teeth. We therefore suggested that expression of the DSPP precursor protein is required for normal odontoblast lineage differentiation and that the absence of DSPP allows dental pulp cells to differentiate into chondrocyte-like cells [14]. The transient expression of DSP protein and DSP-PP transcripts occurs first in the preameloblasts, and next in both preameloblasts and young odontoblasts. Finally, DSP-PP transcripts exhibit sustained expression in odontoblasts [15,16]. PP was reported to promote cell migration and the expression of high levels of type I collagen and PP in dental pulp cells. The addition of recombinant DSP/PP proteins affected cell proliferation and differentiation in a dental pulp cell line [17]. Taken together, DSPP has a major role in dentin mineralization. DSPP also has roles in dental pulp cell migration, proliferation and differentiation. Furthermore, the wider DSPP mRNA expression in preameloblasts, dental pulp cells, bone cells and salivary glands suggests that additional DSPP roles are yet to be discovered. The MDPC23 cell line was established from fetal mouse molar papillae by Dr. Tom Hank [18,19]. MDPC23 cells express high alkaline phosphatase activity as well as two odontoblast differentiation markers: dentin phosphoprotein and dentin sialoprotein. In this study, we used MDPC23 cells to investigate the mineralization process in vitro, and we examined whether DSPP could affect the behavior of MDPC23 cells with regard to cell vitality and mineral formation. Cells The original mouse dental pulp cell line MDPC23 [18,19] was later determined to be a rat odontoblast-like cell line [20]. The MDPC23 cell line was obtained from Dr. Jacque Nor (University of Michigan Dental School). MDPC23 cells were used to test the ability to deposit minerals in mineralization solution containing ascorbic acid, β-glycerol phosphate and dexamethasone (for details see Culture medium below). We investigated the effect of DSPP on MDPC23 cell proliferation, differentiation and mineral formation. Culture medium and cell culture preparation MDPC23 cells were seeded at 20 × 10^4 cells per well of a 6-well plate in α-MEM with 0.1% FBS, L-glutamine and 1% penicillin-streptomycin overnight; the medium was then replaced with the assigned treatments the next day. For the control group, MDPC23 cells were cultured in growth medium, composed of complete α-MEM supplemented with 15% Fetal Bovine Serum (FBS), L-glutamine and 1% penicillin-streptomycin (GIBCO, USA). The growth medium was replaced with fresh medium every other day. In the mineralization medium (MM) group, MDPC23 cells were cultured in the same culture medium with additional mineral-inducing components: 100 μg/ml ascorbic acid, 10 nM dexamethasone, and 10 mM β-glycerol phosphate. Cells in the MM group were given fresh growth medium containing mineralization solution every other day. In the third group (MM/recDSPP), MDPC23 cells were cultured with mineralization medium plus recombinant DSPP solution (100 μl recDSPP in insect culture medium per well). For each group, the specific culture medium was changed every other day.
Preparation of recombinant DSPP protein from the baculovirus system Rat DSP-PP240 cDNA in the baculovirus expression vector pVL1392 was used to generate recombinant DSP430/PP240 (i.e., DSP protein with 430 amino acid residues and PP protein with 240 amino acid residues) proteins in insect sf9 cell culture medium. The sf9 cell supernatant from DSP-PP240 cDNA in pVL1392 contained the DSP430/PP240 protein mixture, which was used directly in cell culture (100 μl recDSPP per well). Cell number counting Cells were treated with 1X trypsin-EDTA. The cell number per well was determined with a hemocytometer. Once cells had been treated with 1X trypsin-EDTA at day 4, those wells were discarded. Data are presented as the mean ± S.D. of triplicate samples. MDPC23 cell morphology under different culture media MDPC23 cells (40 × 10^4 cells per well) were cultured in 2 mL of MDPC23 growth medium as described in the previous section. Medium in the three groups was changed every other day for the required number of days. Triplicates were set up for each group. At d4, d6, d7, d8, d9, d10, d18 and d20, the cells from each group were examined by light microscopy. Alizarin Red staining for Ca++ deposition Alizarin Red (40 mM, pH 4.2) was freshly prepared. Wells were washed with cold PBS for 5-10 min and then fixed in cold 70% ethanol for 1 hr. The wells were washed twice with water and stained with filtered Alizarin Red for 10 min at room temperature. In the final step, the wells were washed with water to remove nonspecific staining and photographed. Statistical Analysis Results are presented as means ± standard deviation (S.D.). A two-sample t-test for mean difference with unequal variances was carried out using the Statistical Analysis System program (SAS Institute Inc., Cary, NC, USA) by personnel at the Center for Statistical Consultation and Research of the University of Michigan. MDPC23 cells cultured for 4d in regular growth medium showed higher cell proliferation MDPC23 cells were cultured in three different media (i.e., control group, MM group and MM/recDSPP group). All groups of MDPC23 cells were seeded at 20 × 10^4 cells per well in 0.1% FBS overnight, and the next morning the medium was replaced with the different treatments (see Table 1). In each group, the cells were given the specific medium every other day. After 4d in culture, the MDPC23 cells in the control group showed the most vigorous growth: cell proliferation was most pronounced when MDPC23 cells were incubated in regular growth medium. The presence of ascorbic acid, β-glycerol phosphate and dexamethasone in the growth medium in the MM group reduced MDPC23 cell proliferation. Similarly, MM plus recDSPP in the growth medium reduced MDPC23 cell proliferation (see Table 1). MDPC23 cells in the control group were highly packed, with a cuboid shape, after 4-day and 6-day culture in α-MEM growth medium with 15% FBS, glutamine and 1% penicillin-streptomycin. Table 1 shows that MDPC23 cells in the control group reached higher cell numbers than those of the MM and MM/recDSPP groups. Figure 1 shows that MDPC23 cells in the control group are highly packed and appear cuboid after 4 days and 6 days of culture (Figure 1). MDPC23 cells in the MM/recDSPP group maintained cell vitality and the mineralization process was delayed At 4d of treatment, MDPC23 cells were fibroblast-like and attached well to the surface. At 6d, more cells appeared. As shown in Figure 3, at 7d, 8d, 9d and 10d, more MDPC23 cells appeared on the well surface.
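The Statistical Analysis subsection above specifies a two-sample t-test with unequal variances; the same computation can be sketched in Python instead of SAS. The triplicate counts below are hypothetical placeholders (in 10^4 cells per well), not the study's data:

```python
import statistics
from scipy.stats import ttest_ind

control = [92.0, 88.5, 95.0]   # hypothetical day-4 triplicate counts, control
mm      = [61.0, 58.5, 64.0]   # hypothetical day-4 triplicate counts, MM group

for name, counts in (("control", control), ("MM", mm)):
    print(f"{name}: {statistics.mean(counts):.1f} +/- {statistics.stdev(counts):.1f}")

t_stat, p_val = ttest_ind(control, mm, equal_var=False)  # Welch's t-test
print(f"Welch t={t_stat:.2f}, p={p_val:.4f}")
```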
In general, under MM plus recDSPP treatment, MDPC23 cells maintained a vital appearance at 4d, 6d, 7d, 8d, 9d, 10d, 18d and 20d (Figure 3). MDPC23 cells at 12d (not shown) also displayed a vital appearance. At 18d and 20d, Alizarin Red staining detected mineral deposition in these wells. A vital cell appearance was observed in the Alizarin Red-stained wells at 18d and 20d (not shown). Comparisons of MDPC23 cells in the control, MM and MM/recDSPP groups We would like to emphasize the major differences among the control, MM and MM/recDSPP groups at 4d and 6d. Cells in the control group showed a highly packed, cuboidal shape. Cells in the MM group showed a stretched, spindle shape at 4d, and the cells shrank and detached at 6d. In contrast, the MM/recDSPP group showed fewer spindle-shaped cells on the well at 4d and more spindle-shaped cells at 6d (Figure 4). Discussion Mineralization solution is well known to induce bone cell differentiation into osteoblasts expressing bone-related proteins. It is also well known to induce dental pulp cells to differentiate into odontoblast-like cells producing dentin matrix proteins, finally leading to dentin mineralization. MDPC23 cells were reported to show peak alkaline phosphatase activity at 6d [18]. Likely, the MM induced MDPC23 cell differentiation and alkaline phosphatase activity, which led to the detection of Alizarin Red staining in the 7-day culture. At 4d, MDPC23 cells in the control group were highly packed and MDPC23 cells in the MM group were spread widely across the well. At 6d, MDPC23 cells in the control group remained highly packed. However, at 6d MDPC23 cells in the MM group shrank and detached from the well surface. Likely, at 6d, part of the cells in the MM group underwent apoptosis. Nevertheless, strong Alizarin Red staining was detected in the 7d cultured wells after extensive washes with water. The whole well at 7d showed red staining, indicating that the matrix was stained. Once collagen had been synthesized and alkaline phosphatase and matrix molecules were available, the matrix could undergo mineralization even after cell death [21]. For example, Marsh et al. (1995) grew mouse MC3T3-E1 cells in medium containing ascorbic acid and β-glycerol phosphate. MC3T3-E1 cells express an osteoblast phenotype and produce a highly mineralized extracellular matrix. They demonstrated that bone-like extracellular matrix mineralized in the absence of functional MC3T3-E1 osteoblasts [21]. It would be interesting to investigate alkaline phosphatase expression at 2d, 3d, 4d, 5d and 6d and to follow the expression of proteases in the wells exhibiting cell detachment at 6d. Future research should target the relationship between alkaline phosphatase and protease activities, which might shed light on MDPC23 cell behavior at 6d. In the mineralization medium group, without recDSPP available in the medium, MDPC23 cells cultured for 6d showed shrunken cells. In contrast, MDPC23 cells in the MM/recDSPP group did not exhibit shrunken cells at 6d; cells in this group displayed vitality at 6d. The cell vitality lasted from 6d through 7d, 8d, 9d, 10d, 12d and 18d to 20d. Cells at both 18d and 20d showed strong Alizarin Red staining, indicating high mineral deposition in these wells. In addition, we observed that MDPC23 cells at 18d and 20d displayed a viable cell appearance in the wells. How did the MDPC23 cells in the MM/recDSPP group maintain a vital appearance up to 20d in culture?
We previously proposed a model in which DSPP is required for stem cell self-renewal and maintenance of odontoblast lineage differentiation. When comparing DSPP KO and wild-type (wt) mice, DSPP KO mice without DSPP expression show fewer dental pulp cells and thicker molar dentin at an early stage compared with wt animals. Given that more differentiated odontoblasts are present in KO mice than in wt animals, there may be a disturbance in stem cell self-renewal such that more stem cells differentiate and the pool of self-renewing stem cells is depleted. DSPP may participate in dental pulp stem cell self-renewal. The absence of DSPP alters the dental pulp stem cell fate, such that the stem cells can no longer maintain the odontoblast lineage differentiation program. These data led us to propose a model in which, in wt animals, the constant supply of odontoblasts needed to sustain continuous formation of dentin depends on the presence of DSPP [14]. Based on this proposed role of DSPP in stem cell renewal and odontoblast lineage differentiation in an animal model [14], here we propose a model (Figure 5) to explain how MDPC23 cells in the MM/recDSPP group display a vital appearance at d20. The model holds that DSPP expression is likely required for dental pulp stem cell renewal and odontoblast lineage differentiation in MDPC23 cells: when MDPC23 cells are cultured in mineralization medium with recombinant DSPP protein, the cells can undergo self-renewal and odontoblast lineage differentiation, whereas when MDPC23 cells are cultured in mineralization medium without recDSPP, stem cell self-renewal and odontoblast lineage differentiation are gradually depleted. In summary: (1) MDPC23 cells cultured in regular growth medium proliferated vigorously; (2) MDPC23 cells cultured in mineralization medium shrank and detached at 6d, with mineral deposition (strong Alizarin Red staining) at 7d; (3) MDPC23 cells cultured in mineralization medium with recDSPP showed active cell spreading from 4d to 20d and exhibited active mineral deposition. Thus, under the combined MM and recDSPP treatment, MDPC23 cells maintained cell vitality and the mineralization process was delayed. This study lends support to the view that DSPP is required for maintaining the normal odontoblast lineage.
Effect of thermal fluctuations on spin degrees of freedom in spinor Bose-Einstein condensates We consider the effect of thermal fluctuations on rotating spinor F=1 condensates in axially-symmetric vortex phases, when all three hyperfine states are populated. We show that the relative phase among different components of the order parameter can fluctuate strongly due to the weakness of the interaction in the spin channel. These fluctuations can be significant even at low temperatures. Fluctuations of the relative phase lead to significant fluctuations of the local transverse magnetization of the condensate. We demonstrate that these fluctuations are much more pronounced for the antiferromagnetic state than for the ferromagnetic one. I. INTRODUCTION Properties of rotating spinor Bose-Einstein condensates currently attract a lot of attention. The first examples of these systems with hyperfine spin F = 1 were found in optically trapped 23Na [1]. The vortex phase diagram of spinor condensates is very rich, since the order parameter has three components in the F = 1 case and five components in the F = 2 case. Topological excitations in spinor condensates have been studied theoretically in a large number of articles; see, e.g., Refs. [2,3,4,5,6]. At the same time, interest is now growing in temperature effects in atomic condensates. Refs. [7,8,9,10,11] theoretically study the Berezinskii-Kosterlitz-Thouless (BKT) transition associated with the proliferation of thermally-excited vortex-antivortex pairs. For instance, in Ref. [8] it was shown that in quasi two-dimensional condensates the BKT transition can occur at rather low temperatures, T ∼ 0.5T_c, for a number of particles in the system N ∼ 10^4. Recently, some signatures of a possible BKT phase were also found close to the critical temperature T_c in experimental work [12], where condensates in an optical lattice were studied. Finally, experimental evidence for the BKT transition in trapped condensates was reported in Ref. [13]. Refs. [14,15] deal with thermal fluctuations of the positions of vortices in rotating scalar condensates. Note that, according to the Mermin-Wagner-Hohenberg theorem, Bose-Einstein condensation is not possible in 2D homogeneous systems. However, application of a trapping potential leads to macroscopic occupation of the ground state of the Bose gas. The aim of the present paper is to study the effect of thermal fluctuations in rotating quasi two-dimensional spinor condensates. These systems have a specific degree of freedom associated with the relative angle among the different components of the order parameter corresponding to different hyperfine states. In other words, this angle determines the coherence among components of the order parameter. It also influences the transverse magnetization of the condensate. In this paper, we focus on thermal fluctuations of this angle. Note that, experimentally, it is at present possible to study the condensate phase [16,17,18]; see also Ref. [19]. In addition, a new and nondestructive method for measuring the local magnetization of the condensate was recently proposed and successfully applied in Ref. [20]. We show that the relative angle among hyperfine components of the order parameter in the 2D case can experience strong thermal fluctuations even at low temperatures. The reason is the weakness of the spin energy of the system compared with the interaction in the density channel.
Fluctuations of this angle also lead to significant relative fluctuations of the local transverse magnetization of the condensate, which are much larger in the antiferromagnetic case than in the ferromagnetic one. This paper is organized as follows. In Section II, we give a basic formulation of the problem. In Section III, we discuss our main results for the fluctuations of the angle and spin textures and present our conclusions. II. BASIC FORMULATION We consider a harmonically-trapped quasi-2D Bose-Einstein condensate with spin F = 1. The trapping potential is given by U(r) = m ω_⊥^2 r^2 / 2, where ω_⊥ is the trapping frequency, m is the mass of the atom, and r is the radial coordinate. The system is rotated with angular velocity Ω, well below the critical rotation speed ω_⊥, and the number of atoms in the cloud is N. In this paper, we restrict ourselves to the range of temperatures much smaller than T_c. Therefore, we can neglect the noncondensate contribution to the free energy of the cloud. The total energy of the system in this approximation coincides with the energy of the condensate. For the number of condensed particles, we use the ideal gas result:

N_c(T) = N [1 − (T/T_c)^2].   (2)

At the same time,

N = ζ(2) (T_c / ħω_⊥)^2,   (3)

where ζ(2) is the Riemann zeta function of argument 2, ζ(2) = π^2/6 ≈ 1.64, and we set k_B = 1. Eqs. (2) and (3) remain accurate even for the case of interacting particles [21]. We also introduce a dimensionless temperature t = T/T_c. Since we are considering low temperatures, T ∼ 0.1T_c, the temperature dependence of the number of condensed particles can be neglected, N(T) ≃ N. The order parameter in the F = 1 condensate has three components Ψ_j (j = −1, 0, 1). The free energy of the system can be written as [22,23]

F = ∫ dS [ Ψ_j* h Ψ_j + (g_n/2)(Ψ_j* Ψ_j)^2 + (g_s/2)(Ψ_j* (F_a)_{jk} Ψ_k)^2 ],   (4)

where the integration is performed over the system area, repeated indices are summed, F_a (a = x, y, z) is the angular momentum operator, which can be expressed in matrix form through the usual Pauli matrices, and h is the one-body Hamiltonian, given by

h = −(ħ^2/2m) ∇^2 + U(r) − Ω L_z,

with L_z the projection of the orbital angular momentum on the rotation axis. The constants g_n and g_s characterize interactions in the density and spin channels and are given by

g_n = 4πħ^2 n_z (a_0 + 2a_2) / 3m,   g_s = 4πħ^2 n_z (a_2 − a_0) / 3m,

where a_0 and a_2 are the scattering lengths for colliding atoms with total spin 0 and 2, and n_z is the concentration of atoms in the longitudinal direction. In real spinor condensates, |g_s| ≪ |g_n|, since a_0 ≈ a_2. Typically, |g_s/g_n| ∼ 0.001-0.01, and this ratio can be tuned. In this paper, we study the case of a relatively dilute condensate and take g_n = 10. We will consider different values of N but a fixed value of the interaction parameter g_n. This is possible since, in the case of a single-layer cloud, we can always tune the trapping frequency in the longitudinal direction keeping g_n constant. To ensure the regime of quasi-two-dimensionality, we can also tune ω_⊥. In this case, we have to change the rotation speed to keep the dimensionless rotation speed the same, and the temperature to fix the dimensionless t. In real atomic condensates, a is approximately several nanometers. The most realistic value of N for this g_n is close to 10^3, and to illustrate the effect of N we will consider the range 10^2 ≲ N ≲ 10^4. The total magnetization of the condensate is fixed:

M = (1/N) ∫ dS (|Ψ_1|^2 − |Ψ_−1|^2).

Magnetization M is thus normalized in terms of N, and the maximum of |M| is equal to 1. One also has to take into account the normalization condition for the order parameter:

∫ dS Ψ_j* Ψ_j = N.

The spatial profiles of all the components of the order parameter in equilibrium can be found from the condition of minimum of the energy (4).
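A small numerical sketch of Eqs. (2)-(3) as reconstructed above (these are the standard ideal-gas results for a 2D harmonic trap; units with ħω_⊥ = k_B = 1 are assumed):

```python
import math

zeta2 = math.pi**2 / 6                 # Riemann zeta(2) ~ 1.645

def t_c(n_atoms: float) -> float:
    """Critical temperature T_c in trap units, from N = zeta(2) * T_c**2."""
    return math.sqrt(n_atoms / zeta2)

def condensate_number(n_atoms: float, t_reduced: float) -> float:
    """N_c at reduced temperature t = T/T_c: N_c = N * (1 - t**2)."""
    return n_atoms * (1.0 - t_reduced**2)

print(t_c(1000))                       # ~24.7 (in units of hbar*omega_perp)
print(condensate_number(1000, 0.1))    # 990.0, i.e., N(T) ~ N at t = 0.1
```

At t = 0.1 the depletion is only 1%, which is why the text neglects the temperature dependence of the condensed-particle number.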
It is also convenient to introduce the longitudinal l_z and transverse l_tr local magnetizations of the condensate: l_z = |Ψ_1|^2 − |Ψ_{−1}|^2 and l_tr = √2 |Ψ_1^* Ψ_0 + Ψ_0^* Ψ_{−1}| (11). The spin energy in this case can be represented as F_spin = (g_s/2) ∫ dS (l_z^2 + l_tr^2) (12). In this paper, we restrict ourselves only to the case of axially-symmetric phases, when the moduli of all the components of the order parameter are independent of the azimuthal angle and depend only on the radial coordinate r. Note that equilibrium vortex phases in this situation were studied in Refs. [2,3] for the spin F = 1 condensate and in Ref. [6] for the F = 2 system. For the axially-symmetric phases, each component of the order parameter can be represented as Ψ_j = f_j(r) exp[i(L_j ϕ + δ_j)], where ϕ is the polar angle, L_j is a winding number, and δ_j is a relative phase. We will denote such phases as (L_{−1}, L_0, L_1). As was shown in Ref. [3], axial symmetry of the solution implies that the winding numbers satisfy the relation L_1 + L_{−1} = 2L_0. In this case, according to Eqs. (11) and (12), the spin energy depends on the relative angle χ = 2δ_0 − δ_1 − δ_{−1}. It is important to note that only the spin contribution to the total energy (4) depends on the phases δ_j, via the spin-mixing term. For a stationary state, which is a local minimum of the Gross-Pitaevskii functional (4), the value of χ is determined by the sign of the interaction constant in the spin channel, g_s. For positive g_s (antiferromagnetic case), the minimum of F_spin is attained at χ = π, whereas for negative g_s (ferromagnetic case) χ = 0. III. RESULTS AND DISCUSSION According to the results of Ref. [3], for the antiferromagnetic state (g_s > 0), the phases (-1, 0, 1) and (1, 1, 1) are energetically favorable in the region of small and moderate values of magnetization M. Phase (-1, 0, 1) is realized at low rotation frequencies Ω and (1, 1, 1) at higher Ω. In Ref. [2], it was shown that the (0, 1, 2) state is favorable in the ferromagnetic case (g_s < 0) in a region of moderate values of Ω and M. In these phases, all three hyperfine states are populated. Fluctuations of χ are meaningful only in this case, since the χ-dependent part of the energy is identically zero if one of the components of the order parameter vanishes. In this paper, we concentrate on these three vortex states, since they are appropriate candidates for illustrating the effect of thermal fluctuations. Note that in a homogeneous spin-1 condensate atoms populate only one or two hyperfine states; they can populate three states only if the system is trapped and experiences rotation, which generates vortices. An important feature of real atomic spinor Bose-Einstein condensates is the weakness of the spin interactions compared to the interaction in the density channel (|g_s| ≪ |g_n|). At the same time, the coherence among the different components of the order parameter (angle χ) is fully determined by the spin interaction. Angle χ also influences the transverse magnetization of the condensate, as seen from Eq. (11). Note that the longitudinal component of the magnetization is independent of χ. The smallness of g_s compared to g_n leads to the fact that thermal fluctuations of the relative angle χ become significant at much lower temperatures than fluctuations of the density of particles. Therefore, at relatively low temperatures, one can assume that the moduli of all the components of the order parameter remain fixed (which can also be checked numerically), whereas χ fluctuates.
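The statement that the phases enter the spin energy and the transverse magnetization only through χ = 2δ_0 − δ_1 − δ_{−1} can be checked directly with the spin-1 matrices. The sketch below (amplitudes f_j are illustrative values of our own choosing) evaluates l_z and l_tr at a single point and compares l_tr with the closed form l_tr^2 = 2[f_1^2 f_0^2 + f_0^2 f_{−1}^2 + 2 f_1 f_{−1} f_0^2 cos χ], which follows from the axially-symmetric ansatz when L_1 + L_{−1} = 2L_0:

```python
import numpy as np

# Spin-1 matrices in the basis (m = +1, 0, -1); F_+ = F_x + i F_y.
FZ = np.diag([1.0, 0.0, -1.0])
FP = np.sqrt(2.0) * np.array([[0, 1, 0],
                              [0, 0, 1],
                              [0, 0, 0]], dtype=complex)

def local_magnetizations(f, delta):
    """l_z and l_tr for psi_j = f_j * exp(i*delta_j) at one point in space;
    the winding parts L_j*phi drop out of l_tr when L_1 + L_-1 = 2*L_0."""
    psi = np.array(f) * np.exp(1j * np.array(delta))
    l_z = np.real(np.conj(psi) @ FZ @ psi)
    l_tr = np.abs(np.conj(psi) @ FP @ psi)  # |<F_x> + i <F_y>|
    return l_z, l_tr

f1, f0, fm1 = 0.5, 0.6, 0.4  # illustrative moduli (f_1, f_0, f_-1)
for chi in (0.0, np.pi / 2, np.pi):
    # delta_0 = chi/2, delta_1 = delta_-1 = 0, so chi = 2*delta_0 - delta_1 - delta_-1
    _, l_tr = local_magnetizations((f1, f0, fm1), (0.0, chi / 2, 0.0))
    closed = np.sqrt(2 * (f1**2 * f0**2 + f0**2 * fm1**2
                          + 2 * f1 * fm1 * f0**2 * np.cos(chi)))
    print(f"chi = {chi:.2f}: l_tr = {l_tr:.6f}, closed form = {closed:.6f}")
```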
For the case of small fluctuations of χ, one can use the harmonic approximation and represent the deviation of the energy of the system from the equilibrium, δF = F(χ_0 + δχ) − F(χ_0), as a quadratic function of the deviation of the angle χ from the equilibrium, δχ = χ − χ_0: δF = |g_s| I (δχ)^2 (14), where I = ∫ dS f_1 f_{−1} f_0^2. Under these assumptions, the average square of the deviation of χ from the equilibrium is given by ⟨(δχ)^2⟩_T = ∫ (δχ)^2 exp(−δF/k_B T) d(δχ) / ∫ exp(−δF/k_B T) d(δχ) (15). Integrals in Eq. (15) can be calculated analytically. After taking into account Eq. (3), we get: ⟨(δχ)^2⟩_T = k_B T/(2|g_s| I) = (ħω_⊥ t / 2|g_s| I)(N/ζ(2))^(1/2) (16). We also introduce the quantity ∆χ = [⟨(δχ)^2⟩_T]^(1/2), which can be considered as the average deviation of the angle χ from the equilibrium. We see that ∆χ depends on the dimensionless temperature t = T/T_c, the number of particles N, and the integral I. For a given vortex phase, I is also a function of the total magnetization M. It is important to emphasize that the scaling relation (16) makes sense only if g_n is independent of N, as discussed above. In order to calculate I, we use a variational method, which we previously applied in Ref. [6] to evaluate the energies of various axially-symmetric vortex phases in the spin F = 2 condensate. In this approach, each component of the order parameter is modeled by a trial function, and the values of the variational parameters are found from the condition of minimum of the total energy. In Fig. 1 we plot the calculated dependence of ∆χ (measured in degrees) on the number of particles in the system for different vortex phases at t = 0.1 and g_n = 10. This value of g_n is close to typical experimental ones (a_0 ≈ 5 nm, n_z ≈ 2 nm^(−1)), see also the calculations of Refs. [3,6]. We assume that, for the (-1, 0, 1) and (1, 1, 1) states, g_s = 0.01 g_n and M = 0.1, whereas for (0, 1, 2), g_s = −0.01 g_n and M = 0.5. Note that ∆χ for a particular phase is independent of Ω, since I has the same property. We see that even at quite low temperatures ∆χ can be rather large, and the coherence among different components of the order parameter is practically destroyed. For smaller values of |g_s|, fluctuations of χ are, of course, even stronger. To illustrate the effect of temperature, in the inset to Fig. 1, we show the dependence of ∆χ on T for the (0, 1, 2) phase (curve 1), the (1, 1, 1) phase (curve 2), and the (-1, 0, 1) phase (curve 3) at a fixed number of atoms N = 1000, with g_s = −0.05 g_n for the first curve and g_s = 0.05 g_n for the two others. Note that ∆χ is almost independent of the total magnetization M of the condensate. As we already pointed out, fluctuations of χ lead to fluctuations of l_tr. In the harmonic approximation, one can express the average deviation of |l_tr| from the equilibrium, δ|l_tr|_T, through the deviation of χ: δ|l_tr|_T ≈ √2 f_0 {[(f_1 − (−1)^u f_{−1})^2 + (−1)^u f_1 f_{−1} ⟨(δχ)^2⟩_T]^(1/2) − |f_1 − (−1)^u f_{−1}|} (17), where u = 0 for the antiferromagnetic case and u = 1 for the ferromagnetic one. If in the antiferromagnetic state the total magnetization is not large, M ≲ 0.5, one can expect that (f_1 − f_{−1})^2 ≪ f_1 f_{−1}, and, therefore, even small fluctuations of χ lead to strong relative fluctuations of |l_tr|. At the same time, for the ferromagnetic case, u = 1 in this equation, and the relative fluctuations of |l_tr| are much smaller. We have calculated δ|l_tr|_T for different vortex phases, and our calculations revealed that δ|l_tr|_T/|l_tr| is almost independent of the radial coordinate r for the vortex phases (-1, 0, 1) and (1, 1, 1). This is due to the fact that |L_{−1}| = |L_1| for these states; therefore, f_1(r) is nearly proportional to f_{−1}(r), and, according to Eq. (15), δ|l_tr|_T/|l_tr| should only slightly depend on r.
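For orientation, here is a minimal sketch of the equipartition estimate (16) in trap units (ħ = ω_⊥ = k_B = 1). The value of the overlap integral I used below is a made-up placeholder; in the paper it is computed from the variational profiles f_j(r) of the given vortex phase, so the printed number only illustrates the order of magnitude of the effect.

```python
import math

ZETA2 = math.pi ** 2 / 6

def delta_chi_degrees(t, n_atoms, g_s, overlap_i):
    """Delta chi = sqrt(<(delta chi)^2>_T), with
    <(delta chi)^2>_T = k_B T / (2 |g_s| I) and k_B T_c taken from Eq. (3),
    all in trap units (hbar = omega_perp = k_B = 1)."""
    k_t = t * math.sqrt(n_atoms / ZETA2)          # k_B T = t * k_B T_c
    variance = k_t / (2.0 * abs(g_s) * overlap_i)  # Gaussian (harmonic) average
    return math.degrees(math.sqrt(variance))

# g_n = 10 and g_s = 0.01 g_n as in the text; overlap_i = 5 is a placeholder.
print(f"Delta chi ~ {delta_chi_degrees(t=0.1, n_atoms=1000, g_s=0.1, overlap_i=5.0):.0f} degrees")
```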
In Fig. 2 we present δ|l_tr|_T/|l_tr| as a function of the total magnetization of the condensate for the (-1, 0, 1) state at t = 0.1, g_s = 0.01 g_n (antiferromagnetic case), and N = 1000. We see that relative fluctuations of the transverse magnetization can be significant even at low temperature. The value of δ|l_tr|_T/|l_tr| decreases with increasing M. This result is natural, since the condensate becomes more polarized with growing M. The absolute value of δ|l_tr|_T also remains sizable. Although the fractional quantity δ|l_tr|_T/|l_tr| grows with decreasing M, the value of |l_tr| itself becomes smaller. Therefore, we found that the most appropriate value of M for observing fluctuations of the transverse magnetization is around M = 0.2, where both δ|l_tr|_T/|l_tr| and |l_tr| are high: δ|l_tr|_T/|l_tr| ≳ 0.1, whereas l_tr is comparable to the longitudinal magnetization l_z in the fully polarized state at M = 1, where it should be easily detectable experimentally. The value of δ|l_tr|_T/|l_tr| also depends on the vortex phase; we found that in the (1, 1, 1) state it is even much larger than in the (-1, 0, 1) state. We have also calculated δ|l_tr|_T for the ferromagnetic (0, 1, 2) phase. As can be expected, in this case, relative fluctuations of |l_tr| are much weaker. Physically, this is because |l_tr| is proportional to the ferromagnetic order parameter [6], which is responsible for the ferromagnetic ordering. Therefore, one can expect that in the ferromagnetic phase this order parameter is more robust with respect to thermal fluctuations than in the antiferromagnetic one. In addition, the average deviation of |l_tr| from the equilibrium is negative, and its modulus grows with increasing M, in contrast to the antiferromagnetic system. Thermal fluctuations should also be important in the case of the F = 2 condensate, where there are two interaction constants in the spin channel and two characteristic angles. Therefore, one can expect more complicated behavior compared to the F = 1 condensate. For instance, in a homogeneous F = 2 system, a cyclic state can have the lowest energy; in this case atoms populate three hyperfine states, and the spin energy depends on the coherence among them. The fluctuation problem for this system was analyzed in Ref. [24]. A new method to create such entangled states in a spin-1 condensate was recently applied experimentally in Ref. [19], where microwave energy was injected into the system. As a result, particles redistribute from the spin −1 state to the spin 0 and 1 states, and all three magnetic sublevels become populated. The spin-mixing dynamics in the F = 1 condensate was studied theoretically in Ref. [25]. Note that in Eq. (14) we have assumed that the fluctuating χ is spatially uniform, which is not true in the general case. However, spatial gradients of χ give an additional contribution to the kinetic energy of the system, which is much larger than the spin energy. Therefore, gradients of χ result in a rather large increase of the total energy, and we can neglect them for the trapped system, at least for our range of parameters. In other words, the healing length for χ far exceeds the Thomas-Fermi radius of the system, and, therefore, although χ fluctuates inside the cloud, it remains nearly constant [24], except for the surface layer, where the density of particles is low.
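The contrast between the two magnetic cases can be reproduced from the closed form for l_tr given above. In the sketch below (amplitudes chosen by us, with f_1 ≈ f_{−1} so that (f_1 − f_{−1})^2 ≪ f_1 f_{−1}), the same phase deviation δχ ≈ 17° changes |l_tr| by more than 100% around the antiferromagnetic minimum χ_0 = π, but only by about 1% around the ferromagnetic minimum χ_0 = 0:

```python
import numpy as np

def l_tr(f1, f0, fm1, chi):
    """Closed form for the local transverse magnetization."""
    return np.sqrt(2 * (f1**2 * f0**2 + f0**2 * fm1**2
                        + 2 * f1 * fm1 * f0**2 * np.cos(chi)))

f1, f0, fm1 = 0.45, 0.6, 0.40  # nearly equal f_1, f_-1: small (f_1 - f_-1)^2
dchi = 0.3                      # a modest phase fluctuation (~17 degrees)
for label, chi0 in (("antiferromagnetic (chi_0 = pi)", np.pi),
                    ("ferromagnetic     (chi_0 = 0) ", 0.0)):
    base, pert = l_tr(f1, f0, fm1, chi0), l_tr(f1, f0, fm1, chi0 + dchi)
    print(f"{label}: |l_tr| {base:.4f} -> {pert:.4f} "
          f"({abs(pert - base) / base:.1%} relative change)")
```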
Thermal fluctuations of χ should also be noticeable in three-dimensional condensates at low and moderate temperatures. In general, the dependences of the number of condensed particles on the reduced temperature, and of the critical temperature on the total number of atoms, in the 3D case are similar to those in the 2D system, described by Eqs. (2) and (3). The main difference is in the powers of t and N on the right-hand sides of Eqs. (2) and (3), which become 3 and −1/3 (ħω_⊥/k_B T_c ∼ N^(−1/3)), respectively. However, in this case one has to take into account accurately the possibility of long-wavelength fluctuations of χ in the longitudinal direction and the formation of kinks [24]. IV. CONCLUSIONS In this paper, we have studied the effect of thermal fluctuations on the coherence among different components of the order parameter in a quasi-2D rotating F = 1 Bose-Einstein condensate, when all three hyperfine states are populated. Different axially-symmetric vortex phases were considered. We have shown that the deviation of the relative phase χ = 2δ_0 − δ_1 − δ_{−1} from the equilibrium can be very significant even at low temperatures, much smaller than T_c. Fluctuations of the relative angle induce sizable fluctuations of the spin texture, namely, the local transverse magnetization of the condensate. We have shown that these fluctuations are much more pronounced in the antiferromagnetic case than in the ferromagnetic one. The direct and nondestructive method for imaging the spatial magnetization of a spinor BEC recently proposed in Ref. [20] (or some modification of it) can be applied to the experimental study of thermal fluctuations of spin textures, since it enables multiple-shot imaging, and one can directly observe the dynamics of a single sample. Fig. 1. Dependences of ∆χ (in degrees) on the number of particles in the system for different vortex phases at a fixed value of the interaction constant g_n = 10 (see the text) and t = 0.1. In the (-1, 0, 1) phase, g_s = 0.01 g_n, M = 0.1; in the (1, 1, 1) state, g_s = 0.01 g_n, M = 0.1; in the (0, 1, 2) state, g_s = −0.01 g_n, M = 0.5. The inset shows ∆χ as a function of temperature for the (0, 1, 2) phase (curve 1), the (1, 1, 1) phase (curve 2), and the (-1, 0, 1) phase (curve 3) at the same values of M and g_n. The number of atoms is N = 1000; the interaction constants are g_s = −0.05 g_n for the first curve and g_s = 0.05 g_n for the two others.
2019-04-14T02:10:39.348Z
2006-02-06T00:00:00.000
{ "year": 2006, "sha1": "300772653c3ffbe6e344c0f1ea4b2a29f2a0dd20", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0602119", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "300772653c3ffbe6e344c0f1ea4b2a29f2a0dd20", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
133647341
pes2o/s2orc
v3-fos-license
Mesozoic resource potential in the Southern Permian Basin area: the geological key to exploiting remaining hydrocarbons whilst unlocking geothermal potential It is generally accepted that hydrocarbon exploration in northern Europe has reached a mature stage. A basin's maturity is defined by the underlying number of new discoveries and the declining production rate of mature fields (SPE 2015). For geoscientists, a mature basin has well-defined characteristics in terms of, for example, reservoir presence or trap formation (e.g. Byrne 2012). It is interesting, therefore, to note how much is still unknown about certain stratigraphic intervals in northern Europe. The Mesozoic overburden of the Southern Permian Basin (sensu Maystrenko et al. 2008; Doornenbal & Stevenson 2010) continues to provide fresh insights into the geological history of an area where, as the name suggests, historical hydrocarbon exploration has focused on the Paleozoic. The aim of this Special Publication is to increase knowledge of the Mesozoic overburden as a driver for further hydrocarbon exploration/production and the development of new geothermal energy sources. The succeeding chapters are introduced by tectonic framework overviews that give a context for the papers that follow. The remaining articles are organized in approximate stratigraphic order, from old to young, and include a variety of examples from semi-regional to localized field or sub-basin studies. An overview of the study area for all the following chapters is given in Figure 1. It is recognized that the overprinting Mesozoic systems have different naming conventions across the study area (e.g. the Central European Basin System sensu Littke et al. 2008; Maystrenko et al. 2008). Here we use the title 'Southern Permian Basin area' to emphasize the resource opportunities associated with a region that has, at least in large part, been associated with hydrocarbon exploration, and very large gas fields, in the Permian Rotliegend. For our purposes, this includes part of the Polish Trough, the highs surrounding the basin (e.g. the Ringkøbing-Fyn High), and analogous elements of the Danish and Norwegian offshore. In this introduction, an overview of the geological history and resource base of the area is presented as a framework for the chapters that follow. A description of further prospective resources is then offered (oil and gas plus geothermal potential).
A generalized and simplified overview is presented as a framework for the following papers. The time period from 500 Ma (Cambro-Ordovician boundary) to 300 Ma (latest Carboniferous) saw a period of large-scale plate reorganization which was fundamental to the development of the NW European basins (e.g. Ziegler & Dèzes 2006; Krawczyk et al. 2008; McCann 2008c; Pharaoh et al. 2010). In Ordovician-Devonian times (c. 460-380 Ma), the Caledonian collision led, through a series of phased events, to the amalgamation of the Avalonian, Laurentian and Baltica plates which formed Laurussia (e.g. Cocks et al. 1997; Cocks 2000; Holdsworth et al. 2002; Krawczyk et al. 2008; Smit et al. 2016). To make a simplistic statement, for what was in reality a complex structural situation, the Caledonian Front between Avalonia and Baltica is expressed as the Trans-European Fault Zone in northern Germany/southern Denmark and as the Tornquist-Teisseyre Zone through NE Germany and Poland, with the relict crustal domains defined by gravity/magnetic data and long-offset seismic studies (e.g. EUGENO-S 1988; Zielhuis & Nolet 1994; Thybo 1997; Abramovitz & Thybo 2000; Thybo 2001; Banka et al. 2002; Grad et al. 2002, 2009; Yegerova et al. 2007; Krawczyk et al. 2008; Kaban et al. 2010; Maystrenko & Scheck-Wenderoth 2013). This sutured margin and its related structures continued to have a strong influence on the subsequent Mesozoic sequence, as discussed in the Pomeranian area by Deutschmann et al. (2018) and Seidel et al. (2018). In Late Devonian-Permian times (c. 370-270 Ma), the final stages of collision between Gondwana to the south and Laurussia to the north occurred, leading to the amalgamation of Pangaea and the generation of the Variscan mountain belt, and associated structures, in Central Europe (e.g. Gast 1988; Ziegler 1990b; Kroner et al. 2008; Maystrenko et al. 2008). The foreland of the Variscides became the main Carboniferous basin, allowing the deposition of large-scale deltaic and marine systems associated with the coal-rich Westphalian intervals and their related gas-prone source rocks (e.g. Kombrink et al. 2010).
The gradual fill of that basin, combined with changes in palaeogeographical setting, access to marine gateways and global climate shifts, led to the deposition of continental sequences of the Rotliegend in the Mid-Late Permian followed by thick evaporites in the late Permian-Early Triassic (e.g. Ziegler 1990a; Verdier 1996; Geluk 2005; Peryt et al. 2010; Fryberger et al. 2011; McKie 2017). These evaporite sequences later (typically, from the Early-Mid Triassic onwards) mobilized into significant salt pillows, walls and diapirs which impacted Mesozoic depocentres and allowed hydrocarbon trap formation (as discussed by Bouroullec et al. 2018; Hernandez et al. 2018; van Winden et al. 2018). The Mesozoic structural evolution of the Southern Permian Basin area is defined by the relict foreland basin and subsequent tectonic movement along the terrane boundaries (Thybo 2001; Pharaoh et al. 2010). For example, the Triassic is notable for the generation, or accelerated evolution, of various overprinting rift systems (Thybo 2001). These led to thick Triassic sequences in areas such as the Glückstadt Graben (Maystrenko et al. 2005) and the Horn Graben (e.g. Kilhams et al. 2018), with subsequent thick Jurassic sequences in the Central Graben and adjacent areas (e.g. Verreussel et al. 2018). These fast rifting phases, and variable filling of the associated basins, have an impact on aspects such as overpressure distribution related to depth, reservoir quality and fluid fills (e.g. Peeters et al. 2018). Subsequent inversion and erosion phases led to the exposure of some Triassic intervals. The Jurassic sequence is, however, only preserved in areas of the basin that were protected from a late Jurassic-Early Cretaceous inversion episode (related to the Alpine collisional phase: e.g. Rosenbaum et al. 2002; Pharaoh et al. 2010) or areas that only suffered from subsequent Late Cretaceous-Cenozoic inversion (related to further southern European plate convergence: Dèzes et al. 2004; Kley & Voigt 2008; Pharaoh et al. 2010; Kley 2018). In The Netherlands, this includes discrete rifts such as the Roer Valley Graben/West Netherlands Basin and the Central Graben (e.g. Winstanley 1993; de Jager et al. 1996; de Jager 2003; Verreussel et al. 2018). In Germany, Jurassic preservation is often associated with halokinetic rim synclines and other localized areas such as the Lower Saxony Basin (Schwarzkopf 1990; Baldschuhn et al. 1996; Doehler 2005; Lott et al. 2010; Bruns et al. 2014). Although the Cretaceous interval suffered various discrete inversion phases (e.g. Vejbaek et al. 2010; Kley 2018; Krzywiec et al. 2018; Wolf et al. 2018), which allowed hydrocarbon traps to form, there was a general basin sag during this time. This, coupled with rising eustatic sea level (e.g. Miller et al. 2005; Ramkumar 2016) and generally warm global temperatures (e.g. Steuber et al. 2005), led to an evolution from a clastic-dominated succession (marine shorefaces/pelagic mudstones/deepwater sandstones: e.g. Milton-Worssell et al. 2006; Jeremiah et al. 2010; Vis et al. 2018; Zwaan 2018) to chalk-rich seas (including important reservoir units across the region: e.g. van Lochem 2018). The Cenozoic then saw a switch back to clastic conditions, with shallow-marine sands deposited around the basin edges (e.g. Knox et al. 2010). Historical resource focus: conventional hydrocarbons. Boigk (1981), van Hulten (2009) and Breunese et al. (2010) provide detailed overviews of the hydrocarbon exploration and production history of this basin.
Here a short, simplified summary is presented to illustrate the relative historical importance of the Paleozoic and Mesozoic intervals to hydrocarbon exploration. It is recognized that there are further Mesozoic geological resources that are not covered here or in subsequent chapters. These include, but are not limited to, quarrying (e.g. limestone at Winterswijk in the eastern Netherlands; Faber 1959) and salt extraction (e.g. Wassmann & Brouwer 1987). The discovery and use of hydrocarbons in NW Europe can be traced back to at least 1628, when oil products were used as cart lubrication and medical remedies in the region of Wietze, Lower Saxony (Boigk 1981; Arndt 2017). Although oil remained a useful product throughout the subsequent centuries, it was not until the late 1930s that hydrocarbon exploration started to boom in the region with the discovery of the Bentheim gas field (Zechstein), and subsequently (in 1943) the Schoonebeek oil and gas accumulations (van Hulten 2009). At that time exploration was focused on Mesozoic and Permian Zechstein targets with, for example, discoveries at the Goldenstedt (in 1959: Zechstein), Adorf (in 1959: Triassic) and Hengstlage (in 1963: Triassic) fields (Breunese et al. 2010). However, the discovery of a series of giant Rotliegend gas fields was to change the focus of explorers across the basin. In 1959, the Slochteren-1 well proved up 2900 Bcm (billion cubic metres) (c. 102 Tcf (trillion cubic ft)) at Groningen, at that time the largest gas field in the world (Grötsch et al. 2011). This was followed up by the 1965 Groothusen and 1969 Salzwedel discoveries, the latter containing 200 Bcm (c. 7.1 Tcf) of gas, the largest field in Germany (Breunese et al. 2010). The very large hydrocarbon volumes associated with the Paleozoic have subsequently made it the key economic driver for operators. However, the Mesozoic has also had an important role to play in the Southern Permian Basin area. Figure 2a illustrates that around 10% of gas reserves are hosted within Mesozoic reservoirs. These are typically, but not always, associated with the Triassic interval, being charged by Westphalian Type III coals. Field examples include Apeldorn (Kus et al. 2005), Caister (Ritchie & Pratsides 1993), De Wijk (Bruijn 1996; Goswami et al. 2018) and F15-A (Fontaine et al. 1993). Perhaps of more consequence, Figure 2b illustrates that around 90% of the basin's oil reserves are found in the Mesozoic interval. Figure 2c, d illustrates that these Mesozoic conventional gas and oil reserves, within the Southern Permian Basin, are predominantly found in The Netherlands, Germany and Denmark (97% for gas and 99% for oil), with only minor amounts associated with the UK and other countries such as Poland. This geographical distribution is a result of the geological history described above, including the spatial extent of rift systems (e.g. the Central Graben and associated Jurassic source rocks in a discrete area running through the Dutch, German and Danish offshore), areas of inversion, source-rock presence and charge timing (cf. Pletsch et al. 2010). However, this does not rule out further conventional hydrocarbon discoveries in the Mesozoic of, for example, Poland or other historically less attractive areas (e.g. Kilhams et al. 2018; Kortekaas et al. 2018). An example, from The Netherlands, of the number of historical exploration wells and their stratigraphic targets (post-1940) is shown in Figure 3a.
Early exploration, as described above, focused on the Triassic and Zechstein intervals, with the former considered the most attractive. However, after the 1959 Groningen discovery, there was a distinct upswing in both the annual total number of exploration wells and the proportion of Paleozoic targets, which continued throughout the 1960s and 1970s. In the period from 1980 to 1990, a combination of high to moderate oil prices, new 3D seismic technology and a desire to explore new plays saw an increase in the annual total number of exploration wells to record numbers. In that period, the proportion of Mesozoic targets also increased, and the success rate of all exploration wells jumped from around 35 to 65% (Breunese & Rispens 1996). Figure 3b illustrates which Mesozoic interval these Dutch wells targeted (post-1940). Historically, the early (pre-1986) discoveries in The Netherlands were associated with the relatively shallow Cretaceous structures (and, to a lesser extent, the Jurassic) which could be defined and drilled using 2D seismic data (Breunese & Rispens 1996). With the advent of 3D seismic technology, there was a switch to deeper Triassic targets and those prospects that could be understood and de-risked by new techniques such as amplitude analysis (e.g. Breunese & Rispens 1996; Bruijn 1996). Note that in the period 2011-16 there was a resurgence in Cretaceous exploration drilling. This is associated with the chalk interval in the Dutch Central Graben, where the application of advanced seismic inversion techniques, developed in the adjacent Danish offshore, gave confidence to chalk reservoir quality and fluid fill predictions. For Germany, the historical discovered recoverable volumes per era are shown in Figure 4a and per Mesozoic period in Figure 4b. This illustrates a similar early (pre-1964) focus on Mesozoic exploration before a switch to Paleozoic targets and subsequent discoveries. With the exception of the 1980 Mittelplate discovery (cf. Doehler 2005), only very small volumes (<10 MMboe UR (million barrels of oil equivalent ultimate recovery)) have been discovered in the Mesozoic since 1968. This is likely to be due to a combination of geological factors (e.g. limited Jurassic source-rock presence: cf. Pletsch et al. 2010) and the more limited acquisition of 3D seismic data in comparison to The Netherlands. The historical perception of relatively small-volume promise in Germany has also led to high oil-price sensitivity, with an increase in discoveries related to peaks in global oil prices (e.g. between 1978 and 1980 the Brent crude price (inflation adjusted) rose from $50/bbl (barrel) to $105/bbl; by 1985 the inflation-adjusted price was around $31/bbl: Macrotrends 2017). Prior to the latest 2014-16 oil price fall (from c. $105/bbl to $30/bbl), there had been significant interest from smaller operators in redeveloping (mainly Jurassic) historical oil fields and chasing exploration upside (including unconventional plays). The oil-price fall and a general social movement against unconventionals caused companies to downscale activities or exit the country. Examples include PRD Energy at Volkensen (Arndt 2015; Market-Wired 2015), Kimmeridge GmbH in Lower Saxony (iDeals 2016) and Central Anglia A/S in the Sterup licence, Schleswig-Holstein (Central Anglia 2016).
Future resource focus: hydrocarbons and geothermal energy. It is clear after at least 70 years of intense exploration across the Southern Permian Basin area that there is increased pressure to apply new geological ideas or new techniques to the exploration and exploitation of conventional hydrocarbons. The development of unconventional oil plays also gives an opportunity to extend hydrocarbon production within the basin. However, environmental concerns and societal pressure have, in recent years, seen an upsurge in government support for geothermal projects. In all these resource scenarios, a sound geological understanding is key to successful exploitation. Here, each of these existing and future resources is considered within the framework of the chapters that follow. Maximizing production efficiency from existing hydrocarbon fields. It has become standard practice in conventional oil and gas fields across the Southern Permian Basin area and elsewhere, if considered economic, to drill extra infill wells and apply enhanced recovery techniques to achieve the highest possible hydrocarbon yield (e.g. Lake 2010). Such techniques require a good geological understanding of how, for example, sedimentary layers cause differential fluid flow. Porter et al. (2018) give an example from the Cretaceous shoreface reservoirs of the Rotterdam oil field, including a comparison to analogous outcrop examples in southern England. Additionally, Vis et al. (2018) consider how local tectonic phases influenced reservoir distribution (which, therefore, has a possible impact on hydrocarbon extraction strategies) in the Dutch Schoonebeek Field. Typically, enhanced recovery techniques have been focused on oil reservoirs via water injection. However, gas injection is increasingly being used. Goswami et al. (2018) give an example of enhanced recovery via nitrogen injection at the De Wijk Field (Triassic reservoir interval), illustrating how new technologies can be utilized to increase gas yield. It is possible that existing or future technologies, such as cheaper drilling or enhanced recovery techniques, could unlock further resources. Exploring for further conventional hydrocarbon resources. An estimate of the remaining prospective conventional hydrocarbon resources for The Netherlands is shown in Figure 5, which indicates considerable remaining potential in all the Mesozoic intervals, including Triassic gas (Fig. 5a) and both Jurassic (Fig. 5b) and Cretaceous oil (Fig. 5c). Chasing further exploration opportunities in a mature basin demands a high level of geological understanding of all the play elements. This starts with consideration of the potential of large existing datasets. An example is presented by van Kempen et al. (2018), utilizing public well log data to consider Triassic Bunter interval reservoir properties and trends across The Netherlands. Detailed consideration of existing seismic data can also reveal interesting features. For example, Strozyk et al. (2018) identify a series of pockmarks which could refine the gas-charge timing story in the eastern Netherlands. It is also possible to question the fundamental tectonic framework of an area and the impact this might have on exploration models. An example is presented by Krzywiec et al. (2018) for the Polish Trough. Dogmas around reservoir (e.g. Kortekaas et al. 2018 for the Bunter of the Dutch Northern Offshore; Zwaan 2018 for the Cretaceous of the adjacent Norwegian and Danish offshore) or source-rock development (e.g. Kilhams et al.
2018 for the Triassic play of the German Horn Graben) can be challenged in areas perceived to be fallow through the integrated evaluation of well results and conceptual geological models. Conventional hydrocarbon opportunities remain. For example, for the aforementioned German Jurassic oil play opportunities, the geological elements remain the same even if the economic boundaries have shifted. Exploring for unconventional hydrocarbon resources. The presence of various Mesozoic source rock intervals and associated high-TOC shale units (in addition to similar Paleozoic intervals) suggested that the recent unconventional oil and gas boom in the USA could be transferred to NW Europe (e.g. Schulz et al. 2010). A combination of societal pressure, economics and geological factors means that this has not yet happened (e.g. Selley 2012; Johnson & Boersma 2013; Weijermars 2013). However, geoscience is central to the identification of resources and can contribute to the continuing debate over their extraction. Stock & Littke (2018) give an example of how the Posidonia shale may be associated with unconventional resources in the Lower Saxony Basin. Efficient exploration for, and exploitation of, geothermal energy resources. Northern Europe is often considered to be at the forefront of the energy transition from hydrocarbons to renewable sources. Here, we consider the role that geoscience can play in unlocking further geothermal energy reserves. In the area of the Southern Permian Basin, there are a number of ongoing initiatives to promote geothermal energy. Figure 6 shows an estimate of the geothermal district heating production and targets for various countries in the study area and, although not specific to the Mesozoic, gives an example of the gap to potential for this energy source. As suggested by Franz et al. (2018), there is considerable potential for the use of geothermal energy in Germany, a statement which can also be extended to The Netherlands and Poland. The resources of, for example, Denmark and the UK are not yet fully defined. In some areas of the basin, there is social and economic demand for clean energy (mainly for urban greenhouse heating). Pilot projects, often with a combination of Triassic and Cretaceous targets, have been undertaken which have highlighted the importance of sound geological understanding, through facies and porosity prediction, in achieving economic water production rates (e.g. Pluymaekers et al. 2012; De Vaal 2017; Franz et al. 2018; Vondrak et al. 2018). For example, Figure 7a illustrates the growth of geothermal systems in The Netherlands (both in operational projects (2016, n = 12) and associated energy production). This effort began with a 2005-07 pilot project by the A+G van den Bosch horticultural company at Bleiswijk (West Netherlands Basin), with the aim of utilizing heat for growing via water production from the Upper Jurassic-Lower Cretaceous interval (Platform Geothermie 2017; VleesTomat 2017). The success of this project has led to an acceleration of geothermal investment (focused on the Mesozoic, as demonstrated by all the drilled projects (2016, n = 16) in Fig. 7b), aided by a wealth of subsurface data in the public domain (see also Vondrak et al. 2018). Various projects have also been undertaken in NE Germany. For example, in Neubrandenburg, production has been achieved from Triassic Rhaetian sandstones (Wolfgramm et al. 2009; Franz et al. 2018), with further potential in the Middle Jurassic fluvial sandstone sequence (Franz et al. 2018).
Various regional and local governmental organizations are both supporting further research into geothermal energy (with a number of technical and mapping tools now available: e.g. TNO 2013; Peta 2015) and subsidizing projects (e.g. EBN 2017). It is clear that a large range of techniques originally developed for the hydrocarbon industry, such as basin modelling (e.g. Nelskamp & Verweij 2012) and seismic attribute analysis (e.g. Dierkhising 2015), can also be applied to geothermal projects to improve pre-drill prediction of, for example, reservoir temperature and fluid fill. Conclusions The Mesozoic of the Southern Permian Basin area continues to provide fresh insights which can be applied to energy resource exploitation and identification. The general tectonic history of this area is considered well known, but the papers that follow illustrate that new observations can still be made. It is demonstrated throughout this publication that the key to unlocking remaining resources is built on a foundation of solid geological understanding. This applies to the efficient production of discovered hydrocarbons, especially when attempting to apply new engineering techniques, as well as to defining and exploring for new conventional reserves. If the exploitation of unconventional hydrocarbons becomes socially acceptable in northern Europe, an underlying geological understanding of various Mesozoic-age units will become increasingly important. The same mantra applies to the development of geothermal energy reserves, where accurate prediction of porosity and geothermal gradients is one of the many keys to a successful project. It is hoped that this Special Publication will spur further exploration for, and efficient extraction of, the remaining resources in the basin. The editorial team extends its appreciation to all the authors and collaborators who contributed to this book. The time and effort taken represents considerable determination and perseverance. Everyone who contributed to the original Geological Society conference is also appreciated; the open and collaborative atmosphere formed the basis of this book, with special thanks going to: Laura Griffiths, Gary Hampson, Howard Johnson, James Maynard, Robert Schöner, Martin Wells and Sarah Woodcock. We thank various managers who allowed the editors and authors to spend time making a significant contribution. Ben Kilhams particularly thanks Max Brouwers, Ramon Loosveld, Carlo Nicolai and Edwin Verdonk, who have supported the venture through their patience and sponsorship. Sincere thanks go to all the reviewers who took time and care to give excellent, constructive feedback: Oscar Abbink, Kresten
2019-04-27T13:08:33.947Z
2018-05-03T00:00:00.000
{ "year": 2018, "sha1": "9257e4cbbcb384a604e172df369950aba37814e9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1144/sp469.26", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "202ffba311ccbeb4f3ae2062ef3c353b7c0dbd9c", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
18197389
pes2o/s2orc
v3-fos-license
Status of Hearing Loss and Its Related Factors among Drivers in Zahedan, South-Eastern Iran Objective: This study aims to investigate loss of hearing among drivers in Zahedan, southeastern Iran. Patients and Methods: This study was carried out on a total of 1836 drivers in Zahedan in 2013. Loss of hearing in both ears was measured at 250, 500, 1000, 2000, 3000, 4000, 6000, and 8000 Hertz. The demographic variables, blood parameters and anthropometric data were recorded through interviews and examinations. Data were analyzed in Stata 12 software using paired t-tests, the McNemar test and multiple logistic regression. Results: The mean age was 38.2±9.8 years. The highest mean hearing thresholds in the right and left ears were 25.7±9.1 and 27.7±9.1 dB, respectively, at 250 Hz. There was a significant difference between left and right ear hearing thresholds at all frequencies (P<0.001), and the highest difference occurred at 250 Hz. The hearing threshold in the left ear was greater than in the right ear at all frequencies. Hearing threshold was correlated with marital status, type of license and vehicle, smoking, age, and driving history at all frequencies (P<0.01), and was also significantly correlated with blood sugar and cholesterol levels at 250 and 500 Hz in both left and right ears (P<0.01). Conclusion: In conclusion, high levels of noise increase the hearing threshold, with the greatest damage to the left ear. Therefore, drivers should be periodically examined for ear damage in accordance with the variables affecting loss of hearing. Moreover, drivers must be educated about the usage of appropriate ear-plugs during driving, especially for the left ear. Introduction Noise is a health-threatening factor, which can affect the safety and efficiency of people in their workplace (Borchgrevink, 2003; Palmer et al., 2002). Through communication problems and a lack of alertness and focus that lead to stress and fatigue, noise can cause incidents, accidents and other occupational health/psychological consequences (Dalton et al., 2003; Kramer et al., 2002; Sprince et al., 2003), and this is often overlooked (da Silva et al., 2006; Picard et al., 2008). In terms of road accident mortality, Iran is among the world's leading countries: according to statistics published by the Legal Medicine Organization, on average, 25,000 people lose their lives in road traffic accidents each year (Soori et al., 2009), and hearing loss can be a factor in creating road accidents (Ouis, 2001). The most important complication associated with loud noise is loss of hearing, which can lead to masking of sounds and impaired communication, resulting in accidents. Thus, preventing hearing loss and screening those at risk can potentially reduce accidents. Currently, it is considered a major health issue and one of the top 10 work-related diseases, and it is a major occupational disease in Europe (Nelson et al., 2005; Sulkowski et al., 2003). A study conducted in Poland showed that during 1998-2011, about 17.7% of occupational diseases were attributable to hearing loss (Szeszenia-Dąbrowska & Wilczyńska, 2013). Noise is considered the third most dangerous pollutant in major cities (Maleki et al., 2010), and road traffic is the most important cause of urban noise (Bluhm et al., 2004). This problem is further exacerbated by the growing number of vehicles on urban road networks and their sluggish speed due to heavy traffic (Ouis, 2001).
Thus, vehicles can be considered both a moving source (traffic) of environmental noise and a source of occupational noise for drivers, which can affect their hearing. Drivers are exposed to many physical and physiological stresses such as environmental noise, vibration, temperature fluctuations (due to the opening and closing of doors and working away from home in different seasons), ergonomic problems and safety risks such as accidents (Clark & Stansfeld, 2007; Jiao et al., 2004). Therefore, loss of hearing can be much more severe and have more complications in this group of people compared to other occupations. Many studies have been conducted in industrial environments on occupational hearing loss in Iran and around the world (Aliabadi et al., 2014; Leensen & Dreschler, 2015; Nomura et al., 2005), but only a few on loss of hearing in inner- and intercity drivers. A study by Lopes et al. showed an increased hearing threshold in 22% of drivers at 3000 to 6000 Hz, which increased further with aging (Lopes et al., 2012). Berjis et al. also reported that, at 2000 Hz, the left ear hearing threshold was significantly higher than that of the right ear among Heavy Goods Vehicle (HGV) drivers (Berjis et al., 2011). Additionally, some studies have found the prevalence of hearing loss among drivers to range from 18.1% to 55.4% (Janghorbani et al., 2009; Santos & Castro Júnior, 2009). Since vehicles in Western countries are much more advanced and create very little noise, only a few studies have been conducted there on loss of hearing in drivers. However, a number of studies have been conducted in this area in developing countries. In a study in India, hearing loss in HGV drivers was 3 times greater than in taxi drivers (Merchant et al., 2000). Another study in Iran (Janghorbani et al., 2009) reported a prevalence of bilateral noise-induced hearing loss of about 18.1%, and the prevalence rates were higher in the left ear than the right ear. On the other hand, there is no comprehensive study regarding drivers in the South-East of Iran, Sistan & Baluchestan province. Furthermore, this province is located at the border of Pakistan and Afghanistan, where thousands of HGV drivers transport goods from Iran to Pakistan or Afghanistan and vice versa. Therefore, a study of hearing status and its related factors among drivers could be of utmost importance. Accordingly, the present study aimed to provide useful evidence on drivers' hearing status and to identify groups at greater risk. Material Studied A total of 1836 inner- and intercity drivers in Zahedan with a minimum of 5 years driving history were studied in terms of physical ear examination and audiometry in 2013. Study exclusion criteria were diseases of the ear, and a work history in noisy environments with subsequent loss of hearing. Participants included drivers with license classes A and B (a classification of driving licenses in Iran), selected according to a convenience sampling method. The demographic variables, history of hearing problems and work in other environments, blood parameters and anthropometric data were recorded in data registry forms through in-depth interviews and medical examinations and tests. Indeed, drivers were interviewed at a predefined Health Center (Hanane), and a questionnaire on the abovementioned information was completed by trained interviewers for each subject. As we wanted to evaluate the relation between hearing loss and some blood parameters, participants were asked to refer to the regional laboratory.
The drivers were additionally provided with an introduction letter for blood sampling (12-h overnight fasting). One day after the interview, a blood sample was taken from each driver at the laboratory and blood parameters were tested. The weight of the subjects was measured with a standardized and reliable scale, and their height was measured in centimeters in a standing position using a height gauge. After ear examination by an ear, nose and throat specialist, audiometry was performed by an audiometric clinician, and loss of hearing in both the right and left ears was measured at 250, 500, 1000, 2000, 3000, 4000, 6000, and 8000 Hz and recorded on audiogram sheets. The presence of a hearing loss was defined as a pure-tone average of thresholds at 250, 500, 1000, 2000, 3000, 4000, 6000, and 8000 Hz greater than 25 dB HL in the worse ear; the worse ear was chosen in order to include people with at least one affected ear (Rosenbaum & Stewart, 2004). The frequency distribution of hearing loss in the right, left and both ears was determined separately, and the relationship of hearing loss with demographic parameters and some blood factors was studied using a logistic regression test (Table 1). The prevalence of hearing loss varied from 11.9% among drivers aged 20-29 years to 45.3% in those over 50 years of age. A significant relationship was found between age and hearing loss in the right ear and in both ears as well (P<0.05). Moreover, there was a significant relationship between hearing loss and type of vehicle (P = 0.042). In addition, passenger vehicle drivers had lower hearing loss than other drivers (Table 1). Although HGV drivers suffered loss of hearing more than non-HGV drivers, and illiterate drivers more than highly educated ones, no significant relationship was found between loss of hearing and education level or type of vehicle (P>0.05). Similarly, the relationship of hearing loss with marital status was not significant (P>0.05). Hearing loss in smoker drivers was greater than in non-smokers, with a significant difference (P=0.038) (Table 1). Although the odds of hearing loss reduced with increasing blood sugar, cholesterol and triglycerides, none of these factors had a significant relationship with hearing loss (P>0.05). The odds of hearing loss increased with increasing BMI, but not significantly (P>0.05) (Table 1). It must be mentioned that the mismatch between the presented table and the total sample is due to missing data for some variables. The frequency distribution of hearing loss and the mean hearing threshold in both ears in drivers at different frequencies are shown in Table 2. Paired t-tests showed a significant difference in hearing loss between the left and right ears at 250, 500, 1000, 2000, 4000, and 8000 Hz, and in every case hearing loss in the left ear was greater than in the right one; the highest hearing loss in both ears was at 250 Hz. Discussion The present study showed a prevalence of bilateral hearing loss in 23.8% of drivers, and about a quarter of drivers had hearing loss, which is higher than that previously reported in Iran (Janghorbani et al., 2009). There are few similar studies in Iran, and this is perhaps the first of its kind in the southeast. Clearly, the prevalence of hearing loss depends on how it is defined, the diagnostic methods, the frequencies studied, the age, gender and socioeconomic status of drivers, and the study population in epidemiological studies, which limits comparison between studies.
However, previous studies conducted on the general population in Norway and the USA (Agrawal et al., 2008; Borchgrevink et al., 2005) have reported prevalences of hearing loss of 18% and 16.1%, respectively. Although the prevalence of hearing loss in the general public is lower than in specific occupations, a study of phone operatives in Michigan (Stanbury et al., 2008) reported a prevalence of hearing loss of 19%, which is less than that found in the present study. Generally, the high prevalence in the present study may be due to differences in the definition of hearing loss, measuring technique, and study population. Another point is that although hearing status in different occupations is periodically examined, unfortunately this does not happen in the case of drivers. Thus, a lack of awareness or unavailability of proper services in the southeast of Iran can lead to drivers' delayed visits to medical centers, resulting in increased hearing loss in this occupation. Although hearing loss reduced with increasing blood sugar, cholesterol, and triglycerides, in a multivariate analysis controlled for age, none of these parameters was significantly related to hearing loss. The odds of hearing loss increased with increasing BMI, but not significantly. This is somewhat in agreement with a previous study in Iran (Janghorbani et al., 2009). It appears that blood factors could be related to hearing loss in drivers, but studies of the relation between blood factors and hearing loss have reported contradictory results (Daniel, 2007; Kaźmierczak & Doroszewska, 2000; Maia & Campos, 2005), which requires further investigation. In the present study, hearing loss in both ears increased with aging, which is in line with previous domestic and international studies (Janghorbani et al., 2009; Agrawal et al., 2008; Borchgrevink et al., 2005; Stanbury et al., 2008) in drivers and general populations. It seems that, despite physiological and anatomical changes with aging, it is greater driving history and occupational exposure that cause hearing loss in drivers. As the design of a vehicle has an important role in noise emission (Bilski, 2013), the non-standard design of vehicles and automobiles in Iran could be another factor that increases hearing loss among Iranian drivers. In this study, education level and marital status were not significantly related to hearing loss, but hearing loss was greater in passenger vehicle drivers than other drivers. Passenger vehicle drivers appear to spend more time driving and are more exposed to noise and stress, which adversely affects their hearing. Moreover, hearing loss was greater in smoker drivers than non-smokers. Although there are few studies on the effect of smoking on hearing loss, smokers appear to have higher stress and mental preoccupation. Generally, greater stress and noise have a mutual effect on hearing loss. On the other hand, this result might be explained by the need to open the window when smoking, which in turn increases exposure to noise. This study showed greater hearing loss in the left ear than in the right one, and this agrees with studies in Iran (Berjis et al., 2011; Janghorbani et al., 2009) and worldwide (Kumar et al., 2005). According to Tables 1 and 2, a similar situation is observed in subgroups of various parameters. It seems that the left ear is more exposed to noise through the vehicle window than the right one, and thus is more damaged. Use of an air-conditioner prevents leaving the window open, and thus prevents this situation.
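As a concrete illustration of the case definition used in this study (pure-tone average of thresholds over the eight test frequencies, greater than 25 dB HL in the worse ear), the following sketch classifies a single audiogram; the threshold values are hypothetical and are not taken from the study data.

```python
import numpy as np

FREQS_HZ = (250, 500, 1000, 2000, 3000, 4000, 6000, 8000)

def has_hearing_loss(right_db, left_db, cutoff_db=25.0):
    """Paper's case definition: pure-tone average over the eight test
    frequencies greater than 25 dB HL in the worse (higher-PTA) ear."""
    pta_right = float(np.mean(right_db))
    pta_left = float(np.mean(left_db))
    return max(pta_right, pta_left) > cutoff_db, pta_right, pta_left

# Hypothetical thresholds (dB HL), ordered as FREQS_HZ; not study data.
right = [20, 15, 10, 15, 20, 30, 35, 30]
left = [30, 25, 20, 25, 30, 40, 40, 35]
flag, pta_r, pta_l = has_hearing_loss(right, left)
print(f"PTA right = {pta_r:.1f} dB, PTA left = {pta_l:.1f} dB -> hearing loss: {flag}")
```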
The strong points of the present study included a large sample size, measurement of demographics and blood factors, control of confounding factors, and being the first study of its kind in the Southeast of Iran. Study limitations included its cross-sectional design and the inability to determine cause-and-effect relationships. In conclusion, considering that proper hearing in drivers can have an important role in preventing road accidents, it is essential that greater attention be paid to this occupation in terms of professional and occupational health, so that, as well as preventing the progression of hearing loss, the safety of these and other people can be improved. It is recommended that drivers be periodically and regularly examined in terms of damage to the ear, and screening can be conducted according to the parameters affecting hearing loss. Given the relationship between some blood factors and hearing loss, it is highly important to educate drivers about the complications of hearing loss and the use of appropriate ear-plugs during driving. Clearly, vehicle manufacturers can play a substantial role in reducing occupational hearing loss in drivers by standardization of production in terms of lower noise and air-conditioning to prevent opening of the window.
2017-06-18T01:22:57.073Z
2015-12-17T00:00:00.000
{ "year": 2015, "sha1": "c9356635810908aaed16b25b14b784c5569ceb55", "oa_license": "CCBY", "oa_url": "http://www.ccsenet.org/journal/index.php/gjhs/article/download/53097/29837", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c9356635810908aaed16b25b14b784c5569ceb55", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
44436479
pes2o/s2orc
v3-fos-license
Conformational Changes in the Integrin βA Domain Provide a Mechanism for Signal Transduction via Hybrid Domain Movement* The ligand-binding head region of integrin β subunits contains a von Willebrand factor type A domain (βA). Ligand binding activity is regulated through conformational changes in βA, and ligand recognition also causes conformational changes that are transduced from this domain. The molecular basis of signal transduction to and from βA is uncertain. The epitopes of mAbs 15/7 and HUTS-4 lie in the β1 subunit hybrid domain, which is connected to the lower face of βA. Changes in the expression of these epitopes are induced by conformational changes in βA caused by divalent cations, function-perturbing mAbs, or ligand recognition. Recombinant truncated α5β1 with a mutation L358A in the α7 helix of βA has constitutively high expression of the 15/7 and HUTS-4 epitopes, mimics the conformation of the ligand-occupied receptor, and has high constitutive ligand binding activity. The epitopes of 15/7 and HUTS-4 map to a region of the hybrid domain that lies close to an interface with the α subunit. Taken together, these data suggest that the transduction of conformational changes through βA involves shape shifting in the α7 helix region, which is coupled to hybrid domain movement. Integrins mediate a wide variety of essential cell-matrix and cell-cell interactions and also participate in many common disease processes (1, 2). Integrins are heterodimers containing non-covalently associated α and β subunits; each subunit has a large extracellular domain linked to a transmembrane segment and a short cytoplasmic tail. Integrins participate in bi-directional signaling; ligand recognition is dynamically regulated by "inside-out" signaling, and ligand occupancy leads to "outside-in" signals that affect cell migration, growth, differentiation, and survival (3-5). Modulation of integrin activity is essential in such processes as leukocyte migration to sites of tissue injury and the aggregation of platelets to form a hemostatic plug. Integrin activation can be mimicked in vitro by divalent cations such as Mn2+ or Mg2+ (6). Three major conformational states of integrins can be distinguished using monoclonal antibodies (mAbs)1: an inactive (resting or low affinity) state, an active (or high affinity) state, and a ligand-occupied state (7). The conformations of the inactive and active states are discriminated by low and high expression, respectively, of activation epitopes (such as those recognized by 12G10, 15/7, and 9EG7 for the β1 subunit, see Refs. 8-10). The ligand-occupied conformer expresses high levels of ligand-induced binding site (LIBS) epitopes (which are generally also activation epitopes) and shows decreased expression of ligand-attenuated binding site (LABS) epitopes (such as mAb 13 for the β1 subunit, see Ref. 11). The conformational states are in equilibrium; therefore, antibodies that recognize activation epitopes or LIBS tend to cause activation and stabilize the ligand-occupied state. Conversely, antibodies that recognize LABS appear to block ligand binding by preventing conformational changes involved in ligand recognition (7, 11, 12). The molecular basis of integrin function has been powerfully elucidated by the recent x-ray crystal structures of the extracellular domains of αVβ3 in both an unliganded state (13) and in complex with a small peptide ligand (14).
Overall, the integrin structure resembles that of a "head" on two "legs." The ligand-binding head region of the integrin contains a seven-bladed β-propeller in the α subunit, the top face of which is in close juxtaposition with a von Willebrand factor type A domain in the β subunit (βA). βA consists of seven α-helices encircling a central β-sheet; it is connected at its N and C termini to an immunoglobulin-like "hybrid" domain and forms an extensive interface with it. The key regions involved in ligand recognition are loops on the upper surface of the β-propeller and the upper face of βA, which contains a metal ion-dependent adhesion site (MIDAS) and an adjacent MIDAS cation-binding site (13-15). A small number of subtle conformational changes between the unliganded and liganded states were observed. The most important of these appeared to be a shift of the α1 helix in βA, and a slight closing up of the interface between the upper surface of the β-propeller and the upper face of βA. A surprising feature of the crystal structures was that the two legs are severely bent at the "knees," such that the head is in close contact with the lower legs. Because the peptide ligand was soaked into the crystals of unliganded αVβ3, it is unclear whether the small conformational changes observed between the unliganded and liganded structures (13, 14) are representative of those that take place upon ligand occupancy of the native integrin. Importantly, no pathway for the transduction of conformational changes from the head to the legs (or from legs to head) was evident. Hence, the molecular basis of both outside-in and inside-out signaling remains to be clarified. Recently, evidence (16, 17) has been presented that the bent form of the integrin is the inactive state and that it may undergo a switchblade-like straightening to attain the active conformation. Nevertheless, precisely how this straightening is linked to activation of ligand binding in the head domain is uncertain. A key regulator of integrin activity is known to be the conformation of the βA domain (15, 18), and we have shown that a movement of the α1 helix activates this domain (8). We hypothesized that the α1 helix could occupy two different positions: position 1, characterized by high binding of the mAb 12G10 (the active conformation), and position 2, characterized by low binding of 12G10 (the inactive conformation). The inward movement of the α1 helix observed in the liganded αVβ3 structure (14) supports this proposal. Hence, position 1 appears to correspond to the "in" state and position 2 to the "out" state of the α1 helix.

Abbreviations used: mAb, monoclonal antibody; βA, β subunit von Willebrand factor type A domain; αA, α subunit von Willebrand factor type A domain; MIDAS, metal ion-dependent adhesion site; BSA, bovine serum albumin; trα5β1-Fc, recombinant soluble integrin heterodimer containing C-terminally truncated α5 and β1 subunits (α5 residues 1-613 and β1 residues 1-455) fused to the Fc region of human IgGγ1; LIBS, ligand-induced binding site; LABS, ligand-attenuated binding site.
About half of the integrin α subunits contain a similar domain (αA or I), and in these domains an inward movement of the α1 helix is linked to rearrangement of cation-coordinating residues at the MIDAS and a dramatic downward shift of the C-terminal α7 helix and its preceding loop (19). However, no change in the position of the α7 helix of βA was observed between the two x-ray structures, and it was suggested that activation of βA does not involve α7 movement (13, 14, 20). Here we provide evidence that changes in the expression of activation epitopes in the hybrid domain are linked to shape-shifting in the α7 helix region of βA. This movement appears to participate in the conformational changes involved in both activation and ligand binding. Our data suggest that an outward swing of the hybrid domain is coupled to α7 helix motion, and hence lend support to a recent model of integrin activation (21). There are both strong similarities and some differences between βA and αA domain activation.

Expression Vector Construction and Mutagenesis-C-terminally truncated human α5 and β1 constructs encoding α5 residues 1-613 and β1 residues 1-455 fused to the hinge regions and CH2 and CH3 domains of human IgGγ1 (α5-(1-613)-Fc and β1-(1-455)-Fc) were generated as described previously (24). To aid the formation of heterodimers, the CH3 domain of the α5 construct contained a "hole" mutation, whereas the CH3 domain of the β1 constructs carried a "knob" mutation, as described (24, 25). The L358A and S359A mutations in the β1 subunit were carried out using oligonucleotide-directed PCR mutagenesis, as described (24). Oligonucleotides were purchased from MWG Biotech (Southampton, UK). The presence of the mutations (and the lack of any other changes to the wild-type sequence) was verified by DNA sequencing. For comparison of purified wild-type heterodimers with heterodimers containing the L358A or S359A mutations in β1, 75-cm² flasks of sub-confluent CHO L761h cells were transfected with 5 μg of wild-type or mutant β1-(1-455)-Fc and 5 μg of α5-(1-613)-Fc DNA as described above. After 4 days, culture supernatants were harvested by centrifugation at 1000 × g for 5 min. Wild-type or mutant heterodimers were purified using protein A-Sepharose essentially as described before (24).

Proteins-A recombinant fragment of fibronectin containing type III repeats 6-10 (III6-10) was produced and purified as described previously (12). A mutant fragment in which the RGD integrin-binding sequence is replaced by the inactive sequence KGE (III6-10KGE, see Ref. 26) was produced and purified in the same manner. III6-10 was biotinylated as before (8) using sulfo-LC-NHS biotin (Perbio, Chester, UK). Fab fragments of N29, TS2/16, and 12G10 were prepared by ficin cleavage of purified IgG, followed by removal of Fc-containing fragments using protein A-Sepharose, according to the manufacturer's instructions (Perbio). None of the Fab fragments showed any reactivity with goat anti-mouse IgG (Fc-specific) peroxidase conjugate (Sigma).

Effect of Divalent Cations on 15/7 and HUTS-4 Binding-96-well plates (Costar ½-area EIA/RIA, Corning Science Products, High Wycombe, UK) were coated with goat anti-human γ1 Fc (Jackson Immunochemicals, Stratech Scientific, Luton, UK) at a concentration of 2.6 μg/ml in Dulbecco's phosphate-buffered saline (50 μl/well) for 16 h. Wells were then blocked for 1-3 h with 200 μl of 5% (w/v) BSA, 150 mM NaCl, 0.05% (w/v) NaN3, 25 mM Tris-Cl, pH 7.4 (blocking buffer).
Blocking buffer was removed, and supernatant from cells transfected with wild-type α5-(1-613)-Fc and β1-(1-455)-Fc, diluted 1:1 with 150 mM NaCl, 25 mM Tris-Cl, pH 7.4 (25 μl/well), was added for 1-2 h at room temperature. Wells were then washed three times with 200 μl of 150 mM NaCl, 25 mM Tris-Cl, pH 7.4, containing 1 mg/ml BSA (buffer A). Buffer A was treated with Chelex beads (Bio-Rad) to remove any small contaminating amounts of endogenous Ca²⁺ and Mg²⁺ ions. mAbs (1 μg/ml) in buffer A with varying concentrations of Mn²⁺, Mg²⁺, or Ca²⁺ were added to the plate (50 μl/well). The plate was then incubated at 30°C for 2 h. Unbound antibody was aspirated, and the wells were washed three times with buffer A. Bound antibody was quantitated by addition of a 1:1000 dilution of goat anti-mouse IgG (Fc-specific) peroxidase conjugate (Jackson Immunochemicals) in buffer A for 30 min at room temperature (50 μl/well). Wells were then washed four times with buffer A, and color was developed using 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) substrate (50 μl/well). Absorption at 405 nm was measured using a plate reader (Dynex Technologies). Background binding of mAbs to wells incubated with supernatant from mock-transfected cells was subtracted from all measurements. Measurements obtained were the mean ± S.D. of four replicate wells. For comparison of the effects of divalent cations on 15/7 and HUTS-4 binding to the wild-type heterodimer and the L358A and S359A mutants, plates were coated with anti-human Fc, blocked as described above, and then incubated with supernatant from cells transfected with wild-type or mutant heterodimers. mAb binding was measured as described above in 2 mM EDTA, 2 mM Mn²⁺, 2 mM Mg²⁺, or 2 mM Ca²⁺. Measurements obtained were the mean ± S.D. of four replicate wells.

Effect of mAbs and Ligand on 15/7 and HUTS-4 Binding-Plates were coated with anti-human Fc and blocked as described above. Wells were then incubated with supernatant from cells transfected with wild-type or mutant heterodimers for 1-2 h at room temperature as above. Wells were washed three times with 200 μl of 150 mM NaCl, 1 mM MnCl2, 25 mM Tris-Cl, pH 7.4, containing 1 mg/ml BSA (buffer B). 15/7 or HUTS-4 (1 μg/ml in buffer B) was added to the plates (50 μl/well) either alone or in the presence of Fab fragments of N29, TS2/16, or 12G10 (5 μg/ml), mAb 13 IgG (10 μg/ml), or III6-10 (20 μg/ml). The plates were then incubated at 30°C for 2 h. Unbound antibody was aspirated, and the wells were washed three times with buffer B. Bound 15/7 or HUTS-4 was quantitated by addition of a 1:2000 dilution of goat anti-mouse IgG (Fc-specific, precleared with rat serum proteins) peroxidase conjugate (Sigma) in buffer B for 30 min at room temperature (50 μl/well). Wells were then washed four times with buffer A, and color was developed as above. Background binding of mAbs to wells incubated with supernatant from mock-transfected cells was subtracted from all measurements. Measurements obtained were the mean ± S.D. of four replicate wells.

Comparison of Epitope Expression by Wild-type and Mutant Heterodimers-Plates were coated with anti-human Fc and blocked as described above. The blocking solution was removed, and cell culture supernatants were added (25 μl/well) for 1-2 h. All supernatants were assayed in triplicate, and supernatant from mock-transfected cells was used as a negative control. The plate was washed three times with buffer B (200 μl/well), and anti-α5 or anti-β1 mAb (5 μg/ml) was added (50 μl/well).
The plate was incubated for 2 h and then washed three times in buffer B. Peroxidase-conjugated anti-rat or anti-mouse secondary antibodies (1:1000 dilution in buffer B; Jackson Immunochemicals) were added (50 μl/well) for 30 min; the plate was then washed four times in buffer B, and color was developed as above. All steps were performed at room temperature. Results shown are the mean ± S.D. of three separate experiments.

Effect of L358A and S359A Mutations on III6-10 Binding-Plates were coated with anti-human Fc and blocked as described above. Wells were then incubated with protein A-purified heterodimers diluted to ~1 μg/ml with 150 mM NaCl, 25 mM Tris-Cl, pH 7.4 (25 μl/well), for 1-2 h at room temperature. Wells were washed three times with 200 μl of buffer B. Biotinylated III6-10 (0.1 μg/ml) in buffer B was added to the plate (50 μl/well) alone or in the presence of N29, TS2/16, or 12G10 (5 μg/ml). The plate was then incubated at 30°C for 2 h. Unbound ligand was aspirated, and the wells were washed three times with buffer B. Bound ligand was quantitated by addition of a 1:500 dilution of ExtrAvidin peroxidase conjugate (Sigma) in buffer B for 20 min at room temperature (50 μl/well). Wells were then washed four times with buffer B, and color was developed using 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) substrate (50 μl/well). Background binding to BSA-coated wells was subtracted from all measurements. Measurements obtained were the mean ± S.D. of four replicate wells.

Mapping of the 15/7 and HUTS-4 Epitopes-Substitution of human residues with the corresponding residues in murine β1 within the hybrid domain sequence 361-425 was performed using a PCR-based mutagenesis kit (GeneTailor, Invitrogen) according to the manufacturer's instructions. CHO L761h cells were transfected with wild-type or mutant constructs and supernatants harvested as described above. Binding of 15/7, HUTS-4, and TS2/16 to mutant heterodimers was measured as described above, relative to the wild-type control. Background binding of mAbs to wells incubated with supernatant from mock-transfected cells was subtracted from all measurements. Measurements obtained were the mean ± S.D. of three replicate wells. Results shown are the mean ± S.D. of three separate experiments. In each assay involving a comparison between different heterodimers, binding of mAb 8E3 (5 μg/ml) was used to normalize any differences between the amounts of the different heterodimers bound to the wells. For example, normalized A405 for 15/7 binding = (AM15/7 − Am15/7) × ((AWT8E3 − Am8E3)/(AM8E3 − Am8E3)), where AM15/7 is the mean absorbance of 15/7 binding to wells coated with mutant integrin; Am15/7 is the mean absorbance of 15/7 binding to wells coated with mock supernatant; AWT8E3 is the mean absorbance of 8E3 binding to wells coated with wild-type integrin; Am8E3 is the mean absorbance of 8E3 binding to wells coated with mock supernatant; and AM8E3 is the mean absorbance of 8E3 binding to wells coated with mutant integrin. 8E3 recognizes a non-functional epitope in the N-terminal region of the β1 subunit (24). Essentially identical results were obtained from normalization using mAb N29 against the PSI domain (Ref. 27, data not shown). In experiments using heterodimers captured from cell culture supernatants, similar results were obtained using protein A-purified heterodimers (data not shown). Each experiment shown is representative of at least three separate experiments.
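The normalization arithmetic above is simple enough to restate directly; the minimal Python sketch below does so, with variable names of my own choosing (they are not from the paper) and made-up absorbances in the example.

```python
def normalized_a405(a_mab_mut, a_mab_mock, a_8e3_wt, a_8e3_mut, a_8e3_mock):
    """Rescale mAb (e.g. 15/7) binding to a mutant heterodimer by 8E3 capture.

    Implements: (AM_mAb - Am_mAb) * (AWT_8E3 - Am_8E3) / (AM_8E3 - Am_8E3),
    i.e. the background-subtracted mAb signal corrected for how much mutant
    heterodimer was captured on the well relative to wild type.
    """
    return (a_mab_mut - a_mab_mock) * (a_8e3_wt - a_8e3_mock) / (a_8e3_mut - a_8e3_mock)

# Example: a mutant captured at ~90% of wild-type levels (8E3 signal 0.95 vs 1.06
# after background subtraction) has its 15/7 signal scaled up accordingly.
print(normalized_a405(0.62, 0.05, 1.10, 0.99, 0.04))  # ~0.64
```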
Homology Modeling of the Head Region of α5β1-A model of the α5 β-propeller and thigh domains and the β1 A and hybrid domains was built based on an alignment against the αVβ3 crystal structure (13), using the same procedures as described previously (8). The PSI domain (residues 1-60 of β1) was not included in the model. Representations of the structure were produced using PyMOL.

Expression of the 15/7 and HUTS-4 Epitopes Is Regulated by Conformational Changes in the βA Domain-To investigate the mechanisms of integrin activation, we employed a recently described system for expression of recombinant soluble α5β1 (24). For these particular studies, we used a truncated version of α5β1, α5-(1-613)β1-(1-455), fused to the Fc region of human IgGγ1 (24) (hereafter referred to as trα5β1-Fc). This heterodimer contains the α subunit β-propeller and thigh domains and the β subunit A, hybrid, and PSI domains (13), and has been shown to retain the properties of the full-length receptor (24). This system is particularly useful (a) because it permits the rapid analysis of the effects of mutations, and (b) because conformational changes in the head region can be studied in isolation, i.e. in the absence of any complicating effects due to the presence of the lower leg domains (e.g. unbending, see Refs. 16 and 17) or the cytoplasmic tails (5). Activation of the integrin head is known to involve conformational changes in βA, and since βA is connected at its N and C termini to the hybrid domain, these changes must be transduced from and to the hybrid. HUTS-4 and 15/7 are two previously characterized mAbs whose epitopes lie within this region of the β1 subunit (9, 29-31). Expression of the 15/7 and HUTS-4 epitopes by trα5β1-Fc was found to be cation-modulated (Fig. 1, A and B). Binding of each mAb was promoted by Mn²⁺ and, to a smaller extent, by Mg²⁺, whereas Ca²⁺ did not stimulate binding. Importantly, these effects parallel the effects of each divalent ion on the ligand-binding competence of the integrin (32), and they also mirror a conformational change in the βA domain reported by mAb 12G10 (8). These changes have been shown to be due to cation binding to the MIDAS (8), and in agreement with this, the MIDAS mutation D130A prevented the cation modulation of 15/7 and HUTS-4 binding (data not shown). Conformational changes in βA can also be induced by function-perturbing mAbs with epitopes in this domain. The epitopes of all function-altering anti-human β1 mAbs that map to the A domain include one or more residues in the sequence Asn207-Lys218 (33), which is predicted to form the α2 helix region (13). The epitope of mAb 12G10 also includes two arginyl residues that lie near the base of the α1 helix (8). TS2/16 and 12G10 are examples of activating mAbs, whereas 13 is an example of a function-blocking mAb (11). The mAb N29, whose epitope lies in the PSI domain, was used as a control. 15/7 and HUTS-4 binding to trα5β1-Fc was increased by TS2/16 and 12G10 but markedly decreased by mAb 13 (Fig. 2). Hence, conformational changes in the α1/α2 helix region of βA appear to modulate 15/7 and HUTS-4 binding. As ligand recognition is also known to cause shape-shifting in βA and to stabilize the active conformation of this domain (8), we tested the effect of ligand binding on the expression of the 15/7 and HUTS-4 epitopes. A recombinant fragment of fibronectin containing the α5β1 recognition sites (26) stimulated 15/7 and HUTS-4 binding
(Fig. 2) to a similar extent as TS2/16 and 12G10. Taking these data together, the active conformation of βA (stabilized by Mn²⁺/Mg²⁺, activating mAbs, or ligand) leads to increased expression of the 15/7 and HUTS-4 epitopes. Conversely, the inactive conformation of βA (stabilized by Ca²⁺ or function-blocking mAbs) leads to decreased expression of the 15/7 and HUTS-4 epitopes. These effects may be linked to a motion of the α1 helix (8). Our data on the effects of cations, function-perturbing mAbs, and ligand are in broad agreement with previous characterization of 15/7 and HUTS-4 as mAbs recognizing activation/LIBS epitopes (9, 29, 31).

A Mutation in the βA Domain α7 Helix (L358A) Results in Constitutively High Expression of the 15/7 and HUTS-4 Epitopes-The transduction of conformational changes from βA to the hybrid domain must take place at the interface between these two modules. At its C terminus, βA is joined to the hybrid domain by the α7 helix, and mutations in this region of α subunit A domains cause activation by favoring a downward shift of α7 (34, 35). We therefore tested whether similar mutations in the α7 helix of βA could affect 15/7 and HUTS-4 binding. The mutation Leu358 to Ala was found to cause increased expression of the 15/7 and HUTS-4 epitopes, whereas the control mutation S359A had little effect (Figs. 3 and 4). Expression of the 15/7 and HUTS-4 epitopes by the L358A mutant was constitutively high compared with the wild-type receptor, and Mn²⁺/Mg²⁺ caused only a small enhancement of the level of binding seen in the absence of divalent ions (Fig. 3, A and B). The high level of 15/7 binding to the L358A mutant was only slightly increased by the activating mAbs TS2/16 or 12G10 and was relatively resistant to inhibition by mAb 13 (Fig. 4); similar results were obtained for HUTS-4 (data not shown). In contrast to the wild-type receptor, ligand binding did not increase 15/7 epitope expression by the L358A mutant (Fig. 4). The mutation S359A had little effect on the ability of mAbs or ligand to modulate 15/7 binding (results similar to those for wild-type trα5β1-Fc, see Fig. 2). These findings suggest that the L358A mutation constrains the βA domain in an active conformation and that the α7 helix region is involved in conveying conformational changes from βA to the hybrid domain.

The L358A Mutant Mimics the Ligand-occupied Conformation-We next tested whether the L358A mutation caused any other conformational changes associated with activation or ligand binding. For this purpose we used a panel of mAbs recognizing epitopes on both the α5 and β1 subunits (Fig. 5A). The results showed that the L358A mutation increased the expression of the 12G10 epitope and decreased the expression of the 13 and 4B4 epitopes in βA. The mutation also attenuated the expression of the epitopes of the function-blocking mAbs SNAKA52, 16, and P1D6, which lie at or near the top of the α5 β-propeller domain (23), close to the βA/propeller interface (13). Although, as shown above, the L358A mutation increased the expression of the 15/7 and HUTS-4 epitopes, it did not alter the expression of the JB1A epitope, which maps to a different region of the hybrid domain (36). Furthermore, the expression of epitopes that are not affected by the activation state, such as TS2/16 and JB1A, was not altered by the L358A mutation. The control mutation S359A had no significant effect on the expression of any of the epitopes tested (Fig. 5B).
Taken together, these results show that the L358A mutation specifically increases the expression of all activation/LIBS epitopes (12G10, 15/7, and HUTS-4, see Refs. 8, 9, and 31) and decreases the expression of all LABS epitopes (SNAKA52, 16, P1D6, 13, and 4B4, see Refs. 11, 12, and 37). Hence, the data suggest that the L358A mutant adopts a conformation that is similar to the ligand-occupied state. Furthermore, a conformational change in the α7 helix region of βA caused by the mutation appears to be linked to movements in the α1/α2 helix region (the location of the 12G10, 13, and 4B4 epitopes) and in the proximity of the βA/propeller interface (the location of the SNAKA52, 16, and P1D6 epitopes).

The L358A Mutation Causes Activation of Ligand Binding-If the conformation of the L358A mutant is akin to the ligand-occupied state, it would be predicted that this mutant should be constitutively active for ligand binding (38). Trα5β1-Fc has low constitutive ligand binding activity when captured onto enzyme-linked immunosorbent assay plates using goat anti-human Fc, but the same protein has similar activity to recombinant integrin containing the complete extracellular domains of α5 and β1 when stimulated with mAbs such as 12G10 (24). We compared the ligand binding activity of wild-type trα5β1-Fc with the L358A and S359A mutants (Fig. 6). The results showed that, compared with the wild-type receptor, the L358A mutant had high constitutive ligand binding activity, which was only slightly enhanced by the activating mAbs TS2/16 or 12G10. In contrast, the S359A mutant had constitutively low activity, similar to wild-type levels.

The Epitopes of 15/7 and HUTS-4 Map to a Region of the Hybrid Domain Close to an Interface with the α Subunit-The above results suggest that the transduction of conformational changes from βA to the hybrid domain involves a shift of the α7 helix. To understand these changes more fully, we fine-mapped the epitopes of 15/7 and HUTS-4. Both antibodies bind to human β1 but not to mouse β1, and their epitopes have been shown to reside within amino acid residues 355-425 (30, 31). These residues in human β1 show 10 differences from the equivalent sequence in murine β1 (Table I); binding of 15/7 and HUTS-4 to heterodimers carrying the corresponding murine substitutions was therefore tested, using TS2/16 as a control. The mutations E371D and K417N abrogated both 15/7 and HUTS-4 binding, whereas the mutation S370P completely blocked 15/7 binding and strongly inhibited HUTS-4 binding. The other mutations either had no effect or showed a partial inhibition (Table I). None of the mutations affected binding of TS2/16.

[Table I legend: mAb reactivity with β1 hybrid domain substitution mutants. CHO L761h cells were transfected with α5-(1-613)-Fc and wild-type or mutant β1-(1-455)-Fc. Cell culture supernatants were analyzed for reactivity with anti-β1 mAbs by sandwich enzyme-linked immunosorbent assay. Results are expressed as a percentage of wild-type binding and are mean ± S.D. from three separate experiments (except for the S422T mutant, from two separate experiments). A value of 0% indicates that mAb reactivity was identical to, or slightly lower than, reactivity with supernatant from mock-transfected cells. All the mutants bound well to the hybrid domain mAb JB1A, and none of the mutations affected recognition of the III6-10 fragment of fibronectin (data not shown).]

Ser370 and Glu371 map to the C-D loop, and Lys417 maps to the neighboring E-F loop of the hybrid domain (13). The spatial proximity of this triplet of residues is consistent with them forming an antibody epitope. Comparison with the crystal structures of αVβ3 (13, 14) shows that these epitopes map to a region of the hybrid domain that faces the α subunit β-propeller and lie very close to residues that form a small interface with it. To estimate the antibody-accessible surface, we rolled a 20-Å sphere over the structure (16). The results (not shown) demonstrated that Lys417 would be accessible in this conformational state, but Ser370 and Glu371 would not. However, Ser370 and Glu371 would be available for antibody binding if the hybrid domain moves away from the β-propeller.
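As a rough illustration of the 20-Å sphere calculation, antibody accessibility can be approximated as solvent-accessible surface area computed with an oversized probe. The sketch below does this with the FreeSASA Python bindings under that assumption; the paper does not name its software, and the file name and chain identifier here are placeholders, not taken from the original work.

```python
import freesasa  # assumed dependency; pip install freesasa

# Placeholder coordinate file for a head-region model; chain "B" is assumed
# to hold the beta1 subunit with the authors' residue numbering.
structure = freesasa.Structure("a5b1_head_model.pdb")
# A 20-A probe radius mimics rolling a 20-A sphere over the surface.
result = freesasa.calc(structure, freesasa.Parameters({"probe-radius": 20.0}))

areas = result.residueAreas()
for resnum in ("370", "371", "417"):  # Ser370, Glu371, Lys417
    # Near-zero area => buried against the propeller, inaccessible to an
    # antibody-sized probe in this conformational state.
    print(resnum, areas["B"][resnum].total)
```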
DISCUSSION

By using the conformation-sensitive mAbs 15/7 and HUTS-4 and site-directed mutagenesis, we have studied conformational changes in the integrin β1 A domain and investigated how these relate to signal transduction in the integrin head region. Our results show the following: (i) βA domain activation involves a conformational change in the region of the α7 helix; (ii) this shape-shifting results in increased exposure of the 15/7 and HUTS-4 epitopes in the hybrid domain and is also associated with other conformational changes in βA and at the top of the α subunit β-propeller; (iii) the 15/7 and HUTS-4 epitopes map to a portion of the hybrid domain that is likely to be partly masked (in the inactive receptor) due to its close proximity to the β-propeller. Taking these results together with previous data showing that a movement of the α1 helix is important for activation of βA (8), we propose the model of affinity regulation shown in Fig. 7.

Mechanism of βA Activation-Recent debate concerning the mechanism of activation of βA has centered on whether or not the α7 helix moves. Three distinct models have been proposed. (i) Based on the crystal structures of αVβ3 (13, 14), in which there is no movement of α7 and the MIDAS site is unoccupied in the absence of ligand, it was suggested that βA is regulated by unmasking of the MIDAS site (20, 39). (ii) Based on a separation of the head domains observed in RGD-occupied αIIbβ3 after rotary shadowing/electron microscopy (40), and on a similarity between the quaternary structures of integrins and heterotrimeric G-proteins (13), it was hypothesized that activation involves a rotation of βA away from its contact with the β-propeller (41). In this scenario, the position of the α7 helix is fixed, but the rotation of βA results in the same net movement of α7 seen in αA domains. (iii) Based on studies of inactive (Ca²⁺-occupied) and active (Mn²⁺-occupied) αVβ3 by negative staining/electron microscopy (17), it was suggested that activation of the head region is regulated by an outward swing of the hybrid domain, which is predicted to be coupled to a downward shift of the α7 helix equivalent to that in αA domains (19, 42). Model i appears unlikely because our results (this work and Ref. 8) and the results of others (17, 18) show that conformational changes take place through cation binding to the MIDAS in the absence of ligand occupancy. Model ii is improbable because βA and the β-propeller appear to move closer together, rather than farther apart, upon ligand recognition, since ligand binding requires close apposition of residues on both subunits (14, 39). Instead, our data supply strong support for model iii because they suggest that a movement in the α7 helix region is important for activation and that this movement is linked to a change in the position of the hybrid domain such that it moves away from the α/β subunit interface. The existence of the swing-out motion of the hybrid is further supported
by the finding that a section of the β3 hybrid domain encompassing residues 393-423 (equivalent to residues 402-432 in β1) is exposed in the active but not the resting form of αIIbβ3 (43). This portion of the hybrid domain encompasses part of the 15/7 and HUTS-4 epitopes (Lys417). Why was no movement of the α7 helix seen in the crystal structure of the liganded form of αVβ3 (13)? The likely explanation is that motion of α7 would be prevented because the hybrid domain is immobilized by lattice contacts and by its contacts with the leg domains (16, 17). Some conformational changes are observed in the liganded βA domain; these include rearrangement of the loops that coordinate the MIDAS cation, leading to an inward movement of the α1 helix. These changes are very similar, in both direction and form, to those seen in αA domains; however, as pointed out above, the subsequent downward motion of the α7 helix that takes place in αA domains is probably prohibited. Thus, the structural changes observed in the liganded βA also favor the hypothesis that the unliganded structure represents the inactive form of βA (44). We found that a mutation in the α7 helix, L358A, caused activation of the βA domain. In αA domains, mutation of a highly conserved isoleucine residue at the same position favors the active state. This is apparently because the residue fits into a hydrophobic pocket (known as the "socket for isoleucine," SILEN), and this interaction favors the inactive form (34). However, unlike the αA domains, mutation of residues that form the hydrophobic pocket surrounding the α7 helix in the β1 A domain (Leu125, Leu149, Leu253, and Ile314) did not affect the activation state of trα5β1-Fc. Hence, the mechanism that regulates α7 movement may differ slightly from that of αA domains. Nevertheless, mutation of Leu358 may favor the active state of βA because this residue is likely to be more exposed in the "down" position of the α7 helix than in the "up" position. Hence, mutation of the leucine residue to the less hydrophobic alanine would be predicted to lower the energy of the active state. We cannot rule out the possibility that the L358A mutation activates the receptor by altering the βA/hybrid interface. However, mutation of Ser359 (also at the interface) did not cause activation, and furthermore, this paradigm would not explain how activating anti-βA mAbs cause activation and hybrid domain movement in a similar manner to the L358A mutation. In contrast, a linked movement of the α1 and α7 helices, as seen in αA domains, can explain the mechanism of action of the anti-βA domain mAbs (see below).

Similarities and Differences between Activation of βA and αA Domains-In the activation of αA domains, a change in cation coordination at the MIDAS is linked to an inward movement of the α1 helix. This movement pinches the hydrophobic core, squeezing out residues in the loop that precedes the α7 helix, and results in an ~10-Å downward motion of α7 (19). A similar link between α1 and α7 helix movement in βA is suggested by our findings.
For example, occupancy of the MIDAS by Mn²⁺, or the binding of activating mAbs that stabilize the "in" position of the α1 helix (8), also promotes the downward motion of the α7 helix (reported by increased exposure of the 15/7 and HUTS-4 epitopes). Although the overall mechanisms of βA and αA activation now appear to be closely related, there are some subtle differences. For example, for αA domains, Mn²⁺ and Mg²⁺ are equally effective at promoting ligand binding to the MIDAS (45, 46). In contrast, Mn²⁺ is much more effective than Mg²⁺ at promoting ligand binding to the βA MIDAS (8). This property of Mn²⁺ may be due to the fact that it binds with much higher affinity than Mg²⁺ to the βA MIDAS, whereas the affinities of αA domains for these two ions are more comparable (44, 45). As in αA domains, movement of the α7 helix appears to form an essential part of the activation mechanism of βA, because α7 movement closely parallels the activation state. However, as noted above, the regulation of α7 motion may be slightly different from that in αA domains.

Allosteric Mechanism of Function-perturbing mAbs-We have shown previously (11, 12, 37) that most function-blocking anti-α5 and anti-β1 mAbs have an allosteric mode of action. They recognize epitopes attenuated by ligand recognition and appear to perturb ligand binding by preventing a conformational change involved in the formation of the integrin-ligand complex (7). The epitopes of function-blocking anti-β1 A domain mAbs map to the α2 or α1 helix regions (13, 33, 47), which lie adjacent to each other. Similarly, the epitopes of function-blocking anti-β2 A domain mAbs have been shown to include residues in the α1/α2 or α7 helix regions (48, 49). It is likely that these mAbs function allosterically by stabilizing α1 in the inactive (out) location and/or α7 in the inactive (up) position. The epitopes of function-blocking anti-α subunit mAbs map to loops on the top face of the β-propeller domain (indicated by arrowheads in Fig. 7), and the binding of these mAbs may prevent conformational rearrangements required for ligand binding (14). Hence, both anti-α and anti-β subunit function-blocking mAbs appear to impede conformational changes involved in ligand recognition, as proposed previously (7). The epitopes of activating anti-βA domain mAbs map to the same regions as the inhibitory mAbs (8, 18, 33) and are likely to function by stabilizing α1 in the active (in) location and/or α7 in the active (down) position. It has been suggested that 12G10 activates by affecting the βA/hybrid domain interface (44), rather than by an effect on the α1 helix (8). However, we consider this proposal to be incorrect because (i) the mechanism of 12G10 action is likely to overlap closely with that of other activating anti-βA mAbs (whose epitopes are not close to the interface), and (ii) 12G10 has the same properties in a recombinant integrin that lacks a large part of the hybrid domain, suggesting that it directly affects the conformation of βA (24). The activating effect of the 15/7 and HUTS mAbs (9, 31) is likely to be due to their preferential binding to the active form of the integrin, in which the hybrid domain is shifted away from the β-propeller.
Implications for Inside-out and Outside-in Signaling-A recent NMR study of a complex between the intracellular segments of αIIb and β3 suggests that integrin activation is prevented by a "handshake" between the α and β cytoplasmic tails, and that unclasping of this handshake by proteins such as talin represents the first stage of activation (5). Separation of the cytoplasmic domains may then be linked to unbending of the integrin legs, which in turn is coupled to activation of the head domains (16, 17). In this scenario, hybrid domain movement is severely restricted by its interface with the leg domains in the inactive, bent form of the integrin. Unbending causes activation because it is coupled to the release of the hybrid domain from these constraints, allowing swinging of the hybrid to take place, which, in turn, would pull on the α7 helix of βA. Our findings generally support this model of activation. However, since removal of the lower leg domains (in the truncated integrin) did not favor the active state (24), hybrid domain movement (rather than unbending per se) appears to be the essential requirement for activation. Hybrid domain motion provides the conduit for the transduction of signals to and from the head region. The attenuation of the epitopes on the β-propeller in the L358A mutant also suggests that the outward swing of the hybrid domain is coupled to a conformational rearrangement of the βA/propeller interface that is involved in ligand recognition (14). Similarly, ligand binding reinforces and stabilizes the conformational changes associated with activation (e.g. the movements of the α1 and α7 helices and of the hybrid domain). This feature of ligand recognition may explain the well known ability of integrin ligands to cause activation that persists after dissociation of the complex (50, 51). Outside-in signaling could result, in part, from stabilization of the active conformation, allowing the separated cytoplasmic tails to interact stably with cytoskeletal and signaling molecules. In addition, integrin clustering is also a major contributor to this signaling (52, 53). In summary, we have shown that a conformational shift in the α7 helix region of the βA domain is involved in the regulation of integrin activity. This movement is coupled to a swing-out of the hybrid domain and thereby provides a pathway for signal transduction. Integrins are important therapeutic targets in many inflammatory and cardiovascular disorders (28), and our findings suggest a novel way in which highly specific regulators of integrin activity could be developed (e.g. by stabilizing the hybrid/propeller interface). A more complete understanding of the signaling mechanisms will require further characterization and crystallization of an integrin in defined conformational states.
Jurnal Akuntansi dan Auditing Indonesia

Is there any interaction between real earnings management and accrual-based earnings management?

This research aims to investigate whether firms employ real earnings management (REM) and accrual-based earnings management (AEM) as substitutes for each other when managing earnings to meet earnings benchmarks. It specifically looks at the sequential nature of both forms of earnings management. REM is proxied by the abnormal level of operating cash flow developed by Dechow et al. (1998), while AEM is proxied by the discretionary accrual model of Dechow, Sloan, & Sweeney (1995). The data were obtained from the Economics and Business Data Center, Faculty of Economics and Business, Gadjah Mada University, focusing on manufacturing and mining companies during the period from 2005 to 2013, which resulted in 754 firm-year observations. Using correlation tests and an empirical model developed by this research, which captures the interaction between REM and AEM, this research shows that firms use both forms of earnings management sequentially; managers more often engage in accrual-based earnings management if the earnings produced by real manipulations do not meet the earnings target. This finding is important because REM and AEM occur sequentially rather than simultaneously, and earnings performance is driven not only by accrual-based but also by real earnings management.

Introduction

Earnings management practices have long been studied by researchers, yet this topic still receives attention from both researchers and practitioners, especially since the rise in popularity of real earnings management over accrual-based earnings management practices. This is interesting, as most prior studies of earnings management have focused only on accrual-based earnings management. This research examines the interaction between accrual-based earnings management and real earnings management practices as a strategy for meeting earnings targets. Prior studies have shown evidence that firms make choices between accruals and real earnings management (Alhadab & Nguyen, 2018; Baker et al., 2019; Cohen et al., 2008; Cohen & Zarowin, 2010; Sellami, 2016; Zang, 2012); however, with the exception of Zang (2012), studies depicting the sequential nature of both earnings management practices remain scarce, so further investigation of their sequential nature is necessary. This research provides evidence that managers use a combination of accruals and real earnings management to meet earnings targets and shows that real and accruals earnings management have a sequential nature. It is important to examine both earnings management strategies in a single research project, as much evidence shows that earnings management activities are not limited to accrual-based earnings management (AEM) but also extend to real earnings management (REM), especially after the Sarbanes-Oxley Act (SOX). Research into REM could be deemed a contemporary area. However, most prior studies tend to examine only one or the other form of earnings management. Fields et al. (2001) argue that examining only one earnings management technique in one time period cannot explain overall earnings management activities, so it is important to include both REM and AEM when assessing earnings management.
Earnings management is a method of presenting earnings that aims to maximise management's utility and/or to increase the firm's market value through the selection of a set of accounting policies or choices by management (Scott, 2015). Gunny (2010) classifies earnings management into two categories, i.e. AEM and REM. AEM is done through management's discretion in choosing particular accounting methods to produce the desired earnings. AEM is legal, as this practice is still within the corridor of Generally Accepted Accounting Principles (GAAP). REM occurs when managers engage in actions that deviate from the normal structuring or timing of an operating, investing, and/or financing transaction in an attempt to influence the output of the accounting system (Gunny, 2010). Roychowdhury (2006) describes three ways of doing REM: manipulating operating cash flow, cutting discretionary expenses, and overproduction. Managers usually engage in AEM at the end of an accounting period, when they realise that earnings have not met the desired target. However, accruals manipulation is limited to the accounting choices available in GAAP and is constrained by any accruals manipulation undertaken in previous periods. Hence, managers may substitute real activities manipulation for accruals manipulation, as real activities manipulation is not easily detected and can be carried out throughout the accounting period. Cohen and Zarowin (2010) argue that there are two reasons why management prefer to manage earnings through REM. Firstly, AEM is more easily detected by auditors or regulators, as it appears in policies related to product pricing, revenue recognition, expense recognition and depreciation methods. Secondly, managing earnings using accruals carries a greater risk of not achieving the desired earnings. Several studies that examine both REM and AEM assume that accounting choices (earnings management) occur simultaneously; they do not consider the sequential nature of REM and AEM as substitutes for each other (Barton, 2001; Cohen et al., 2008; Cohen & Zarowin, 2010). On the other hand, Zang (2012) argues that managers engage in REM during the operating period and then engage in accruals earnings management at the end of the accounting period if the earnings targets could not be achieved through real manipulations. This interaction shows that the nature of REM and AEM is sequential; hence, there should be a negative relationship between them. Thus, with the exception of Zang (2012), very few studies examine the sequential nature of REM and AEM, even though the previous literature suggests that managers substitute between, and jointly utilise, both earnings management practices. In other words, the deployment of a combination of REM and AEM by managers, along with their sequential nature, is still under-researched. Therefore, it is important to examine both forms of earnings management and their sequential nature. This study attempts to identify whether the degree of REM use influences the degree of AEM use. This study uses samples of manufacturing and mining companies in Indonesia from 2005 to 2013, as the nature of manufacturing companies offers incentives to engage in earnings management (Roychowdhury, 2006); they are compared to other companies, with those in the mining sector employed as the primary comparison. REM (the independent variable) is proxied by using a model developed by Dechow et al.
(1998), following prior studies (Cohen & Zarowin, 2010; Roychowdhury, 2006), while AEM (the dependent variable) is proxied by using the modified Jones model (Dechow et al., 1995). This research assumes that the level of AEM depends on the level of REM used to meet earnings targets, so that AEM is dependent on REM. REM takes place during the fiscal year, whereas AEM takes place at the end of the accounting period, after the earnings realised through real earnings management have been identified. This assumption is supported by Zang (2012), who found that REM precedes AEM, so that the level of AEM used by managers depends on the level of REM used by them. To draw broader conclusions, this research developed models to test this relationship under certain conditions, using the following dummy variables: the Asian financial crisis; industry (manufacturing versus mining); and company type (local companies versus MNCs). To test the hypothesis, this study uses multiple regression analysis. This research finds that REM negatively influences AEM: the more managers engage in REM, the less they engage in AEM, and vice versa. This indicates that both forms of earnings management have a sequential nature and act as substitutes for each other. The negative relationship between REM and AEM reflects the sequential nature of both forms of earnings management, where REM precedes AEM (Zang, 2012). Moreover, this research supports the three hypotheses of positive accounting theory: the bonus plan hypothesis, the debt covenant hypothesis and the political cost hypothesis. In a specific contextual environment, this research finds that the level of AEM decreased during the Asian financial crisis. As argued by Chia et al. (2007), firms experienced tighter controls and monitoring due to the uncertain business environment during the Asian financial crisis. Multinational companies (MNCs) were initially expected to engage in less earnings management, as they have better corporate governance than local companies, which could prevent managers from engaging in AEM. Shen and Chih (2007) found that good corporate governance could prevent managers from engaging in accruals manipulations. However, the type of company, whether it is a multinational or a local enterprise, does not affect the tendency of a firm to engage in accrual-based earnings management. This might be caused by the weak institutional framework in Indonesia, which opens up incentives for firms to engage in earnings management. As expected, firms in the manufacturing industry engage more frequently in AEM than firms in the mining industry. This suggests that tight competition in the manufacturing industry may lead companies to engage in earnings management if they do not meet their earnings targets. On the other hand, the steadier business environment of the mining industry means that such companies tend not to engage in earnings management. Roychowdhury (2006) argues that the manufacturing industry provides more incentives for managers to engage in earnings management. Three recent studies have examined both AEM and REM. Cohen et al. (2008) found that the level of AEM practices declined after the issuance of SOX, while the level of REM practices increased (indicating a negative relationship). This shows that firms switched from accruals manipulation to real manipulation as a result of SOX.
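As an illustration of how the two proxies described above are typically estimated, the sketch below fits the normal-accruals and normal-CFO regressions with statsmodels and takes the residuals. This is a minimal reading of the cited models, not the paper's own code; the column names (`ta`, `rev`, `rec`, `ppe`, `cfo`, `assets_lag`) are assumptions, and in practice the regressions are estimated cross-sectionally by industry-year with changes computed within each firm.

```python
import pandas as pd
import statsmodels.formula.api as smf

def aem_proxy(df: pd.DataFrame) -> pd.Series:
    """Modified Jones model (Dechow et al., 1995); residual = discretionary accruals.
    TA/A(t-1) = b0*(1/A(t-1)) + b1*(dREV - dREC)/A(t-1) + b2*PPE/A(t-1) + e
    Assumes rows are one firm's years in order; group by industry-year in practice."""
    d = pd.DataFrame({
        "ta_s": df["ta"] / df["assets_lag"],                              # total accruals, scaled
        "inv_a": 1.0 / df["assets_lag"],
        "drevrec_s": (df["rev"].diff() - df["rec"].diff()) / df["assets_lag"],
        "ppe_s": df["ppe"] / df["assets_lag"],
    }).dropna()
    return smf.ols("ta_s ~ inv_a + drevrec_s + ppe_s", data=d).fit().resid

def rem_proxy(df: pd.DataFrame) -> pd.Series:
    """Normal CFO model (Dechow et al., 1998; Roychowdhury, 2006); residual = abnormal CFO.
    CFO/A(t-1) = b0 + b1*(1/A(t-1)) + b2*Sales/A(t-1) + b3*dSales/A(t-1) + e
    Unusually low residuals are consistent with REM (e.g. sales manipulation)."""
    d = pd.DataFrame({
        "cfo_s": df["cfo"] / df["assets_lag"],
        "inv_a": 1.0 / df["assets_lag"],
        "sales_s": df["rev"] / df["assets_lag"],
        "dsales_s": df["rev"].diff() / df["assets_lag"],
    }).dropna()
    return smf.ols("cfo_s ~ inv_a + sales_s + dsales_s", data=d).fit().resid
```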
Cohen and Zarowin (2010) found that managers engaged in both forms of earnings management in the years in which a seasoned equity offering (SEO) was undertaken, and found a positive correlation between REM and the cost of AEM. This shows that if the cost of AEM is relatively high, a firm tends to engage more frequently in REM, and vice versa (indicating a negative relationship between AEM and REM). Zang (2012) found a positive relationship between REM and the cost of engaging in AEM, and also between AEM and the cost of engaging in REM. This showed that a firm tends to engage in AEM if the cost of engaging in REM is relatively high, and vice versa, so there is a negative relationship between REM and AEM. Compared to prior studies, this research makes several contributions. It shows directly that the more managers engage in REM, the less they engage in AEM to meet earnings targets, and vice versa. Where Zang (2012) employed the cost of engaging in earnings management to establish the relationship between REM and AEM, Cohen and Zarowin (2010) considered specific events, and Cohen et al. (2008) employed a specific regulation (the issuance of SOX), this research examines REM and AEM directly to establish the relationship. It provides evidence that managers employ both REM and AEM sequentially to meet earnings benchmarks, and that this does not depend on costs, specific events, or specific regulatory periods. Additionally, this research and Zang (2012) provide a new point of view, namely that REM and AEM have a sequential nature and act as substitutes for each other, which should be considered when developing a research model that involves both forms of earnings management. It shows that focusing on only one form of earnings management will not fully explain all earnings management activities and would produce indefinite conclusions. The next section provides the relevant theories and literature and develops the hypothesis. It is followed by the research design, the measurement of REM and AEM and the measurement of the control variables. Section 4 presents the analysis, discussion and research findings. The final section presents the conclusions and implications of this research.

Literature Review

Positive Accounting Theory

There are several motivations for managers to engage in earnings management. Positive accounting theory, a theory rooted in agency theory, tries to explain accounting practices, including earnings management practices. Positive accounting theory explains why a particular firm chooses particular accounting methods (Watts & Zimmerman, 1986). It is based on the economic assumptions that human actions are driven by self-interest (opportunistic behaviour) and that individuals will try to increase their own wealth (Deegan & Unerman, 2005). Given this opportunistic behaviour, positive accounting theory predicts that an organisation will establish a system that aligns the self-interests of the firm's managers with the interests of the shareholders. According to positive accounting theory, there are three hypotheses that explain why managers engage in earnings management: the bonus plan hypothesis, the debt-to-equity hypothesis, and the political cost hypothesis. The bonus plan hypothesis states that a firm's bonus plans promote the use of accounting procedures that accelerate revenue recognition from the next period into the current period.
Earnings are usually seen as a measure of a firm's ability to create value for its customers and shareholders, so they are used as a performance measurement for managers, in order to determine their bonuses. Since earnings are used as a performance measurement, ideally they should reflect the true and fair value that has been created by the managers. Hence, managers' discretion to manage earnings should be restricted so that earnings reach a true and fair value. Managers' discretion would not be completely restricted, yet it would be fairly restricted (Watts & Zimmerman, 1986). However, managers will tend to manage earnings to meet a set target if the normal operations of the firm fail to reach it. The objective is clear: they want to earn a specific amount of bonus. Therefore, bonus plans based on earnings encourage managers not only to increase their performance but also to engage in earnings management if the desired earnings cannot be reached. The second hypothesis is the debt-to-equity hypothesis. There are covenants in a loan agreement, agreed between the debtor and the creditor, that have to be met. These covenants usually restrict the firm's (debtor's) activities, so that the firm does not find itself in a condition in which it might not be able to settle its loans. Some covenants are based on specific earnings amounts that a firm should be able to earn within a set period. Lower earnings usually bring a firm closer to its covenant limits. Because of this, the closer a firm is to violating its debt covenants, the more its management will engage in earnings management to increase earnings. Therefore, the debt-to-equity hypothesis states that the higher the debt-to-equity ratio, the more managers engage in earnings management to accelerate future period earnings into current period earnings. The third hypothesis is the political cost hypothesis. The political cost hypothesis states that large companies face higher political costs than smaller companies. In other words, larger companies attract more political attention than smaller companies (Watts & Zimmerman, 1986). The more political attention companies attract, the greater the possibility of facing new laws and regulations that may negatively impact them. Some countries adopt a progressive tax rate, so large companies tend to manage their earnings downwards, for example, to decrease income tax (a political cost). Deegan & Unerman (2006) state that if managers feel that they are under political scrutiny, they tend to adopt accounting methods that decrease reported earnings. This can reduce the perception by the government or the public that they are exploiting other parties. In this context, managers tend to defer current earnings to the next period.

Prior Studies and Hypothesis Development

Earnings management is a method of presenting earnings that aims to maximise the management's utility and/or to increase a firm's market value through the selection of a set of accounting policies or choices by its management (Scott, 2015). Earnings management can also be defined as the ability to increase or decrease reported net income to a desired amount (Copeland, 1968). Healy and Wahlen (1999) describe three relevant aspects of earnings management: judgement in financial reporting and structuring transactions; misleading stakeholders about underlying economic performance; and influencing contractual outcomes that depend on reported accounting numbers.
Managers make choices about the accounting methods and policies that affect the accounting system's outcomes in order to influence shareholders; hence this practice can be considered earnings management. Earnings management is legal and is not fraud, as it remains within the corridor of GAAP. Gunny (2010) classifies earnings management into two categories, i.e. AEM and REM. AEM is done through management's discretion in choosing particular accounting methods to produce the desired earnings. REM is conducted by using management's discretion over the firm's real operations to manage earnings. REM occurs when managers engage in actions that deviate from the normal structure or timing of an operating, investing, and/or financing transaction in an attempt to influence the output of the accounting system. As explained before, AEM is done through management's discretion to choose the particular accounting methods that produce the desired earnings, for example, by changing the accounting methods or estimates used when recording particular transactions. AEM is the form of earnings management that has been studied most often. Fields et al. (2001) and Healy and Wahlen (1999) argue that AEM has largely been the focus of the earnings management literature. However, there are three main constraints on managers who engage in AEM: auditor scrutiny, the firm's accounting flexibility and regulator scrutiny. Becker et al. (1998) argue that large auditing firms such as the Big Four have more experience of (and good reputations for) auditing firms' accounts, so they have the skills and ability to detect earnings management practices. The application of AEM is constrained by a firm's accounting flexibility, as AEM might have been applied in previous periods and would hence reverse in subsequent periods (Barton & Simko, 2002). Furthermore, regulator scrutiny has reduced the use of accruals manipulation. Cohen et al. (2008) found that the level of AEM has been decreasing since the adoption of the Sarbanes-Oxley Act (SOX). This means that the regulator tries to protect shareholders from managers, as insider parties, who may be trying to gain personal benefits by using AEM. In addition, Leuz et al. (2003) found that investor protection has a negative correlation with the level of AEM. This shows that regulators have played an important role in reducing AEM practices in order to protect investors. As managers are constrained from engaging in AEM, the use of REM is increasing (Cohen & Zarowin, 2010). There are two reasons why management prefers to manage earnings through REM. Firstly, AEM is more easily detected by auditors or regulators, through such policies as product pricing, revenue recognition, expense recognition and depreciation methods. Secondly, managing earnings using accruals carries a greater risk of not achieving the desired earnings. Additionally, Gunny (2010) argues that there are at least two reasons for managers to prefer REM over accruals manipulation. Firstly, aggressive accruals management carries a high risk of scrutiny and litigation by the Securities and Exchange Commission (the regulator). Secondly, a firm might have limited flexibility in its accounting policies, as argued by Barton and Simko (2002), since AEM can be constrained by business operations and by accruals management undertaken in previous periods.
REM occurs when managers engage in actions that deviate from the normal structuring or timing of an operating, investment or financing transaction in an attempt to influence the output of the accounting system (Gunny, 2010). Roychowdhury (2006) found that, to avoid losses, firms give sales discounts to boost sales temporarily (sales manipulation), engage in overproduction to report a lower cost of goods sold, and decrease discretionary expenditures. As it is conducted through real operational decisions, REM directly influences a firm's cash flow and can have a negative impact on its future financial performance. Gunny (2005) found that firms using REM experienced a lower return on assets and lower cash flow in subsequent periods. This occurs because managers are too focused on short-term targets, without considering the future consequences of meeting them; consequently, future performance falls below what is expected.

The earnings management literature on emerging economies tends to focus only on AEM. Using firm-level data from nine Asian countries (Hong Kong, India, Indonesia, Korea, Malaysia, Thailand, the Philippines, Singapore, and Taiwan), Shen and Chih (2007) found that firms with good corporate governance tend to engage in AEM less frequently. Similarly, Liu and Lu (2007) found that listed companies in China with a higher level of corporate governance have a lower level of earnings management; they also argue that the agency conflict between controlling and minority shareholders is responsible for a significant portion of earnings management practices in China. Moreover, Doidge et al. (2007) found that companies in less developed countries are reluctant to improve their firm-level corporate governance, as both the adoption of better corporate governance and external financing are expensive. This leaves firms in less developed countries more vulnerable to engaging in earnings management.

Furthermore, although REM appears to be the preferred tool, the existence of AEM cannot be ignored when identifying earnings management; AEM is still a popular tool for managers to manage earnings. Indeed, the previous literature on earnings management primarily focuses on AEM (Fields et al., 2001), implicitly assuming that managers engage only in AEM and ignoring the existence of REM. Fields et al. (2001) explain that identifying only one form of earnings management in a period cannot capture the impact of earnings management as a whole: for example, if managers use REM as a substitute for AEM, research separating these techniques would not be able to produce a definitive conclusion. In fact, managers engage in both forms of earnings management to bring their earnings to the desired amounts.

Prior studies provide evidence that firms make choices (trade-offs) between REM and AEM. Cohen and Zarowin (2010) found a positive correlation between the cost of engaging in AEM and the use of REM around seasoned equity offerings (SEOs): when the cost of engaging in AEM increased, managers tended to engage in REM, which depicts a negative relationship between REM and AEM. Zang (2012) found a positive relationship between REM and the cost of engaging in AEM, and also between AEM and the cost of engaging in REM. This shows that a firm tends to engage in AEM if the cost of engaging in REM is relatively high and vice versa, so there is a negative relationship between REM and AEM.
Zang (2012) also found that real activities manipulation precedes accruals manipulation. Cohen et al. (2008) found that the level of accruals manipulation decreased, whereas the level of real activities manipulation increased, after the issuance of the Sarbanes-Oxley Act (SOX). Badertscher (2011) found that managers engage in AEM in the early years of an overvaluation period, in REM in later years, and finally in non-GAAP earnings management (which could be deemed fraud); he argues that the determinant factor influencing managers' choice of earnings management is the duration of the overvaluation.

However, the above-mentioned studies assume that REM and AEM occur simultaneously and do not consider the sequential nature of the two forms of earnings management (Badertscher, 2011; Barton, 2001; Cohen et al., 2008; Cohen & Zarowin, 2010). These studies focus on identifying the nature of the two forms of earnings management within companies. As explained by prior studies (Badertscher, 2011; Cohen et al., 2008; Cohen & Zarowin, 2010), both forms of earnings management are jointly used to manage earnings. However, being jointly used does not mean that both forms occur simultaneously. Zang (2012) argues that managers conduct AEM after they conduct REM, as REM can only be carried out during the fiscal period. After earning a specific amount of income by conducting REM, managers then engage in AEM at the end of the period to achieve the desired income. This means that the greater the management's efforts to conduct REM, the less effort is needed to conduct AEM, and vice versa. In other words, since the two earnings management practices occur sequentially, a negative relationship exists between them. Zang (2012) explains that REM precedes AEM since AEM can only be used at the end of an accounting period, so that during the accounting period managers engage in REM first, followed by AEM. This makes the degree of utilisation of AEM depend on that of REM; in other words, AEM is employed as the dependent variable. Based on this argument, this study proposes the hypothesis:

HA: Real earnings management negatively influences accruals-based earnings management.

Additionally, this research added certain conditions through dummy variables: the Asian financial crisis, multinational companies (MNCs) versus local companies, and membership of the manufacturing versus the mining industry. Chia et al. (2007) argue that the Asian financial crisis led to uncertain conditions and limited earnings management practices, as there was increased monitoring and scrutiny of companies' activities, resulting in pressure on managers to report more credible financial information, including reported earnings. Local companies usually operate within national boundaries, within a weak institutional framework, especially in Indonesia. Beuselinck et al. (2019) argue that capital market pressures may induce MNCs to engage in earnings management; on the other hand, MNCs are typically audited by Big Four accounting firms with good reputations for auditing and detecting earnings management, and, as mentioned earlier, Shen and Chih (2007) found that good corporate governance can prevent accruals manipulation. Consequently, MNCs may engage less often in earnings management. Furthermore, the manufacturing and mining industries have become key drivers of the Indonesian economy.
Roychowdhury (2006) argues that, due to their nature, manufacturing companies offer more incentives to engage in earnings management than companies in other industries do. Therefore, this research argues that manufacturing companies engage in more earnings management.

Research Method

Sample Selection

This study used accounting data obtained from the Economics and Business Data Center, Faculty of Economics and Business, Gadjah Mada University, Indonesia, with the following criterion: manufacturing and mining companies listed on the Indonesia Stock Exchange (IDX) during the period from 2005 to 2013. In order to obtain robust results, outlying data were excluded: data with a Z score greater than three or less than minus three were considered outliers. Manufacturing and mining companies were employed because it is the nature of the manufacturing industry to offer more incentives to engage in earnings management than other industrial sectors (Roychowdhury, 2006); mining companies were employed as a comparison. After excluding the outliers, this research had 754 firm-years of data.

Independent Variable

REM was employed as the independent variable, as will be explained further in the hypothesis test section. It was proxied using the model developed by Dechow et al. (1998), following Roychowdhury (2006) and Cohen and Zarowin (2010), i.e. abnormal operating cash flow. Abnormal operating cash flow was obtained by subtracting the normal (predicted) operating cash flow from the actual operating cash flow. AEM was treated as the dependent variable, since the degree of utilisation of AEM depends on that of REM during the company's operating period. It was proxied using the modified Jones model (Dechow et al., 1995). This model modifies the original Jones model by adjusting the change in revenues for the change in receivables, thereby eliminating the tendency to treat discretion exercised over credit sales as non-discretionary. Most prior studies used this model as a proxy for accruals-based earnings management, since it is deemed to give robust results.

Control Variables

This study used four control variables to control for effects on earnings management and to address measurement errors in the earnings management proxies that are correlated with firm characteristics (Cohen & Zarowin, 2010; Fields et al., 2001; Gunny, 2010; Zang, 2012). These control variables were the debt-to-equity ratio (DE), company size, return on assets (ROA) and the market-to-book ratio. The DE ratio was used to control for a firm's leverage. Watts and Zimmerman (1986) formulated a leverage hypothesis explaining that a firm with a high DE ratio tends to manage earnings upwards. A firm with a higher debt-to-equity ratio faces the risk that it cannot settle its debts, which raises the incentive to manage earnings to comply with debt covenants and so avoid the possibility of higher interest rates. Since a higher level of earnings management implies larger discretionary accruals, DE was expected to have a positive coefficient in this research. Firm size was the natural logarithm of a company's total assets and was employed to control for the firm's relative size in its industry. Watts and Zimmerman (1986) propose a political cost hypothesis stating that big companies receive more political attention, so bigger companies tend to manage their earnings downwards to minimise political costs; hence, bigger companies have incentives to adopt accounting methods that reduce reported earnings.
Therefore, the size variable was expected to have a positive coefficient; Davidson et al. (2005) found a positive relationship between a firm's size and its use of earnings management. Cohen and Zarowin (2010) and Zang (2012) argue that the effects on earnings management need to be controlled for by including ROA and the market-to-book ratio. The higher and more stable the ROA, the higher the possibility that a firm will be followed by financial analysts and shareholders, which leads to better bonuses for managers. This means that managers tend to engage in earnings management to increase reported earnings, so ROA was expected to have a positive coefficient in this research. Zang (2012) and Cohen and Zarowin (2010) argue that the market-to-book ratio is used to control for a firm's growth, and it is calculated as follows:

Market-to-book ratio = Market value of equity / Book value of equity

Shareholders and financial analysts tend to follow firms with stable growth. Therefore, a firm tends to manage its earnings to avoid volatility, so that it appears to have stable growth. The market-to-book ratio was expected to have a positive coefficient in this research.

Hypothesis Test

This research assumed that managers engaged in REM during the operating period and then engaged in AEM at the end of the accounting period if the earnings produced by REM did not meet the earnings targets. Hence, AEM was employed as the dependent variable and REM as the independent variable. Zang (2012) found that REM preceded AEM, which shows that the level of AEM depends on the level of REM engaged in by managers. The null hypothesis would be rejected if the p-values were less than 0.05 (Bryman & Bell, 2011). Therefore, to investigate the relationship between accruals earnings management and real earnings management, this study developed the following multiple regression equation (reconstructed here from the variables defined above):

AEM_it = b0 + b1 REM_it + b2 DE_it + b3 Size_it + b4 ROA_it + b5 MtB_it + b6 Crisis_t + b7 MNC_i + b8 Man/Min_i + e_it

To proxy the financial crisis variable, this study used a dummy variable valued one for financial crisis years and zero otherwise; the financial crisis years were taken to be 2008 to 2010. The financial crisis variable was expected to have a negative sign, as tighter controls and greater scrutiny of companies' activities, due to the uncertain business environment during the financial crisis, would prevent managers from engaging in earnings management. The MNC variable was proxied by a dummy variable: multinational companies equal one and local companies equal zero. As argued before, MNCs may face tighter monitoring and heavier capital market pressures than local companies, which prevent their managers from engaging in earnings management; hence, the MNC variable was expected to be negative. The Man/Min variable distinguished manufacturing from mining companies, proxied by a dummy variable: manufacturing companies were valued one and mining companies zero. As mentioned earlier, Roychowdhury (2006) argues that the manufacturing industry offers more incentives for engaging in earnings management; therefore, the Man/Min variable was expected to have a positive sign and to be significant in this research.

Results and Discussion

Descriptive Statistics and Correlation Test

Table 1 Panel A depicts the descriptive statistics of the sample, including minimum, maximum, mean, and standard deviation values. In order to obtain robust results, the outlying data were excluded.
Data with Z scores greater than three or less than minus three were considered outliers, resulting in 754 firm-years. Table 1 Panel A shows that the AEM variable had minimum and maximum values of -0.1571 and 0.6368, respectively. Epps and Guthrie (2010) state that positive discretionary accruals tend to be evidence that a firm manages earnings upwards, and negative discretionary accruals tend to be evidence that a firm manages earnings downwards. AEM had a mean of 0.2131 and a standard deviation of 0.1217, which shows that firms tend to manage earnings upwards. By comparison, REM had minimum and maximum values of -0.4836 and 0.4528, respectively, with a mean of -0.0237 and a standard deviation of 0.2133. Zang (2012) explains that higher values of abnormal discretionary accruals and abnormal cash flow indicate more accruals manipulation and more real manipulation, respectively. According to the descriptive statistics, the mean of AEM (0.2131) was higher than the mean of REM (-0.0237); therefore, firms could be considered to engage more frequently in accruals-based earnings management.

This research has one main hypothesis: it predicts a substitutive relationship between real and accruals-based earnings management, which would be identified if firms employ more (less) AEM when they employ less (more) REM during the year. This implies a negative relationship between real and accruals earnings management. To test the hypothesis, this research employed a multiple regression analysis to identify the relationship between accruals and real earnings management. Consistent with the hypothesis, Table 3 shows that the REM variable had a negative coefficient of -0.726, significant at the one per cent level. This means that real earnings management negatively influenced accruals-based earnings management, so HA was supported. The more managers engaged in REM, the less they engaged in AEM, and vice versa. The negative relationship between REM and AEM shows the sequential nature of the two forms of earnings management, where managers engage in REM during the operating period and then engage in AEM at the end of the period if the desired earnings could not be achieved through REM (Zang, 2012). It can be argued that the two forms of earnings management have a substitution effect. In addition, Zang (2012) found that REM preceded AEM, meaning that the level of managers' engagement in AEM depended on the level of their engagement in REM; this depicts AEM as dependent on REM. The value of R squared was 57.3 per cent, which is considered good: 57.3 per cent of the variation in AEM can be explained by the model.

The result for the hypothesis is consistent with Cohen et al. (2008), Cohen and Zarowin (2010), and Zang (2012). Generally, they found that AEM has a negative relationship with REM, although only Zang (2012) considered the sequential nature between them. It can be argued that managers usually engage in both forms of earnings management, not just one of them. REM acts as a substitute for AEM and vice versa, given the negative relationship between them. As Zang (2012) found a negative relationship between the two forms of earnings management, with REM preceding AEM, the two forms act as substitutes for each other (a sequential nature) rather than occurring simultaneously.
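To make the two proxies and the hypothesis test concrete, here is a minimal Python sketch. The paper reports no code, so this is an illustration under stated assumptions: the column names are hypothetical, the proxy regressions are pooled rather than estimated by industry-year as is usual in this literature, and the modified Jones model is simplified to a direct regression on the receivables-adjusted revenue change.

```python
# A minimal sketch (not the authors' code) of the two earnings management
# proxies and the hypothesis regression described above.
import pandas as pd
import statsmodels.api as sm

def abnormal_cfo(df):
    """Dechow et al. (1998) / Roychowdhury (2006)-style normal CFO model,
    pooled here for brevity. The residuals (actual minus fitted normal CFO)
    are the REM proxy."""
    X = pd.DataFrame({
        "inv_assets": 1.0 / df["lag_assets"],          # 1 / A_{t-1}
        "sales": df["sales"] / df["lag_assets"],       # S_t / A_{t-1}
        "d_sales": df["d_sales"] / df["lag_assets"],   # change in S_t / A_{t-1}
    })
    y = df["cfo"] / df["lag_assets"]                   # CFO_t / A_{t-1}
    return sm.OLS(y, sm.add_constant(X)).fit().resid

def discretionary_accruals(df):
    """Modified Jones model (Dechow et al., 1995), simplified: residuals of
    scaled total accruals on 1/A, (dREV - dREC)/A and PPE/A are the AEM
    proxy (discretionary accruals)."""
    X = pd.DataFrame({
        "inv_assets": 1.0 / df["lag_assets"],
        "rev_adj": (df["d_rev"] - df["d_rec"]) / df["lag_assets"],
        "ppe": df["ppe"] / df["lag_assets"],
    })
    y = df["total_accruals"] / df["lag_assets"]
    return sm.OLS(y, sm.add_constant(X)).fit().resid

def test_hypothesis(df):
    """AEM regressed on REM plus the controls and dummies named in the text.
    A negative, significant REM coefficient supports HA."""
    df = df.assign(AEM=discretionary_accruals(df), REM=abnormal_cfo(df))
    rhs = df[["REM", "de_ratio", "size", "roa", "mtb",
              "crisis", "mnc", "man_min"]]
    return sm.OLS(df["AEM"], sm.add_constant(rhs)).fit()
```

With a panel DataFrame containing these (assumed) columns, a negative and significant coefficient on REM in the output of test_hypothesis(df).summary() would correspond to the relationship reported in Table 3.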
On the other hand, prior studies (Barton, 2001; Cohen et al., 2008; Cohen & Zarowin, 2010) assume that accruals and real earnings management occur simultaneously. Nevertheless, this research, along with Zang (2012), strengthens the view that AEM and REM have a sequential nature (substituting for each other) rather than occurring simultaneously.

Table 3 also shows that the debt-to-equity ratio has a positive relationship with AEM. This is consistent with the debt covenant hypothesis, which states that the higher the debt-to-equity ratio, the more managers engage in earnings management to avoid a debt covenant violation that could lead to higher interest rates. Similarly, the size variable, which controls for relative firm size in the industry, shows a positive and significant relationship with AEM. According to the political cost hypothesis in positive accounting theory, the bigger companies get, the more attention they attract, which can increase their political costs; consequently, managers in bigger companies tend to engage in earnings management to manipulate earnings downwards. This finding is consistent with Davidson et al. (2005), who found a positive relationship between a firm's size and its use of earnings management. Regarding the bonus plan hypothesis, ROA and the market-to-book ratio can be considered benchmarks used in formulating managers' bonuses; managers therefore tend to manage their earnings to meet or beat these benchmarks so that they can obtain their desired bonuses. Interestingly, the market-to-book ratio and ROA each has a positive relationship with AEM at a significance level of five per cent. Hence, this finding is consistent with the bonus plan hypothesis.

As expected, Table 3 also shows that the financial crisis dummy variable has a negative coefficient. Due to tighter scrutiny and monitoring, firms tend not to engage in earnings management during a financial crisis. Chia et al. (2007) argue that the Asian financial crisis led to uncertain conditions and limited earnings management practices, as increased monitoring and scrutiny of companies' activities put pressure on managers to report more credible financial information, including reported earnings. This research undertook an additional test of the financial crisis context by running a regression on yearly dummy variables. The results showed a positive and significant coefficient for the year 2007 (at the one per cent significance level) and a negative and significant coefficient for the year 2009 (at the one per cent significance level). This research assumed that the financial crisis lasted from 2008 to 2010. Firms were still engaging in AEM in 2007; however, due to the increased monitoring and scrutiny of their activities during the financial crisis, firms then engaged in less accruals earnings management. Hence, the yearly dummy variables support the result of the main regression analysis.

The MNC variable was not significant, meaning that the type of company (multinational or local) in Indonesia did not affect AEM. This research predicted that MNCs would engage in AEM less often: better governance in multinational companies, compared to local companies, was expected to produce a different effect on the tendency of a company to engage in earnings management. However, the statistical test revealed that the type of company did not affect accruals-based earnings management. This might have been caused by a number of reasons.
Firstly, Perera and Baydoun (2007) argue that Indonesia has a weak institutional framework. Therefore, regardless of whether MNCs have better corporate governance or local companies have weaker corporate governance, the type of company would not affect the tendency of the company to engage in accruals-based earnings management. Secondly, MNCs' subsidiaries might exploit the weak institutional framework in Indonesia to manage earnings. Dyreng et al. (2012) and Beuselinck et al. (2019) found that multinational companies with subsidiaries in countries with weak institutional frameworks tend to manage earnings more in those subsidiaries. This may apply in the Indonesian context, as Indonesia has a weak institutional framework and some firms in this research's sample are subsidiaries of multinational companies.

As predicted, the Man/Min variable had a positive relationship with AEM, significant at the 10 per cent level. This shows that manufacturing companies engage in accruals earnings management more often than mining companies do. As argued by Roychowdhury (2006), the manufacturing sector has more incentives to engage in earnings management than other industrial sectors because of the nature of its operations and production.

Conclusion

This research examines the interactions between real earnings management (REM) and accruals-based earnings management (AEM), seeking evidence that the nature of the two forms of earnings management is sequential. It employs two earnings management models to proxy REM and AEM: the modified Jones model is used to proxy AEM, and the abnormal operating cash flow model developed by Dechow et al. (1998) is used to proxy REM. The sample comes from mining and manufacturing companies listed on the Indonesia Stock Exchange (IDX) during the period from 2005 to 2013, totalling 754 firm-years. To gain broader insight, this research added dummy variables (financial crisis, manufacturing industry, and multinational companies) to capture the behaviour of earnings management in particular situations. The author developed a model to generate a testable hypothesis: REM negatively influences AEM.

After performing the analysis, this research finds some interesting insights. Firstly, REM negatively influences AEM, suggesting that whether managers engage in AEM at the end of an accounting period depends on the success of REM in meeting their earnings targets; this depicts a direct substitutive relationship and the sequential nature of REM and AEM (Zang, 2012). Secondly, the level of AEM decreased during the Asian financial crisis, when firms experienced several constraints that prevented them from engaging in earnings management. Thirdly, the type of company (multinational or local) does not affect the tendency of a firm to engage in accruals-based earnings management, which might be caused by the weak institutional framework in Indonesia. Lastly, firms in the manufacturing industry engage more in accruals-based earnings management; Roychowdhury (2006) argues that the manufacturing industry offers more incentives to engage in earnings management than other industrial sectors do. More importantly, this research supports the three hypotheses of positive accounting theory. This research makes some contributions to the literature and to practice.
This research has found that the direct relationship between AEM and REM is negative: the more managers engage in REM, the less they engage in AEM to meet earnings targets, and vice versa. This negative relationship shows that the two forms of earnings management have a sequential nature, where REM precedes AEM (Zang, 2012). Therefore, future studies should consider using models that include both forms of earnings management when identifying earnings management, as this research, Zang (2012) and Cohen et al. (2008) have shown that managers engage in both REM and AEM to meet earnings targets. Additionally, this research and Zang (2012) provide a new point of view, that REM and AEM have a sequential nature and act as substitutes for each other; researchers developing models that involve both forms of earnings management should take this result into account. It also shows that focusing on only one form of earnings management does not fully explain earnings management activities and would likely lead to inconclusive results.

For the Indonesian regulator, some of these findings are worth considering. All interested parties should be aware that increasing constraints on accounting discretion does not guarantee that managers will not engage in earnings management, as they still have real earnings management as an alternative strategy, one that is difficult to detect although more costly. It is also important for the Indonesian regulator to pay more attention to the manufacturing industry, since this research has shown that it offers more incentives to engage in earnings management.

This research nevertheless has some limitations. Firstly, it is set in an Indonesian context, so different conclusions could arise in a different context. Secondly, it employs only specific industries (manufacturing and mining) as its sample, so the conclusions might not apply to companies in other industries. Lastly, it uses the modified Jones model and the abnormal operating cash flow model as proxies for AEM and REM, respectively, whereas other models could also proxy real and accruals-based earnings management.

For future studies, this research has some implications and suggestions. It would be interesting to consider the relationship between AEM and REM under particular conditions (financial crisis, local companies versus MNCs, across industries) in a broader context than Indonesia alone, such as East Asia, Asia as a whole, or a comparison of Asia with other continents. Future studies should also employ companies from more varied industries so that the conclusions can be generalised across them. In examining earnings management, future studies should not focus only on either REM or AEM but should examine both, and, as explained before, should consider the sequential nature of the two forms of earnings management when developing a research model, as they do not occur simultaneously.
Addition of different levels of zeolite powder to Japanese quail feed and its effects on carcass parameters

The study was carried out at the poultry farm of the Animal Resources Department, College of Agriculture, Tikrit University, for the period from 14/11/2018 to 12/2/2019, and its main aim was to evaluate the effect of adding different levels of zeolite to Japanese quail feed on the primary and secondary carcass parts. In this study 240 birds were used, divided into four treatments with three replicates per treatment; each replicate contained 20 birds at a ratio of one male to three females. The birds were fed a diet containing 20% protein and 2,800 kcal of metabolizable energy. The treatments were as follows:

First treatment (T1): control treatment, without addition of zeolite
Second treatment (T2): addition of 2 g/kg feed of zeolite powder
Third treatment (T3): addition of 3 g/kg feed of zeolite powder
Fourth treatment (T4): addition of 4 g/kg feed of zeolite powder

The results showed that the addition of zeolite had a significant effect (p < 0.05) on live body weight, carcass weight and the main carcass parts (chest, thigh, back, wings and neck) in the treatment with 3 g of zeolite powder compared with the other study treatments.

Introduction

The contamination of feed with fungi and mycotoxins is considered a major problem threatening the development of animal production in countries with weak or poor feed-storage environments (1). Farmers therefore resort to adding various organic and inorganic materials to animal feed to reduce the danger of toxins, and one of the inorganic materials used for this purpose is the mineral zeolite. Zeolites are crystalline minerals whose structure is characterized by a framework of SiO4 tetrahedra, each consisting of a silicon atom surrounded by four oxygen atoms, with the framework charge balanced by cations (sodium, calcium, potassium and barium). This forms an open structure with cavities in the form of channels and wide cages. The cavities are filled with water, and the additional cations are exchangeable. The channels are characterized by their large size, which allows the passage of guest ions; the water of the liquid phase is driven off at temperatures mostly below 400 ºC, and the mineral has a high affinity for rehydration. Zeolites comprise many hydrated silicate minerals that are similar in chemical structure and occur in nature; they are known as aluminum silicates, essentially of sodium and calcium, and contain a high percentage of water. The hardness of zeolite ranges between 3.5 and 5.5, and its specific gravity between 2.0 and 2.4 (2).

Natural zeolite was used for water purification and in building by the ancient Romans about 2,700 years ago. It was described in 1756 AD, as crystals present in the cavities of basalt rock, by the Swedish scientist Baron Axel Fredrik Cronstedt, who gave the mineral the name zeolite, derived from the Greek words zein and lithos, meaning 'boiling stones', because on heating the mineral releases water and appears to boil (3). The hydrated form (an aluminum silicate) formed in environments ranging from the deep ocean to shallow water in desert lakes, in association with volcanic emissions (1). Zeolite is characterized by many chemical and physical properties that make it useful in a range of agricultural, environmental and industrial applications; these include gas adsorption, a high ion-exchange capacity and a combination of absorption and adsorption, and zeolite can also take up ammonia, carbon dioxide and hydrogen sulfide and bind toxins (4).

Diversifying livestock and selecting animals with short generation intervals is considered the best choice for reducing the shortage of protein for humans and for national development. The Japanese quail remains an excellent bird for broadening the base of animal protein in developing countries, increasing meat and egg production of high quality thanks to its fast growth and early sexual maturity (3). This study therefore aimed to evaluate the effects of adding different levels of zeolite powder on the carcass parameters of brown-feathered quail.

Materials and methods

The study was done in the Japanese quail hall of the animal production farm, College of Agriculture, Tikrit University, in the period from 14/11/2018 to 12/2/2019, and its main aim was to examine the effect of adding different levels of zeolite to Japanese quail feed on the primary and secondary carcass parts. In this study, 240 birds were used, divided into four treatments with three replicates per treatment; each replicate contained 20 birds at a ratio of one male to three females. Birds from each treatment (three females and three males) were taken randomly; after recording live weight, they were slaughtered, allowed to bleed for two minutes, scalded at 54 ºC for one minute, defeathered and eviscerated manually, and cut into the main parts: chest, thighs, neck, back and wings.

Nutrition and feed: The birds were fed a ration containing 20% protein and 2,800 kcal of metabolizable energy (5).

Nutrition treatments:
First treatment (T1): control treatment, without addition of zeolite
Second treatment (T2): addition of 2 g/kg feed of zeolite powder
Third treatment (T3): addition of 3 g/kg feed of zeolite powder
Fourth treatment (T4): addition of 4 g/kg feed of zeolite powder

The zeolite powder (zeogreen), of 100% purity, was supplied by the Agriculture Green Zeolite Company.

Body weight (g): measured once weekly using a digital balance accurate to two decimal places.
Carcass weight (g): weighed once, after removal of the inedible parts, using a digital balance accurate to two decimal places. Each carcass was washed with water individually, and the dressing percentage was measured relative to body weight:

Dressing percentage (%) = (carcass weight / live body weight) × 100

After removal of the viscera, each carcass was weighed, and the cut parts of each carcass were weighed individually (6).

Statistical analysis: The experiment followed a completely randomized design, and the data were analyzed using the SAS statistical package (7); means were compared using Duncan's multiple range test (8) at a significance level of 0.05 to identify significant differences between means, according to the mathematical model:

Y_ij = µ + T_i + e_ij

where: Y_ij = the value of observation j in treatment i; µ = the general mean of the parameter; T_i = the effect of treatment i; e_ij = the random error, normally distributed with a mean equal to zero and a variance equal to σ²e.

Results and discussion

The results of the study in Table 1 show a highly significant effect on live body weight and carcass weight (p < 0.05), with the birds receiving different percentages of zeolite in their feed outperforming the control treatment. Body weight showed a significant increase, reaching 222.58 g in treatment three (3 g zeolite), while treatment one (the control) showed the lowest mean, 155.99 g; body weights in treatments two and four (2 and 4 g zeolite) were 196.51 and 190.23 g, respectively. These results agree with the data reported by (9) and (10) in their studies on carcass parameters and the primary and secondary carcass parts of the Japanese quail. These results can be explained by the action of zeolite powder in stimulating the microorganisms present in the small intestine by decreasing the absorption of toxins in the intestine, and by the removal of ions (mainly nickel, lead, carbon and ammonia), which pass into the pores of the zeolite and are excreted with the feces, thereby increasing the efficiency of nutrient utilization (11)(1).

The results in the same table also show significant differences between the treatments with different levels of zeolite addition (2, 3 and 4 g/kg) and the control group in the dressing percentage, an important parameter in the poultry industry that is necessary to study in the Japanese quail in order to evaluate dressing weight as the percentage of carcass weight to total or empty body weight. Treatment three was superior to all the study treatments (Table 1). The elevation in dressing percentage in the addition treatments compared with the control group is due to the higher mean carcass weight of the Japanese quail, resulting from the higher mean body weight compared with the control group, since dressing percentage is positively and strongly correlated with body weight. These results agree with the data reported by (12)(13)(14)(15) in their studies on carcass parameters of the Japanese quail, whereas they disagree with the data reported by (16), whose value for the dressing percentage of the Japanese quail was higher than that found here.

The statistical analysis in Table 1 also shows a significant increase in the weight of the main carcass parts, for the chest part, in the zeolite addition treatment (T3) over the other addition treatments and the control treatment, as also noted by (17)(18).
The results also showed a highly significant increase in the wing, back and thigh parameters, as well as in the liver, in treatment three (3 g zeolite) compared with the other treatments. The cause is the presence of a highly significant positive correlation coefficient between body weight and liver weight, the liver being the main axis of metabolism in the body (15). These results disagree with those reported by (19), who found mean liver weights of the Japanese quail of 2.37 and 2.44 g, respectively; the reason is that the body weight in this treatment was higher than in the other addition treatments. The addition treatments work to improve the digestion coefficient and increase the birds' utilization of nutrients through their role in blocking the receptors present on the surface of mycotoxins, which prevents attachment to the intestinal epithelial cells and forms a non-absorbable complex (1).

The weights of the inedible parts did not show any significant difference in the head weight parameter. This is because this percentage decreases significantly with bird age, owing to the increase in the body size of the Japanese quail with age, which is reflected in increased body weight and carcass weight and gives the inedible parts a low percentage of carcass weight (20).

From the results of the present study we can conclude that the addition of different levels of zeolite powder to the feed of the Japanese quail increases the efficiency of the digestive system and improves the digestion coefficient of the nutrients, thereby increasing the birds' utilization of them and increasing the mean body weight and carcass weight, as well as the weights of the main carcass parts of the Japanese quail.
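As an illustration of the calculations described in the methods above, the following is a minimal Python sketch (not the authors' SAS code). The column names and demo numbers are hypothetical, and Tukey's HSD stands in for Duncan's multiple range test, which is not available in the common Python statistics libraries.

```python
# A minimal sketch of the dressing-percentage calculation and a CRD
# one-way ANOVA with a post-hoc comparison of treatment means.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def dressing_percentage(carcass_weight_g, live_weight_g):
    """Dressing percentage = (carcass weight / live body weight) x 100."""
    return 100.0 * carcass_weight_g / live_weight_g

def analyze_crd(df):
    """One-way ANOVA for the model Y_ij = mu + T_i + e_ij, followed by
    pairwise comparison of treatment means at alpha = 0.05."""
    df = df.assign(dressing=dressing_percentage(df["carcass_g"], df["live_g"]))
    groups = [g["dressing"].values for _, g in df.groupby("treatment")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print(pairwise_tukeyhsd(df["dressing"], df["treatment"], alpha=0.05))

# Demo with made-up numbers, two birds per treatment for brevity:
demo = pd.DataFrame({
    "treatment": ["T1", "T1", "T2", "T2", "T3", "T3", "T4", "T4"],
    "live_g":    [156, 158, 195, 198, 221, 224, 189, 191],
    "carcass_g": [109, 112, 140, 143, 163, 166, 136, 138],
})
analyze_crd(demo)
```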
Factors that favor the occurrence of cough in patients treated with ramipril – A pharmacoepidemiological study

Summary

Background: Dry cough is a common cause for the discontinuation of ramipril treatment. The aim of this pharmacoepidemiological study was to assess the incidence of ramipril-related cough among the Polish population and to characterize patients at risk of experiencing the adverse effect of cough during ramipril treatment.

Material/Methods: This was a prospective observational study involving 10,380 patients treated with ramipril for a period of no longer than 8 weeks, consisting of 3 visits: baseline, first follow-up (after 4-8 weeks) and a second follow-up visit (after 4-8 weeks of cessation of ramipril, conducted only to evaluate coughing patients).

Results: The incidence of ramipril-related cough was 7.1%. Logistic regression analysis identified female sex (OR=1.35), cigarette smoking (OR=2.50), chronic obstructive pulmonary disease (OR=1.70), asthma (OR=1.60) and a previous history of tuberculosis (OR=6.20) as significantly and independently associated with the onset of ramipril-related cough. Coughing subsided within 2-20 days after ramipril was discontinued. In all patients reporting the appearance of cough within the first 5 days after therapy initiation, the adverse effect subsided after therapy discontinuation. If cough appeared within 6-10 days, it subsided after discontinuation in 81.6% of subjects. Cough persisted in 30.4% of those reporting later onset.

Conclusions: 1. Female sex, cigarette smoking, COPD, asthma, and a previous history of tuberculosis increase the risk of ramipril-related cough. 2. The later the cough occurs during treatment, the less often the drug is the causative agent and the less likely the cough is to disappear after discontinuation of ramipril.

Disclosures: All authors: No reported disclosures.

Background

Angiotensin-converting enzyme inhibitors (ACE-I) were first introduced in 1981. Initially indicated only for the treatment of refractory hypertension, they are now widely used in hypertension, as well as to reduce morbidity and mortality in patients with congestive heart failure, myocardial infarction, diabetes mellitus, chronic kidney disease, and atherosclerotic cardiovascular disease [1]. ACE-I can also attenuate cardiac remodeling in different pathological models [2]. Ramipril is an ACE-I, primarily reducing the rate of conversion of angiotensin I to angiotensin II. Inhibition of angiotensin-converting enzyme is also associated with a decline in bradykinin degradation, which is likely to have beneficial effects on the circulation and kidneys [3,4]. The use of ramipril has increased since the publication of the HOPE trial, while the rates of use of other ACE inhibitors have remained constant or declined [5]. Due to its broad range of indications, especially for congestive heart failure treatment, ramipril is the second most widely prescribed ACE-I in Poland [6]. The number of patients on ramipril in Poland is estimated at 1.5 million. It was also the most prescribed pharmaceutical in Estonia and Lithuania in 2009 and was among the top 20 dispensed drugs in Canada in 2010 [7,8]. Annual sales of ramipril capsules in the United States were approximately $898 million USD for the 12 months ending June 2008, according to IMS Health data [9]. Dry, persistent cough is a well-described adverse effect of the ACE-I class of medications [10] and is their most frequently reported adverse drug reaction (ADR) [11,12].
In the general population, a significant and clinically relevant proportion of patients experiencing ACE-I-induced cough are treated with antitussive agents and may be subjected to extensive and unnecessary evaluations, diagnostic tests, and consultations [13,14]. The mechanism of ACE-I-induced cough remains unclear, but likely involves the protussive mediators bradykinin and substance P, and is defined as extrathoracic airway hyper-responsiveness (EAHR) [15]. ACE-I-induced cough has not been demonstrated to be dose-dependent [10]. The use of ACE-Is can trigger the development of cough and can also intensify stimulation of the cough reflex induced by other causes. ACE-I-related cough has been reported more frequently in women treated for heart failure, in patients with respiratory diseases (bronchial asthma, chronic obstructive pulmonary disease) or diabetes, with concurrent use of other drugs (indomethacin, amlodipine, nifedipine or theophylline), and in smokers [10,16-20]. It has been shown that a genetic predisposition, especially in women, may increase the risk of ACE-I-related cough [21].

The incidence of ACE-I-related cough has been reported to be in the range of 5% to 35% [10]. In large observational studies, the incidence of cough in patients treated with ramipril ranged from 3.0% to 24.3% [22,23]. The incidence and prevalence of ramipril-related cough in the Polish population is unknown at present. The onset of cough is a common reason for discontinuation of ACE-I therapy. In the ONTARGET study, 4.2% of ramipril-treated patients experienced cough, and 100% of these patients discontinued its use for that reason [15]. ACE-I-related cough may occur as early as after the first tablet or only after many weeks or months. After discontinuation of ACE-I therapy, cough may persist for a few weeks, but generally no longer than 3 months [10]. The American College of Chest Physicians (ACCP) Evidence-Based Clinical Practice Guidelines advocate that in patients in whom cough resolves after the cessation of ACE-I therapy, a repeat trial of such therapy may be attempted [10]. The only effective therapy for ACE-I-related cough is the cessation of therapy with the agent and substitution with another inhibitor of the renin-angiotensin system. The resolution of cough (usually within 1 to 4 weeks of the cessation of ACE-I use) confirms the diagnosis of ACE-I-related cough [10]. Most guidelines recommend substitution with an angiotensin receptor blocker in cases of troublesome and recurrent cough on ACE-I therapy [24,25]. There are no specific Polish guidelines concerning this issue.

The aim of this pharmacoepidemiological study was to assess the incidence of ramipril-related cough among the Polish population and to characterize the patients particularly vulnerable to the adverse effect of dry cough, together with its risk factors, during ramipril therapy.

Material and Methods

This survey was conducted in 2010 and comprises responses from 517 general practitioners (out of 800 invited) working in primary care, private practice, and specialty clinics throughout Poland. The inclusion criteria for the study were the use of ramipril for no longer than 8 weeks and age 18 years or above. Patients treated with angiotensin II receptor type 1 antagonists (sartans) were excluded.
The patients were instructed (at the baseline visit) to report at the subsequent visit the onset of any adverse drug reactions (ADRs) (hypotension, cough, headache, dizziness, fatigue, nausea, angioedema and other allergic reactions), whether and when they had stopped taking ramipril, and how doing so influenced the ADRs. Patients were not informed that the incidence of cough was the main aim of the study, so there was no chance of a Hawthorne effect [26]. At the baseline visit, the incidence of cough and the time of its appearance and of its disappearance after any discontinuation of ramipril were analyzed retrospectively. The basic study questionnaire consisted of 2 parts; an additional, third part of the survey was conducted only for patients experiencing cough and discontinuing ramipril therapy (Figure 1). The patients did not actively participate in completing the surveys: the surveys were completed by the physicians, who also monitored for renal dysfunction and electrolyte imbalance.

The first part of the form (collected during the baseline visit) included patient demographic data (sex, age, place of residence, education, and labor activity), anthropometric data (weight, height, and waist circumference), 2 blood pressure measurements, the date of initiation of treatment with ramipril, and the indications for the use of ramipril. Current cigarette use and a history of chronic diseases favoring the occurrence of cough (chronic obstructive pulmonary disease, asthma, allergic rhinitis, chronic rhinosinusitis, history of tuberculosis, mitral valve disease, and thoracic aortic aneurysm) were also recorded. We also collected data on any appearance of cough during ramipril treatment before the follow-up visit (including whether it disappeared after ramipril discontinuation, and after how many days), the nature of the cough, accompanying symptoms suggestive of acute respiratory tract infection (fever, rhinitis, muscle aches, bone and joint pain, shortness of breath), and interview data on the occurrence of cough during previous ACE-I treatment.

The second part of the questionnaire was collected during the follow-up visit (after 4-8 weeks) and included data on patient adherence to ramipril therapy, any reasons for discontinuation, and the occurrence of cough and associated symptoms (as at the baseline visit). For patients who stopped taking ramipril before the follow-up, there was an additional question on whether the cough resolved after discontinuation of ramipril (and after how many days). The third part of the survey was conducted only for patients with cough who had discontinued ramipril therapy, during their second follow-up visit, conducted 4-8 weeks after the first follow-up visit (following the established method of diagnosing ACE-I-induced cough). It included only questions about the continuation or resolution of cough after cessation of treatment with ramipril. This questionnaire-based survey did not fulfill the criteria for medical experimentation and thus did not require ethics committee approval.

Calculation of study size

Based on existing published studies, we postulated that ramipril-related cough would occur in approximately 10% of the population. The sample size for a 0.5% error was calculated at 9,740. However, given a possible lack of response from about 20% of responders, the required sample size was estimated at n=12,175 [27].

Data analysis

BMI was calculated on the basis of body weight and height.
According to the widely accepted WHO criteria, overweight was defined as a BMI of at least 25 and lower than 30 kg/m², obesity as a BMI equal to or more than 30 kg/m², and morbid obesity as a BMI of 40 kg/m² or more. Visceral obesity was defined on the basis of the 2005 IDF criteria: a waist circumference for Caucasians of ≥80 cm for adult women and ≥94 cm for adult men [28]. Ramipril-related cough was defined as cough not related to the signs of respiratory tract infection that disappeared after discontinuation of ramipril; patients experiencing cough who did not attend the second follow-up visit were also counted as having ramipril-related cough.

[Figure 1. Flow of patients through the study. Baseline visit: patients included, n=10,380; patients reporting cough, n=869 (accompanied by features of acute infection, n=162; not related to features of acute infection, n=707); discontinuation of ramipril because of cough before the initial visit, n=245 (cough subsided in n=189); continuation of ramipril regardless of cough, n=624. First follow-up visit: patients lost to follow-up, n=225 (14 with cough); patients on ramipril, n=9,612; patients previously reporting cough, n=476; new incidences of cough, n=56 (accompanied by features of acute infection, n=29; not related, n=27); patients who discontinued treatment for ADRs, n=169 (due to cough, n=134, subsided in 78); patients who discontinued treatment by their own decision, n=129; discontinuation of ramipril because of cough at the visit, n=379; continuation of ramipril regardless of cough, n=153. Second follow-up visit (conducted only for patients with reported cough at the first follow-up visit): patients lost, n=56; patients assessed, n=323; cough subsided in n=260, persisted in n=63.]

The results of this analysis are presented as rates or as means with standard deviations. Univariate and multivariate backward stepwise grouped logistic regression analyses were performed, including factors potentially favoring the occurrence of chronic cough, such as smoking. Age-adjusted odds ratios are presented with 95% confidence intervals. All variables were tested for the presence of multicollinearity, which was assessed with the variance inflation factor (VIF) and the condition index [29]. Based on the literature, to ensure the absence of collinearity, the VIF value should not exceed 5, while the condition index should not exceed 30. The goodness of fit of the regression model was assessed with the Wald χ² test. The frequencies of categorical data were compared using the χ² test, and 95% confidence intervals were calculated with continuity correction according to the methods described by Newcombe [30]. Statistical analysis was performed with STATISTICA 8.0 PL software and R software. P<0.05 was considered statistically significant. All tests were 2-sided.

Results

Characteristics of the study group

A total of 10,380 patients treated with ramipril, including 50.8% men and 49.2% women, participated in the study (Table 1). In 23.8% of respondents there was more than 1 registered indication for the use of ramipril (Table 2). The most common indication was hypertension (93.0%), followed by heart failure (21.4%). Diabetic nephropathy was the third most frequent indication (8.1%). Ramipril was prescribed "off label" (outside the registered indications) in 0.5% of patients.

Occurrence of cough

A total of 869 patients (8.3%) (95% CI: 7.9-8.9%) complained of cough, mostly dry (91.8%), at the baseline visit.
Cough occurred on average 13±9 days (range 1 to 60 days) after the initiation of ramipril therapy. After excluding patients whose cough was accompanied by features of acute infection (fever, rhinitis, myalgia), the incidence of cough decreased to 6.8% (95% CI: 6.3-7.3%) (n=707). Of the study participants, 7.5% (n=695) had a previous history of cough related to treatment with an ACE-I other than ramipril. In 52.5% of responders whose cough subsided after cessation of ramipril, the adverse effect had previously occurred during treatment with other ACE-Is. In 4.2% of all study participants with a positive history of cough during the use of another ACE-I, cough did not appear between the initiation of ramipril therapy and the baseline visit.

A total of 10,127 (97.6%) participants attended the follow-up visit (after 4.1±0.5 weeks). A total of 9,612 (94.9%) were still on ramipril therapy, while 298 (3.1%) had discontinued its use. Discontinuation of ramipril was attributed to ADRs in 56.8% (n=169) of these patients, to a negative patient opinion concerning the necessity of the medicine in 41.2% (n=123), and to economic reasons in 2.0% (n=6). After therapy cessation, the cough resolved in 58.2% of affected participants who stopped ramipril therapy. The median time to resolution of cough after discontinuation of ramipril treatment was 5 (interquartile range 4-11) days (range 2 to 20 days). ADRs other than cough were reported in 171 (1.7%) patients on ramipril therapy, including hypotensive episodes, impairment of renal excretory function, and headaches.

Cumulative incidence of ramipril-related cough

Ramipril-related cough not associated with infection, excluding cases with cough persisting 8 weeks after cessation of therapy, was observed in 736 participants (7.1% [95% CI: 6.6-7.6%]) of the study population (7.6% of women and 7.2% of men; ns) up to the end of the observation period, while an episode of cough regardless of cause was observed in 925 participants (8.9% [95% CI: 8.4-9.5%]) (Figure 2). Therapy with ramipril was discontinued in 702 of the 925 patients with cough, and the symptom subsided in 527 of them (75.1%). In all patients reporting the appearance of cough within the first 5 days after therapy initiation, the cough resolved after therapy was discontinued. If the cough appeared within 6-10 days, it subsided after discontinuation in 81.6%, and it persisted in 30.4% of those reporting the appearance of cough later than 10 days after therapy initiation.

Ramipril-related cough occurred significantly more often in patients with chronic diseases conducive to the occurrence of chronic cough: chronic obstructive pulmonary disease, asthma, allergic rhinitis, chronic sinusitis, history of tuberculosis, mitral valve disorder or thoracic aortic aneurysm (10.6% vs. 6.1%, p<0.001). In univariate age-adjusted logistic regression, ramipril-related cough occurred significantly more frequently in patients with hypertension, peptic ulcer disease, asthma, COPD or a prior history of tuberculosis, and in smokers; it occurred less frequently among those suffering from gastro-esophageal reflux disease (GERD) and chronic rhinosinusitis (Table 3). In this study of over 10,000 patients treated with ramipril, age-adjusted logistic regression analysis identified female sex (OR=1.35), cigarette smoking (OR=2.50), chronic obstructive pulmonary disease (OR=1.70), asthma (OR=1.60) and a previous history of tuberculosis (OR=6.20) as significantly and independently associated with the onset of cough not related to acute infection and subsiding after ramipril therapy cessation (Figure 3).
GERD and chronic rhinosinusitis were the only 2 factors associated with a decreased risk of cough in this model. For all variables included in the regression model, the VIF did not exceed the accepted threshold of 5.

Discussion

This study shows that ramipril-related cough occurred in 7.1% of Polish patients on ramipril therapy. Factors such as female sex, cigarette smoking, chronic obstructive pulmonary disease, asthma and a previous history of tuberculosis seem to contribute to an increase in its occurrence. A baseline history of these factors may therefore be helpful in identifying patients particularly at risk of its occurrence. Careful attention to patients with these risk factors may prevent misdiagnosis and improper treatment of this well-known adverse effect. Perhaps in these patients it would be reasonable to substitute the ACE-I with an angiotensin II receptor antagonist, bearing in mind that ACE-I-induced cough is a class-wide adverse effect and may occur with other agents in this class. There is also a 10-fold increased risk of potentially fatal angioedema in patients with a history of ACE-I-related cough [31]. The CARE study evaluated the incidence of cough in a large population of Americans with hypertension (N=11,100) [22]. During an 8-week observational period, dry cough was reported in 3.0% of patients; it is possible that not all incidences of cough were reported to the team of researchers. Much higher rates were reported in a study conducted in India, where a rate of 24.39% was recorded [23]. The incidence of cough in patients treated with ramipril has also been reported in the ONTARGET and PHARAO trials, in 4.2% and 4.8% of participants, respectively [15,32]. The results most similar to those of our study were reported by Lacourciere et al. in 405 Canadian patients observed over 14 weeks and by Hathiala in 1,048 patients observed over 8 weeks, the latter reporting an incidence of 10.0% at the end of the observation period [33,34]. The observed discrepancies may be partially explained by racial differences, co-morbidities, and pharmacotherapies. The CARE study determined that cough appeared most frequently among Caucasian patients, who constituted 77.8% of study participants [22]. This racial differentiation, however, cannot explain the markedly greater incidence of cough in the Polish population reported in this study compared with the Americans in the CARE study. In this study we found a higher incidence of ACE-I-related cough in women (OR=1.35). These results confirm those of another published study supporting the hypothesis that women are more susceptible to developing ACE-I-induced cough [21]. Co-morbid conditions are also factors that may influence variation in the reported incidence of cough. In this study, we did not exclude patients with illnesses predisposing them to chronic cough, who constituted 21.4% of the study participants; in these patients, the incidence of ramipril-related cough was 10.6%. One study identified a group of patients with asthma who exhibited cough during treatment with an ACE-I, and found that the sensitivity of the cough reflex increased during the treatment [16]. As shown, ACE-I therapy sensitizes the cough reflex. It is therefore not surprising that cough is more common among patients with COPD (OR=1.7) or asthma (OR=1.6), or in smokers (OR=2.51).
Although according to some researchers ACE-I-related cough is less common in smokers [8], in our study cigarette smoking increased the risk of cough by more than 2-fold (OR=2.51). Such an association was not found by Singh et al. in their observation of a much smaller population of patients (n=250) [23]. We found that rhinosinusitis was not among the illnesses that independently increased the risk of ACE-I-related cough, probably because this etiology is frequently associated with asthma [35]. The absence of chronic rhinosinusitis among the independent factors increasing the risk of ramipril-related cough in our study may suggest that only a subset of patients with eosinophilic airway inflammation have an increased risk of ramipril-related cough. During ACE-I treatment, cough occurs most frequently in the early period of therapy. In our study, ramipril-related cough occurred on average 13±9 days after initiation of treatment. The Hathiala study reported an even earlier appearance of this adverse drug reaction: during the first week of treatment, cough occurred in 7.1% of 1,048 patients at high risk of cardiovascular disease, and by the end of the 8-week observation the prevalence of cough had increased to 10.0% [34]. The causal relationship between ACE-I and cough is also indicated by the reduced resolution rate when symptom onset occurred later in the observation period. According to our findings, after discontinuation of ramipril treatment, cough resolved in 75.8% of patients if the symptom occurred before the baseline visit, and in 58.8% of those who stopped taking ramipril after the baseline visit. The incidence of cough can also be affected by the use of other drugs. In this study, data on the concomitant treatment of cardiovascular and respiratory diseases were not collected; the impact of polypharmacotherapy on the incidence of cough therefore could not be analyzed, which is among the main limitations of this analysis. In this study, patients who experienced cough and did not attend the second follow-up visit were assumed to have ramipril-related cough. This may have led to some overestimation of the percentage of ramipril-related cough. On the other hand, the first follow-up visit was planned for 4-8 weeks after enrollment, while ACE-I-related cough sometimes starts only after several weeks or months; this might lead to some underestimation of the percentage of ramipril-related cough. Both of these assumptions are limitations of this study.

Conclusions

1. Female sex, cigarette smoking, COPD, asthma and a previous history of tuberculosis appear to increase the risk of ramipril-related cough.
2. The later the cough occurs during treatment, the less often the drug is the cause and the less likely the cough is to resolve after discontinuation of treatment.
Drought characteristics and its elevation dependence in the Qinghai–Tibet plateau during the last half-century

Associated with global warming, drought has destructive influences on agriculture and ecosystems, especially in the fragile Qinghai–Tibet Plateau (QTP). This study investigated spatial–temporal patterns of meteorological drought in the QTP and its surrounding areas and explored the relationship between drought conditions and elevation. Monitoring data from 274 meteorological stations during 1970–2017 were analyzed using Sen's slope method, the Mann–Kendall trend test and rescaled range analysis. Results revealed that, under the wetting trend in the QTP, the Standardized Precipitation Evapotranspiration Index (SPEI) increased by up to 0.012/year in spring. Moreover, severe drought frequency in winter and future drought risk in summer also showed increasing trends. Wetting trends were positively correlated with elevation, with a key point at 4,000 m: the change trend above 4,000 m was about 6.3 times that below 4,000 m in the study area. The difference in drought severity between the SPEI of the QTP and that of its surrounding areas increased from −0.19 in 1970 to 0.38 in 2017 and is projected to keep growing.

Drought is one of the most widespread and costly natural disasters 1 , which can endanger agricultural and animal husbandry production, worsen the ecological environment, and even expose humans to the risk of disease 2,3 . Previous studies have suggested that, under global warming, the percentage of dry areas in the world increased by approximately 1.74% per decade during 1950-2008 4,5 . With an average elevation above 4,000 m and an area of about 2.5 million square kilometers, the Qinghai-Tibet Plateau (QTP) is the source of major rivers in Asia 6-8 . It is extremely vulnerable to global change and easily suffers from drought, and drought here has profound impacts on the neighboring regions. Therefore, a comprehensive understanding of drought characteristics in the QTP is of great importance. Many studies on the spatiotemporal characteristics of drought in the QTP and its surrounding areas have been conducted, mainly concluding that the QTP has become warmer and wetter in the past decades, especially in the vast northwestern QTP 5-7,9-13 . Additionally, Gao et al. 8 analyzed aridity changes using the P/PET ratio over the recent 30 years based on 83 stations, and found that the eastern QTP was becoming drier and that the aridity change pattern was significantly correlated with precipitation, sunshine duration and diurnal temperature range. Liang et al. 14 investigated 74 stations in the QTP during 1980-2014 and found that the drought pattern exhibited obvious inter-decadal variation and that severe drought mainly occurred before the 1990s. Yang et al. 15 forecasted an increasing drought trend in southwest China (including Yunnan Province) but an increasing wetting trend for the QTP based on simulations of Global Climate Models (GCMs) taken from the Coupled Model Intercomparison Project Phase 5 (CMIP5) framework. Other studies have addressed the seasonal drought evolution in the QTP. Some concluded that drought mainly decreased in spring, and that a slight drying trend could be traced in winter 16,17 ; in autumn, extreme drought frequency increased in the eastern QTP but decreased in the northern region.
Wang et al. 18 used the self-calibrating Palmer Drought Severity Index (scPDSI) to investigate drought variation between 1961 and 2009 and revealed that the southern QTP experienced a significant wetting trend while the northern QTP became significantly drier, particularly in spring and autumn. Apparently, differing opinions exist on seasonal drought variations in the QTP, and further investigation is thus needed.

Due to its drastic elevation changes, the QTP is a favorable place to explore the relationship between climate change and elevation 19 . According to previous studies, mountainous areas are more sensitive to climate change than low-altitude areas at the same latitude 20-22 . In recent years, many scientists have focused on elevation-related climate change research, confirming the evidence for elevation-dependent warming 21,23 . Some have revealed that the warming trend displays a slight decrease with elevation above 4,000 m 24 ; Zhang et al. 25 found smaller changes of PET in high-elevation areas at both annual and seasonal scales; and Li et al. 26 concluded that precipitation tends to increase with increasing elevation in summer. However, the elevation dependence of drying or wetting trends over the QTP is not well understood, particularly given the low availability of monitoring data and the complex terrain conditions (low remote-sensing applicability). Investigating drought trends in mountainous regions at different time scales along the elevation gradient is of great significance for thoroughly exploring the drought phenomenon. In this study, monitoring data from 274 meteorological stations over the past 48 years were used to analyze the evolution of wet and dry conditions over the QTP and its surrounding areas. The main objectives of this study were to: (1) assess the spatial distribution and temporal variation of drought, particularly severe drought, in the QTP from 1970 to 2017; (2) explore how drought changes with elevation in the QTP and the possible causes; and (3) discuss the persistence of drought trends.

Materials and methods

Study area and data. The study area is located in southwest China (24.0-40.3°N, 75.1-106.1°E) (Fig. 1). In order to explore the spatial-temporal pattern of drought in the QTP, particularly the relationship between drought and elevation, a 200 km buffer zone around the QTP boundary within the Chinese border was established using ArcGIS 10.2, referred to as "the surrounding areas". The study area is thus composed of two parts, i.e. the QTP and the surrounding areas. It includes Qinghai Province, the Tibet Autonomous Region, part of Gansu Province, northern Yunnan Province, western Sichuan Province, part of the Ningxia Hui Autonomous Region and southern Xinjiang Uygur Autonomous Region (Fig. 1). The average annual temperature of the study region decreases from 22 °C in the southeast to below −7 °C in the northwest. As the warm and humid air mass moving from the Indian Ocean is blocked by huge mountains, average annual precipitation also decreases from 2,597 mm in the southeast to less than 1.9 mm in the northwest. The QTP is the origin of many rivers in Asia, including the Yarlung Zangbo, Nu, Yangtze, Yellow and Lancang Rivers. It also comprises a series of high mountains such as the Kunlun, Qilian, Tanggula and Hengduan mountains. Because of its special geographic location and large-scale topography, the QTP has a strong impact on both regional and global climates.
The meteorological data (i.e. daily precipitation and temperature) covering the 274 stations (Fig. 1) during 1970-2017 were obtained from the Data Center for Resources and Environmental Science, Chinese Academy of Sciences (https://www.resdc.cn/); 115 of the stations are located within the QTP. The dataset has been widely used in many studies 27-30 .

Methods

Standardized precipitation evapotranspiration index. The SPEI, an improvement on the SPI as a drought index, was first proposed by Vicente-Serrano et al. 31 . It has many advantages and has been widely used 17,32,33 . Compared with the SPI, it adds temperature to precipitation and can therefore reveal the effects of global warming on drought 34 . Potential evapotranspiration (PET) is a key part of the SPEI. Different methods have been proposed to estimate PET over the past decades. Some are based on physical mechanisms, such as the FAO-56 Penman-Monteith (PM) method, while others arose from empirical relationships (e.g. the Thornthwaite method 35 , TH) that require fewer parameters. Previous studies show that aerodynamic factors often have impacts in spring and winter in northern China, but the overall estimates of PET (in both temporal evolution and spatial distribution) from the two methods are very comparable 34,36 . Similar conclusions can be found in Vicente-Serrano et al. 31 and Mavromatis 37 . Therefore, we adopted the TH method to calculate PET and the SPEI, considering data availability and the natural features of the QTP. In this study, SPEI-annual and SPEI-seasonal were computed using the SPEI package in the R software 38 . The SPEI-annual and SPEI-seasonal were calculated from the accumulated climatic water balance over a 12-month period (the month in question and the preceding 12 months) and a 3-month period (the month in question and the preceding 3 months), respectively. Among them, we identified specific SPEI values to represent annual and seasonal conditions, i.e. SPEI-annual (the SPEI-12 of December) and SPEI-seasonal (the SPEI-3 of May, August, November and the following February for SPEI-spring, SPEI-summer, SPEI-autumn and SPEI-winter, respectively), linked to an estimation of meteorological drought. The detailed calculation steps of the SPEI can be found in Vicente-Serrano et al. 31 . Table 1 shows the range of SPEI values and the drought grade classification criteria.

Sen's slope. When using the Mann-Kendall trend test (MK-test) to detect a changing trend in a time series, Sen's slope is usually employed to estimate the magnitude of the trend, based on the linear model 39

f(t) = M·t + C

where f(t) is the linear trend function, M is the slope and C is the constant of the equation. The magnitude of the trend is estimated as

M = median[(x_i − x_j)/(t_i − t_j)] for all i > j

where x_i and x_j are the data values at times t_i and t_j (i > j), respectively.

Mann-Kendall test. The MK-test has been widely used to test the significance of Sen's slope for meteorological factors. In this study, the null hypothesis (H_0) is that the SPEI series (x_1, x_2, x_3, …) is a sample of n independent and identically distributed random variables; the alternative hypothesis (H_1) of the two-sided test is that, for all k, j ≤ n with k ≠ j, x_k and x_j are not identically distributed. The test statistic S is computed as

S = Σ_{j=1}^{n−1} Σ_{k=j+1}^{n} Sgn(x_k − x_j)

where Sgn(·) is the sign function:

Sgn(θ) = 1 if θ > 0; 0 if θ = 0; −1 if θ < 0.

S is normally distributed with a mean of zero, and its variance can be expressed as

Var(S) = n(n − 1)(2n + 5)/18.

If n exceeds 10, the standardized test statistic Z is computed as

Z = (S − 1)/√Var(S) if S > 0; Z = 0 if S = 0; Z = (S + 1)/√Var(S) if S < 0.
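A minimal Python sketch of the two estimators above (Sen's slope as the median of pairwise slopes, and the standardized MK statistic Z, ignoring tie corrections for brevity) might look as follows; the toy series is illustrative only.

```python
import numpy as np
from itertools import combinations

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes (x_i - x_j)/(t_i - t_j)."""
    t = np.arange(len(x))
    slopes = [(x[i] - x[j]) / (t[i] - t[j])
              for j, i in combinations(range(len(x)), 2)]
    return float(np.median(slopes))

def mk_z(x):
    """Standardized Mann-Kendall statistic Z (tie correction omitted)."""
    n = len(x)
    s = sum(np.sign(x[i] - x[j]) for j, i in combinations(range(n), 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

spei = np.cumsum(np.random.default_rng(1).normal(0.01, 0.3, 48))  # toy series
print(sens_slope(spei), mk_z(spei))  # |Z| > 1.96 implies p < 0.05 (two-sided)
```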
Rescaled range analysis. The Hurst index is used to predict the persistence of a time series. It can be computed by the method of rescaled range (R/S) analysis 25,26 . The calculation steps are as follows. Firstly, divide the SPEI series (U) of length A into [A/B] subsequences u_i (i = 1, 2, …, [A/B]) of length B. The range of each subsequence is computed as

R = max(Z_u) − min(Z_u)

where Z_u is the cumulative deviation of the subsequence u_i from its mean. Secondly, take logarithms of Hurst's empirical formula (R_N/S_N = ωN^H):

log(R_N/S_N) = H·log(N) + log(ω)

where S_N is the standard deviation of subsequence u_i, R_N/S_N is each subsequence's rescaled range, H is the Hurst index, and ω is a constant. H ranges from 0 to 1 and can be categorized into three intervals 40,41 . If 0 < H < 0.5, the future trend of the SPEI series will reverse the current trend; if 0.5 < H < 1, the SPEI series is likely to keep its current trend; and H = 0.5 indicates that the SPEI series will exhibit a random trend in the future.

Results

Trend analysis of SPEI series. Figure 2 shows the annual and seasonal distributions of the temporal trends, characterized by Z values, at the 274 meteorological stations. Annually, the SPEI-annual exhibited an increasing trend at more than 70% of the stations across the QTP (Fig. 2a), illustrating that most of the QTP was getting wetter, at a mean rate of 0.0073/year (Table 2), while the surrounding areas were getting drier at a rate of −0.0033/year, with SPEI-annual decreasing at 64.8% of stations. A total of 8 stations became significantly drier, mainly distributed in Gansu and Sichuan Provinces, and most stations in Yunnan Province showed slight drying trends. The four seasons also experienced wetting trends in the QTP, with the most significant trend of 0.0114/year in spring (Table 2, p < 0.05), when 90 stations showed an increasing trend, 21 of them significantly, while 15 drying stations were concentrated in a small part of Gansu and Sichuan Provinces (Fig. 2b). In summer, autumn and winter, more than 52% of the QTP stations showed an increasing SPEI-seasonal, but approximately 65% of the surrounding stations showed a decreasing SPEI-seasonal, indicating a drying trend contrary to that of the QTP (Fig. 2c-e). Overall, the QTP got wetter, particularly in spring, during the past half century, while the surrounding areas (the southeastern part in particular) got significantly drier. The QTP had been drier than the surrounding areas in the early period but became wetter after 1994, and the difference has kept growing since then (Fig. 3).

Temporal variation of severe drought. Figure 4 shows the annual and seasonal frequencies of severe drought events over the last half century. In general, the annual severe drought frequency in the QTP decreased while that in the surrounding areas exhibited an increasing trend. Before 1997, the difference in severe drought frequency between the QTP and its surrounding areas was not obvious (fluctuating between 0 and 6.2%), while over the last two decades large differences have appeared, with severe drought frequency in the surrounding areas increasing from 1.1 to 13.9% while that in the QTP decreased, ranging between 0.4 and 5.1% (Fig. 4a). This indicates that the surrounding areas have been more prone to severe drought in recent years. Among the four seasons, the frequency of severe drought decreased in spring and summer but increased in winter in the QTP (Fig. 4e). In the surrounding areas, however, spring, summer and winter experienced increases in severe drought frequency, and autumn showed no obvious variation.
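The rescaled range procedure described above reduces to a short routine. The sketch below is a simplified estimator (dyadic subseries lengths, least-squares fit of log(R/S) against log N) rather than the exact variant used in this study.

```python
import numpy as np

def hurst_rs(x, min_len=8):
    """Estimate the Hurst index H from the slope of log(R/S) vs. log(N)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths, rs_means = [], []
    size = min_len
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            z = np.cumsum(chunk - chunk.mean())  # cumulative deviation Z_u
            r = z.max() - z.min()                # range R of the deviation
            s = chunk.std(ddof=0)                # standard deviation S
            if s > 0:
                rs.append(r / s)
        if rs:
            lengths.append(size)
            rs_means.append(np.mean(rs))
        size *= 2
    h, _ = np.polyfit(np.log(lengths), np.log(rs_means), 1)
    return h

print(hurst_rs(np.random.default_rng(2).normal(size=480)))
# ~0.5 for white noise; H > 0.5 indicates persistence, H < 0.5 reversal
```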
Elevation dependence of SPEI trend. The relationship between the Sen's slope of the SPEI series at the 274 meteorological stations and elevation was analyzed in Fig. 5 and Table 3. The significance level adopted here was p < 0.05. Apparently, stations in the high-elevation ranges showed more rapidly increasing SPEI trends than those at lower elevations. For the entire study region, SPEI trends at all time scales increased with elevation and all passed the significance test, indicating a wetter trend at higher elevations. Specifically, the trend was positively correlated with elevation below 2,000 m and passed the significance test except in autumn. Trends at elevations between 2,000 m and 4,000 m were similar to those below 2,000 m, with the annual, spring and summer trends passing the significance test. The most rapid increase occurred above 4,000 m and passed the significance test except in autumn and winter. Additionally, the change magnitudes of the SPEI trends with increasing elevation above 4,000 m were most robust at the annual scale and in summer (Fig. 5a-c), being about 6.3 and 8.5 times those of the entire study region, respectively. To further explore the reasons, we analyzed the trends of the three meteorological factors (temperature, precipitation and PET) used to compute the SPEI. On the annual basis, for the elevation range above 4,000 m, the trends of temperature (T) and PET were negatively, and that of precipitation (P) significantly positively, correlated with elevation (Table 4), producing a strong positive correlation between the SPEI trend and elevation. For the other two elevation ranges (2,000-4,000 m and below 2,000 m), the trends of all three meteorological parameters were positively correlated with elevation, resulting in a weakly positive relationship between the SPEI trend and elevation. At the seasonal scale, similarly to the annual scale, faster wetting trends were detected in spring and summer for elevations above 4,000 m. In autumn, although the T and PET trends were negatively and the P trend positively correlated with elevation, the correlation coefficients were smaller, resulting in no obvious wetting trend with elevation. In winter, the changes of the P and PET trends with elevation were opposite to those in the other seasons, leading to a stable SPEI trend. In general, the negative changes of the T and PET trends and the positive change of the P trend may contribute to the rapidly wetter conditions above 4,000 m. This was better confirmed at the annual scale and in spring and summer, when the phenomenon was more obvious.
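Conceptually, this analysis amounts to regressing per-station Sen's slopes against station elevation and contrasting mean trends above and below 4,000 m. A sketch with synthetic station data (purely illustrative, not the study's values) follows.

```python
import numpy as np
from scipy import stats

# Synthetic station table: Sen's slope of SPEI-annual vs. station elevation
rng = np.random.default_rng(3)
elev = rng.uniform(500, 5200, 274)                  # elevation in m
slope = 2e-6 * elev + rng.normal(0, 0.004, 274)     # SPEI trend per year (toy)

res = stats.linregress(elev, slope)
print(f"trend vs. elevation: {res.slope:.2e} per m, p = {res.pvalue:.3f}")

# Contrast of mean SPEI trends above vs. below 4,000 m; for the real data
# the paper reports a ratio of about 6.3 at the annual scale.
above, below = slope[elev > 4000], slope[elev <= 4000]
print(above.mean() / below.mean())
```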
Future persistence of drought. Based on the variation of the SPEI in the QTP and its surrounding areas over the last half century, R/S analysis was performed to evaluate the long-term correlation of the time series 42 . Figure 6 shows the results of the R/S analysis in different seasons. In general, the Hurst indices of the annual SPEI in the QTP and the surrounding areas were 0.53 and 0.69, respectively, indicating that drought may maintain its current trends, i.e. the QTP getting wetter and the surrounding areas getting drier in the future (Fig. 6a). The persistence of the SPEI series in the study region showed clear seasonal differences. In the QTP, only summer (H = 0.39) exhibited trends predicted to reverse towards drier conditions in the future, while the other three seasons would likely maintain their current trends (H = 0.78, 0.61 and 0.72, respectively), indicating a wetter climate. In the surrounding areas, the existing drying trend would continue in summer (H = 0.69) and autumn (H = 0.60), winter (H = 0.56) would keep its current wetting trend, and spring (H = 0.41) would reverse the current trend towards a drier climate. Relevant stakeholders should therefore pay attention to preventing the potential damage of drought events in summer.

Discussion

Rapid wetting trends at high-elevation regions. Elevation is of great importance when analyzing spatial climate change in mountainous regions, particularly in the QTP. This study explored the elevation dependence of drought, as well as its possible causes, in the QTP and the surrounding areas from 1970 to 2017. We showed that wetting trends were positively correlated with elevation (p < 0.05), with the change trend above 4,000 m being 6.3 times that below; the most pronounced difference, in summer, was as high as 8.5 times. In terms of meteorological parameters, negative changes of the temperature and PET trends and a positive change of the precipitation trend were detected above 4,000 m, particularly at the annual scale and in spring and summer. All of these have caused the rapidly wetter conditions in highland regions. Together with global warming, under which glaciers in the QTP are rapidly shrinking 43 , more streamflow is being yielded in the region 25 . We concluded that the surrounding areas easily suffer from meteorological drought; however, this might be compensated at low altitudes by additional streamflow from upstream glacier melting. Glaciers are a uniquely drought-resilient source of water and may play an important but underappreciated role in protecting downstream populations from the worst effects of droughts 44 . Glacial meltwater is also a source recharging river headwaters and downstream runoff in the QTP 45,46 . Previous studies showed that runoff from the headwater areas of the QTP will rise in future decades 47,48 , while downstream precipitation is decreasing in the surrounding areas. This leads to a greater contribution and influence of upstream runoff changes on the downstream areas; for example, 70% of the runoff of the Nujiang River in Yunnan Province comes from the upper reaches 49 . Affected by upstream inflow, the scope and severity of hydrological drought in downstream areas are weaker than those of meteorological drought 50,51 . Investigating the contribution of glacier melting to the lower reaches of rivers is therefore necessary in the future, as it can provide solid assistance for downstream drought adaptation. It is worth noting that, although the QTP has become wetter, severe drought frequency in winter has increased, indicating that winter is more prone to severe droughts. This could not only reduce soil moisture, affecting the growth of overwintering crops and the emergence of spring-sown crops, but also endanger drinking water for humans and livestock. The more violent fluctuations of severe drought events in the surrounding areas could be an omen of flash drought, which may have devastating impacts on crop yields and water supply and further trouble people, as responses to it are less effective 52-54 . The future risk of flash drought in this region therefore deserves more attention from local authorities.

Consistency of different drought indices. The selection of duration, indices and purposes, as well as the quality of the data sources, all affect the results.
Previously, many indices have been used to identify drought trends, frequency and severity in the QTP, such as the Palmer Drought Severity Index (PDSI) 55 , the Standardized Precipitation Index (SPI) 56 and the Temperature Vegetation Dryness Index (TVDI) 57 . Among them, the PDSI incorporates many climatic parameters (e.g. prior precipitation, moisture supply, runoff, evaporation demand) but is only applicable to mid- and long-term droughts due to its strong lagged autocorrelation 58,59 . The SPI, on the other hand, involves only precipitation and shows high uncertainty in describing drought in summer and winter; whereas the SPEI simultaneously considers precipitation and evapotranspiration, and thus can accurately capture the effects of drought against the background of global warming. To validate the findings of this study, we further compared them with previous results obtained using other drought indices and data sources in the QTP (Table 5). Our results confirmed the wetting trend in the QTP, most significant in spring, reported by previous studies 8,16,17,60 . Meanwhile, the use of different methods to calculate the PET of the SPEI generated similar results, all indicating that the eastern QTP is becoming drier 14,17,32 . However, this spatial pattern conflicted with Yu et al. 61 and Wang et al. 18 , who reported a wetting trend in the eastern QTP and a significant drying trend in the north in spring and autumn. Differences in the number of stations, the length of the study period and slightly different study regions may contribute to these discrepancies, demonstrating the importance of adequate high-quality data and careful selection of the study area when investigating the spatial-temporal changes of drought in the QTP.

Conclusions

In this study, we analyzed the SPEI from 274 meteorological stations in the QTP and its surrounding areas during 1970-2017 and draw the following conclusions. Firstly, drought characteristics differ greatly between the QTP and its surrounding areas: a wetting trend existed in the QTP, with spring getting wetter most rapidly at a rate of 0.0114/year, while the surrounding areas showed a drying trend, especially in Yunnan Province; the difference between the SPEI of the QTP and that of its surrounding areas increased from −0.19 in 1970 to 0.38 in 2017. Secondly, despite the wetter climate in the QTP, the severe drought frequency in winter has substantially increased, indicating that winter is more prone to severe drought. Thirdly, the SPEI trend exhibited an elevation dependence, generally increasing with elevation; the change trend with elevation above 4,000 m was about 6.3 times that below 4,000 m, mainly caused by the decreasing temperature and PET trends and the increasing precipitation trend. Lastly, in the future, the QTP and its surrounding areas are expected to continue to become wetter and drier, respectively, and the occurrence of future drought is most likely to increase in summer. These findings provide a basis for related research, improve our understanding of the responses of dry and wet conditions to climate change across the QTP, and provide early warning for regional drought.
Sequence squeeze: an open contest for sequence compression

Next-generation sequencing machines produce large quantities of data which are becoming increasingly difficult to move between collaborating organisations or even store within a single organisation. Compressing the data to assist with this is vital, but existing techniques do not perform as well as might be expected. The need for a new compression technique was identified by the Pistoia Alliance, who commissioned an open innovation contest to find one. The dynamic and interactive nature of the contest led to some novel algorithms and a high level of competition between participants.

Background

In October 2011 the Pistoia Alliance [1] announced a contest to source a new compression technique for the management of data generated by next-generation sequencing machines. The volume of sequencing data produced is growing rapidly [2] and is putting pressure on existing techniques for storage and data transfer. New techniques are available which use reference-based compression to significantly improve ratios, such as the European Bioinformatics Institute's CRAM algorithm [3], but all such algorithms available at the time of the contest were lossy (i.e., they discard information that the algorithm considers unimportant). The Pistoia Alliance was concerned with finding a technique that was lossless, that is, able to exactly reproduce the input data upon decompression without error or omission. This desire for high-quality compression was driven by the demands of distributed science and the need to send data files from the sequencing team to a remote analysis team. A primary goal of the contest was to ensure that the wider community would benefit from the discoveries made. All entries were required to be submitted under an open-source licence which would permit unrestricted use by anyone, regardless of whether they worked for commercial or non-profit organisations. At the end of the contest the source code for all entries was made available via links from the contest website [4]. The contest featured a dynamic leaderboard on its webpage which used cloud computing technology to automatically assess and score every entry in real time as soon as it was submitted. When each assessment was complete, an email was sent to the entrant and the leaderboard was updated to reflect their performance.

Main text

The automated judging mechanism behind the dynamic leaderboard was the key technical feature of the contest. Entrants were asked to submit their compression and decompression code in an Amazon Web Services (AWS) S3 bucket whose contents conformed to the format specified on the contest website. The bucket had to contain all dependencies and external data that the entry required. Submission was via a web-based form which recorded the entrant's details and a reference to their entry's location in AWS S3. A single AWS instance was kept running to monitor the database for new entries at five-minute intervals. When a new entry was detected, a new AWS instance would be started to judge it. The judging instances were discarded after each use in order to minimize the risk of cross-contamination between judging cycles. The single-use approach also allowed multiple entries to be judged in parallel. Each judging instance contained a simple script which controlled the judging process, sketched after the list below. It operated as follows:

1. Download the entry
2. Set up the contest data (a random extract from the 1000 Genomes Project [5])
3. Secure the firewall
4. Run the entry in compression mode
5. Measure CPU and memory usage
6. Assess the compression ratio
7. Run the entry in decompression mode
8. Check that the total combined output files contain exactly the same information (header, sequence, and quality lines) as the input files
9. Update the results database
10. Email the results
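A simplified sketch of steps 4-8 of this loop is given below. The compress/decompress script names and command-line conventions are assumptions rather than the contest's actual interface, and the byte-level hash comparison shown is stricter than the contest's record-level check of header, sequence and quality lines.

```python
import hashlib
import os
import subprocess
import time

def judge(entry_dir, fastq_in, workdir):
    """Run one entry round-trip and collect ratio, timings and losslessness."""
    cmp_path = os.path.join(workdir, "out.cmp")
    rt_path = os.path.join(workdir, "roundtrip.fastq")

    t0 = time.time()  # step 4: run the entry in compression mode
    subprocess.run([os.path.join(entry_dir, "compress"), fastq_in, cmp_path],
                   check=True)
    t_compress = time.time() - t0

    ratio = os.path.getsize(cmp_path) / os.path.getsize(fastq_in)  # step 6

    t0 = time.time()  # step 7: run the entry in decompression mode
    subprocess.run([os.path.join(entry_dir, "decompress"), cmp_path, rt_path],
                   check=True)
    t_decompress = time.time() - t0

    def digest(path):  # streaming hash so large FASTQ files fit in memory
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    lossless = digest(fastq_in) == digest(rt_path)  # step 8 (stricter form)
    return {"ratio": ratio, "t_compress": t_compress,
            "t_decompress": t_decompress, "lossless": lossless}
```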
Discussion

The real-time judging and dynamic leaderboard had a clear motivational effect on entrants, as they were able to see immediately how their entries compared with their peers. In many cases this led to entrants submitting multiple entries as they attempted to regain pole position, thus encouraging further innovation and development of their ideas in a bid to stay ahead of the competition. A veritable flurry of activity occurred in the closing week of the contest, when the most enthusiastic entrants were submitting up to three new attempts each per day. Interestingly, by comparison, none of the entrants who waited until the last minute to submit their single attempts ended up further than halfway up the final leaderboard. Entries were ranked in a number of categories without an overall score. The aim of the contest was not to create a solution that came top of any one category, but to create one that performed well all-round. This required the participation of a human judging panel in order to assess, in their professional opinion, which entry had contributed most to progress in the field, as well as examining the source code and concept to predict suitability for production-scale deployment. The overall winner of the contest was announced in April 2012, and a selection of entries is shown in Table 1. James Bonfield (Wellcome Trust Sanger Institute, UK) produced a technique [6] which relied on the compression of BAM alignment files rather than the original FASTQ data. The use of FASTQ had been mandated at the outset of the contest to remove any problems in comparing performance between vastly differing input formats. To obtain the BAM files, Bonfield first aligned the FASTQ against a reference human genome that had been bundled with his entry. This semi-reference-based approach led to good overall performance in most of the contest's categories (memory usage, speed, and ratio) whilst maintaining total data integrity without any round-trip loss. It was notable that so many entrants achieved full lossless compression that all those that did not could be safely removed from the running at the start of the final judging process without negatively impacting the remaining pool of ideas. The reference-based approach was not mandated by the contest, but was a common feature amongst high-ranking entrants including Bonfield and Matt Mahoney (Dell Inc.) [7]. However, their techniques do not work at all in the absence of a reference genome. Reference-based approaches were not actively promoted because the organizers originally wished to see a solution that would work regardless of the source of the sequence. In the end, entries compressing the sequence data in isolation did not fare so well. The baseline entries using gzip and bzip2 achieved a consistently high placement in all categories. The organizers never revealed the exact format of the test data header lines (the only customizable part of the FASTQ specification), and thus no entries could have been over-tuned to just one format. This helped make the entries portable and robust when faced with unexpected header line formats.

Bonfield did not actually have one winning entry; rather, he had a set of related entries that populated most of the top positions in each category of the contest. This reflected a key outcome: a one-size-fits-all approach is simply not appropriate in the compression of sequence data. Some organisations may need faster compression times (for quick storage of large volumes), some might want faster decompression (for later review of the data), whereas others might need better compression ratios (for regular network transfer). [Table 1 caption: The results from running bzip2 are shown against the winning entries in each category of the contest. Full results from all entries, including links to their source code, are available on the Sequence Squeeze website [4]. Compression ratios are the ratio of compressed file size to original file size (smaller is better). Times are in clock-face seconds. Memory usage is peak usage in kilobytes. Entries with less than 100% round-trip accuracy are excluded.] The contest demonstrated that none of the algorithms could deliver on all fronts: variations or configurations could improve performance in one single category, but never more.

Conclusion

The contest attracted in excess of 100 entries, but from a field of fewer than 20 entrants. The leaderboard clearly encouraged entrants to make repeated attempts to innovate and climb above their peers in the table of results. Using contests to drive innovation has been done before (e.g., Assemblathon [8]), but the dynamic leaderboard feature of Sequence Squeeze is clearly very useful, as it gives transparency and immediacy to a competitive process which could otherwise be opaque and secretive. However, in the case of Sequence Squeeze, the lack of clarity on objective criteria for the overall winner, as opposed to the subjective opinion of the judges, is an area that would need to be addressed. The end result of the contest was a set of brand-new compression algorithms for next-generation sequencing data, all of which are fully open-source and available for the community to use and build upon with their own ideas. This open-source requirement laid down by the Pistoia Alliance ensured that the whole community would benefit from the open innovation that it was promoting via the contest, and that the data compression lessons learnt in the process could be shared with everyone.

Abbreviations: AWS: Amazon Web Services; S3: Simple Storage Service.
Free radical scavenging activity and cytotoxicity study of fermented oats (Avena sativa)

Veena Sunderam1, Sathak Sameer Shaik Mohammed2, Yasasve Madhavan2, Manojj Dhinakaran2, Shobana Sampath2, Nirmala Patteswaran2, Lakshmi Thangavelu3, Ansel Vishal Lawrence*2

1 Centre for Nano Science and Technology, A. C. Tech Campus, Anna University, Chennai-600025, Tamil Nadu, India
2 Department of Biotechnology, Sree Sastha Institute of Engineering and Technology (Affiliated to Anna University), Chennai-600123, Tamil Nadu, India
3 Department of Pharmacology, Saveetha Dental College and Hospitals, SIMATS, Saveetha University, Chennai-600077, Tamil Nadu, India

INTRODUCTION

Oats have been recognized as a salubrious cereal containing high levels of soluble fibre (beta-glucan), protein, lipids, vitamins, antioxidants, phenolic compounds, and minerals. As a functional food, oats have physiological benefits in reducing hyperglycaemia, hyperinsulinemia, hypercholesterolemia, hypertension, and cancer (Adom et al., 2005). Phenolic acids, beta-glucan, tocopherols, avenanthramides, etc., contribute to the antioxidant activity of oats (Emmons et al., 1999). All these phenolic compounds possess potential health-promoting properties because of their membrane-modulating effects. Beta-glucans, a soluble dietary fibre present in oats, also exhibit antioxidant capacity against free radicals (Sridevi et al., 2010). Anticancer activity is the effect of natural, synthetic, or biological agents that suppress and prevent carcinogenic progression. Several synthetic agents and plant-derived chemotherapeutic drugs are being used in the treatment of cancer (Sunderam et al., 2019). Oats contain more than 20 unique polyphenols, with avenanthramides exhibiting anti-inflammatory and anti-proliferative activity that inhibits the progression of cancer (Meydani, 2009). A primary component of oats is a class of polysaccharides identified as beta-D-glucan, which produces immune responses by activating monocytes/macrophages (Daou and Zhang, 2012). The antitumor and anticancer effect of beta-glucan helps in the adaptation of the immune cells and other components of the innate immune system (Hong et al., 2004). The antitumor killing mechanism of beta-glucan is mainly anchored by neutrophils primed with beta-glucan (Haas et al., 2009). The current study assesses the antioxidant activity of Avena sativa in fermented (in the presence of Lactobacillus acidophilus) and non-fermented samples. In addition, anticancer activity was assessed using a colon cancer cell line (HT29).

Sample Collection

The oats were purchased from stores and cleaned to remove impurities. They were ground to a fine powder and preserved in a sealed container maintained at room temperature. One gram of finely powdered oats was taken with 50 mL of water (a proportion of 1:50) and autoclaved for 45 minutes. The sample was stored at 4°C for further use. For the preparation of fermented oats, 100 µL of Lactobacillus acidophilus was added; the conical flask was plugged with cotton and left undisturbed for 72 hours at room temperature. Finally, both sample extracts were filtered, and the supernatants were collected in separate beakers.

2,2-Diphenyl-1-picrylhydrazyl (DPPH) Assay

The radical scavenging activity of the aqueous extracts of oats was measured using the standard procedure (Ye et al., 2013). The stock solution of the oats sample was prepared using dimethyl sulfoxide (DMSO) at a concentration of 1 mg/mL.
Reaction mixtures were prepared at different concentrations (50, 100, 150, 200, and 250 µg/mL), and about 3 mL of a 0.004% methanolic solution of DPPH was added to all the test tubes. The absorbance was measured at 515 nm after 30 minutes of dark incubation against the blank (DPPH + methanol), and ascorbic acid was used as the standard. The reduction of the DPPH radical was determined by the decrease in its absorbance at 515 nm. The radical scavenging activity (Inhibition %) was calculated using the formula: Inhibition % = [(Ac − As)/Ac] × 100, where Ac is the absorbance of the control and As is the absorbance of the sample. The radical scavenging activity of the samples was expressed as the IC50, i.e. the concentration of the sample required to inhibit 50% of the DPPH concentration.

Anti-Cancer Activity (MTT Assay)

The colon cancer cell line (HT29) was plated in 96-well plates at a concentration of 1×10⁴ cells/well in Dulbecco's Modified Eagle's Medium (DMEM) containing 10% fetal bovine serum (FBS). The cells were maintained in a CO₂ incubator at 37°C (5% CO₂, 95% air, and 100% relative humidity). The cells were washed with 200 µL of 1X phosphate-buffered saline (PBS) and then treated with various test concentrations of the compound in serum-free media and incubated for 24 hours. The medium was aspirated from the cells at the end of the treatment period. 0.5 mg/mL of 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) in 1X PBS was added to each well and incubated at 37°C for 4 hours. After the incubation period, the medium containing MTT was discarded from the cells, which were then washed with 200 µL of PBS. The formed crystals were dissolved in 100 µL of DMSO and thoroughly mixed; the formazan dye turns a purple-blue color. The absorbance was measured at 570 nm using a microplate reader (Meerloo et al., 2013). The cytotoxic activity (Inhibition %) was calculated using the formula: Inhibition % = [(Ac − As)/Ac] × 100.

DPPH Assay

This assay is used to evaluate the free radical scavenging activity of samples based on the drop in the concentration of DPPH, a stable free radical (Benzie and Strain, 1999). The results obtained show that the free radical scavenging activity gradually increased with the concentration of the test samples. Fermented oats exhibited greater antioxidant activity than non-fermented oats; a comparison of antioxidant activity between fermented and non-fermented oats resulted in a p-value > 0.05. The IC50 values of fermented and non-fermented oats were 201.03 and 236.46 µg/mL, respectively, i.e. the concentration giving 50% scavenging activity. Lactobacillus acidophilus and yeast improve the quality of fermented products (Ak and Gülçin, 2008). The fermented product exhibited 55.71% radical scavenging activity, whereas the control sample (the non-fermented cereal product) recorded a scavenging activity of 40.83% (Figure 1 and Table 1). A number of studies show that an oat-containing diet enhances antioxidant capacity. This is due to the presence of bioactive components like vitamin E, phytic acid, flavonoids, phenolic compounds, sterols, and avenanthramides; these antioxidant compounds are concentrated in the periphery of the kernel (Bajpai and Chaudhary, 2015). A study reported that four beta-glucan hydrocolloids isolated from oats exhibited a significant amount of antioxidant activity as determined by the DPPH method (Hastings and Kenealey, 2017).
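The inhibition and IC50 calculations above reduce to simple arithmetic. The sketch below uses illustrative absorbance values rather than the study's raw data and estimates IC50 by linear interpolation between the concentrations bracketing 50% inhibition.

```python
import numpy as np

def inhibition_pct(a_control, a_sample):
    """Radical scavenging activity: Inhibition % = [(Ac - As)/Ac] x 100."""
    return (a_control - a_sample) / a_control * 100.0

def ic50(concentrations, inhibitions):
    """IC50 by linear interpolation; assumes inhibition rises with dose."""
    return float(np.interp(50.0, inhibitions, concentrations))

conc = np.array([50, 100, 150, 200, 250])            # ug/mL, as in the assay
a_control = 0.80                                     # illustrative value
a_sample = np.array([0.66, 0.58, 0.50, 0.40, 0.33])  # illustrative values
inh = inhibition_pct(a_control, a_sample)
print(inh)               # inhibition rising with concentration
print(ic50(conc, inh))   # concentration giving 50% inhibition
```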
In vitro Cytotoxicity Activity

The anticancer activity of the samples was determined using the MTT assay against the colon cancer cell line (HT29). The mitochondrial activity of living cells is determined based on the conversion of the tetrazolium salt MTT into formazan crystals. As the concentration of the test sample increased, cell viability decreased, and fermented oats showed lower cell viability than non-fermented oats. The IC50 values of fermented and non-fermented oats were 64.35 and 88.41 µg/mL, respectively, i.e. the concentration at which cell viability was reduced by 50% (Figure 2 and Table 2). Avenanthramides are bioactive compounds found exclusively in oats and have shown anticancer properties against breast cancer cell lines (MDA-MB-231), as estimated by MTT colourimetric assays (Razali et al., 2008). Avenanthramides decrease the functionality of breast cancer cells in a time- and concentration-dependent manner. A similar study reported that avenanthramides isolated from oats showed anti-proliferative action against cancerous human colon cell lines. Several systematic reviews of case-controlled studies suggest that a high fibre content can improve gut conditions by diluting carcinogens in the colon and decreasing transit time, which might contribute to this form of fortification. Of late, a large population-based study showed that, after further fine-tuning for cereal fibre, the intake of whole grains decreased the risk of colon cancer by 25% (Pašić et al., 2008). The anti-cancer properties of low molecular weight beta-glucan have been investigated against Me45 (melanoma) cell lines, A431 (human epidermal carcinoma) cells, normal HaCaT (human epidermal keratinocyte) cells and P388/D1 murine macrophages (Reddy et al., 2000). Low molecular weight beta-glucan from oats significantly decreased cancer cell viability with increasing concentration (Choromanska et al., 2015).

CONCLUSIONS

The outcome of the present study reveals fermented and non-fermented oats to be an accessible source of natural antioxidants with considerable health benefits. Oats may serve as an excellent lead for the development of an anti-cancer drug against colon cancer. These results motivate future animal studies on the delivery of fermented oats for cancer therapy.
Adapting and implementing training, guidelines and treatment cards to improve primary care-based hypertension and diabetes management in a fragile context: results of a feasibility study in Sierra Leone

Background: Sierra Leone, a fragile country, is facing an increasingly significant burden of non-communicable diseases (NCDs). Facilitated by an international partnership, a project was developed to adapt and pilot desktop guidelines and other clinical support tools to strengthen primary care-based hypertension and diabetes diagnosis and management in Bombali district, Sierra Leone, between 2018 and 2019. This study assesses the feasibility of the project through analysis of the processes of intervention adaptation and development, delivery of training and implementation of a care improvement package, and the preliminary outcomes of the intervention.

Methods: A mixed-method approach was used for the assessment, including 51 semi-structured interviews, review of routine treatment cards (retrieved for newly registered hypertensive and diabetic patients from June 2018 to March 2019 and followed up for three months) and mentoring data, and observation of training. Thematic analysis was used for qualitative data, and descriptive trend analysis and t-tests were used for quantitative data, wherever appropriate.

Results: A Technical Working Group, established at district and national level, helped to adapt and develop the context-specific desktop guidelines for clinical management and lifestyle interventions and the associated training curriculum and modules for community health officers (CHOs). Following a four-day training of CHOs, focusing on communication skills and the diagnosis and management of hypertension and diabetes, and thanks to a CHO-based mentorship strategy, there was observed improvement in NCD knowledge and in care processes regarding diagnosis, treatment, lifestyle education and follow-up. The intervention significantly improved the average diastolic blood pressure of hypertensive patients (n = 50) three months into treatment (98 mmHg at baseline vs. 86 mmHg in Month 3, P = 0.001). However, health system barriers typical of fragile settings, such as the cost of transport and medication for patients and the lack of supply of medications and treatment equipment in facilities, hindered the optimal delivery of care for hypertensive and diabetic patients.

Conclusion: Our study suggests the potential feasibility of this approach to strengthening primary care delivery for NCDs in fragile contexts. However, the approach needs to be built into routine supervision and pre-service training to be sustained. Key barriers in the health system and at community level also need to be addressed.

Keywords: Non-communicable diseases, Primary care strengthening, Sierra Leone, Feasibility assessment, Fragile setting

Background

Non-communicable diseases (NCDs), including heart disease, stroke, cancer, diabetes and chronic lung disease, contribute to almost 70% of all deaths worldwide [1]. More than 75% of all NCD deaths occur in low- and middle-income countries (LMICs). Each year, 15 million people die prematurely from an NCD, and 85% of these deaths occur in LMICs.
The World Health Organisation (WHO) has proposed a framework for integrating NCD prevention into primary health care through a Package of Essential Noncommunicable (PEN) Disease Interventions for Primary Health Care in Low-resource Settings, a set of cost-effective priority interventions for resource-poor settings [2]. As the first point of contact with health services, primary-health-care (PHC) facilities are recognised as the most appropriate places for patient screening and early disease detection, continuous care provision for uncomplicated patients, and referral of patients to specialists. Prevention and high-quality patient management are essential components in the control of NCDs such as hypertension and diabetes, and there is a wealth of evidence on effective and cost-effective interventions for preventing and managing NCDs [3]. However, finding ways to implement these interventions and sustainably incorporate them into practice remains a challenge [3], especially in fragile settings. In sub-Saharan Africa, task-shifting, to overcome the shortage of trained physicians and other issues relating to access to primary care, is an increasingly widespread delivery approach for PHC interventions for hypertension, diabetes and other NCDs [4-7], along with other simple interventions [8,9]. Sierra Leone, with a population of over seven million, has almost the lowest life expectancy at birth (52 years for men and 54 years for women) in the world [10]. The health system in Sierra Leone is also one of the most fragile in the world, partly due to a civil war between 1991 and 2002 that destroyed infrastructure and left thousands of people, including health workers, dead or displaced as refugees in neighboring countries. Health systems in Sierra Leone were further burdened by the Ebola virus disease (EVD) outbreak in 2014 [11]. Sierra Leone is currently in a health system reconstruction phase. While Sierra Leone is facing a high communicable disease burden, NCDs and their associated conditions represent an increasingly significant burden. WHO estimated that the percentage of deaths attributable to NCDs in Sierra Leone was 18% in 2008, increasing to 26% by 2012, with cardiovascular diseases accounting for 9% [12]. The WHO further estimated that around 30% of adult men and of adult women, respectively, had raised blood pressure, while 4.8% of adults had raised blood glucose in 2014 [12]. Other NCD risk factors are also common: about 33% of men and 6.2% of women over 15 years smoked every day, and nearly 10% and 30% of adults were obese and overweight, respectively [12]. Despite the increasing NCD burden, a scoping review in 2017 showed that the country's capacity to address and respond to NCDs remained limited [13]. It highlighted that no specific programme or action plan was operational for the prevention and control of the major NCDs and risk factors. A more recent review of health system readiness for NCDs identified that NCD control receives very limited resources, with no NCD budget line, although both national and district stakeholders are increasingly aware of the importance of NCD control [14]. NCDs are still mainly being addressed at the tertiary care level, and patients often present at this level with complications of their uncontrolled disease. This highlights the need for context-specific health education about NCDs and their associated complications, and the need to strengthen the first point of contact within the health system: the PHC level.
It is therefore important to explore how to improve primary care-based risk reduction and case management for the prevention and control of NCDs. The WHO PEN has been introduced but still needs to be operationalised in Sierra Leone [2]. A systematic review of community-based interventions for the prevention of CVDs in LMICs suggested that training health care providers, implementing treatment guidelines and providing health education with a focus on diet and salt were key to the success of CVD prevention programmes [15]. With the NCD policy and strategic plan currently being reviewed, this study was timely, as it could inform the review and implementation process in Sierra Leone.

An earlier exploratory study highlighted a number of challenges of implementing NCD control in Sierra Leone, including financial barriers for users, lack of access to quality-assured drugs and high recourse to private and informal care seeking. However, it also identified potential leverage points for strengthening the system within existing (low) resourcing, such as improved clinical guides and tools, combined with more effective engagement with communities, alongside regulatory and fiscal (tax revenue) measures [14]. Informed by this and other studies, an international partnership was developed to adapt and pilot guidelines and other clinical support tools to strengthen primary care-based hypertension and diabetes diagnosis and management in Bombali district, Sierra Leone. The aim was to develop learning to enable more effective primary care-based NCD management in Sierra Leone and similar contexts. This study describes this intervention and assesses its feasibility.

Study setting
Bombali District is one of the fourteen districts of Sierra Leone, located in the Northern Province, which borders the Republic of Guinea to the north. It is the second largest district (7,985 km²) in Sierra Leone and had a population of 606,183 in 2015. At the primary care level, the district has 19 Community Health Centres (CHCs), 36 Maternal and Child Health Posts (MCHPs) and 51 Community Health Posts (CHPs), as well as two government hospitals and two private clinics. Local health services are managed by a District Health Management Team (DHMT). Each CHC is managed by a community health officer (CHO) and staffed with State Enrolled Community Health Nurses (SECHNs) and Community Health Assistants (CHAs). The CHPs and MCHPs are managed by SECHNs and MCH Aides respectively.

The partnership
The intervention was developed through a partnership of local and international organisations. The Royal College of General Practitioners (RCGP) came to Sierra Leone in 2017 to agree with the Ministry of Health and Sanitation (MoHS) on developing an intervention based around training CHOs on NCDs in Bombali District. This would be delivered and facilitated by three volunteer General Practitioners (GPs) from the UK working through Voluntary Service Overseas (VSO) as the local implementing partner. The package of tools was adapted from an earlier version developed by the Communicable Diseases/Health Services Delivery Research Consortium, University of Leeds, with support from the National Institute for Health Research (NIHR)-funded Research Unit for Health in Fragility (RUHF) project, led by Queen Margaret University, Edinburgh, and working in partnership with the College of Medicine and Allied Health Sciences (COMAHS) at the University of Sierra Leone.
Overall leadership came from the MoHS, while support for adaptation and pilot implementation came from the RCGP and VSO, working with the DHMT in Bombali. The RUHF project team led the assessment of the intervention. The project went through three phases between March 2018 and June 2019: (1) intervention adaptation and development, (2) delivery of training and implementation of the care improvement package, and (3) collecting data to assess feasibility (Table 1).

Data collection
The feasibility assessment covered the processes of (1) intervention adaptation and development, (2) delivery of training, (3) implementation of the care package, and (4) the preliminary outcomes of the intervention. The preliminary outcomes included improvement in systolic blood pressure (SBP) and diastolic blood pressure (DBP) after three months' follow-up compared to the baseline SBP and DBP. The baseline BP level is the BP recorded on the treatment card at the first consultation. A mixed-method approach, drawing mainly on qualitative methods, was used. Table 2 lists the methods, key questions or indicators, and time points of data collection in relation to the specific assessment questions.

Semi-structured interviews
Semi-structured interviews were conducted by researchers before and after the intervention. The interviews sought to collect information on the context of the intervention, the processes of intervention adaptation, and respondents' experiences, perspectives and opinions of the intervention (training and case management). In the first round of interviews, we purposively selected four CHCs, two in urban settings and two in more rural areas. Purposive sampling techniques were used to identify interviewees. In each CHC, we interviewed one CHO and two SECHNs. We selected two patients, one with hypertension and one with diabetes, from each CHC. In addition, we interviewed one DHMT member who was involved with the intervention and two doctors supporting it. In total, we conducted 23 interviews before and 28 after the intervention. We stopped the interviews when the data reached saturation for addressing the research questions. The interview guide was developed specifically for this study and has not previously been published elsewhere (Additional file 1). Interviews were conducted by experienced qualitative researchers. Interviews with providers were conducted in English, while an interpreter, a recent public health graduate of the University of Makeni in Bombali district, assisted with interviews in local languages. The interviews were recorded with the participants' consent. Each interview lasted between 30 and 45 minutes. Written consent was obtained from all interviewees.

Review of project reports
Training registers and training report forms were analysed to understand who attended or did not attend the training. Records of mentoring and supervisory visits (by doctors together with the selected CHOs) were extracted and analysed to understand project progress and challenges during the initial and final periods of implementation.

Observations
Participant observations were made during the intervention adaptation and development processes. All training events were observed by the doctors and researchers (when available for field visits), using a structured observation checklist. Observation was also conducted during the mentoring and supervisory visits by the doctors and researchers, using a structured observation checklist.
The observational records were extracted and analysed for the initial and final periods of implementation.

Review of routine data (treatment cards)
The treatment registers and treatment cards were completed by CHOs on a routine basis according to individual patients' conditions. Review of the treatment registers and cards helped us to understand diabetic and hypertensive control outcomes and case management information, such as the prescription and use of related drugs and lifestyle interventions. In this paper we included the hypertensive patients recruited after the first training was conducted in May 2018 until 30 March 2019, and we defined the follow-up period for all included patients as three months.

Data analysis
Interview data were transcribed, and framework analysis was used for the qualitative data, starting from a coding frame based on the interview guides and study protocol, with new codes added inductively after reading through the transcripts. The themes initially included the process of adaptation and development of the intervention, delivery of the training and care improvement package, and factors influencing them (enablers of and barriers to the intervention). Software (NVivo) was used to assist the analysis. Quantitative data were entered into Excel and then exported to SPSS 20.0. Descriptive analysis of the quantitative data was carried out using means, with t-tests applied where appropriate.

Results
This section reports the processes of intervention adaptation and training, findings on care delivery, and the health systems factors influencing the intervention and its feasibility.

Intervention adaptation and development
Technical working groups (TWGs) were established at local and national level in April 2018. The local TWG worked on the initial adaptation of the desktop guidelines for clinical management and lifestyle interventions, and the associated training curriculum and modules, which had been pilot tested and shown to be acceptable and feasible in rural settings in China [16], Pakistan [17-19], Swaziland and other LMICs. The local TWG included two RCGP doctors and five CHOs who scored highly in the first module and were most keen to teach on NCDs; these CHOs became the trainers in the subsequent training activities and mentors during the pilot implementation of the intervention. The local TWG reviewed the materials and adapted each session to be context-specific and easy for CHOs to use during training and in the consultation room. Adaptation also took into consideration recent evidence (e.g., the WHO PEN), local terminology, and the low availability of free, good-quality, affordable and accessible diagnostics and medications. The local TWG kept up regular interaction and communication with the national TWG, which included the NCD Directorate of the MoHS and tertiary hospital NCD experts, so that national needs and priorities fed into the adaptation process. For instance, the adaptation was initially targeted only at CHOs; at the request of the NCD Directorate, the materials were also adapted for lower-level health staff such as SECHNs, CHAs and MCH Aides. The national TWG experts reviewed the materials, and a roundtable meeting was called to discuss and agree on the materials before approval by the MoHS. The adaptation process lasted eight months, between April and December 2018.
The national TWG approved a set of materials including (1) guidelines (a desktop guide for use in clinics, giving staff accessible and practical advice about the diagnosis, management and follow-up care actions that need to be taken) that, if feasible, would be rolled out nationwide; (2) training materials for CHOs and other Primary Health Unit (PHU) staff; (3) a treatment card (to improve management and follow-up of NCDs); and (4) eye test and BMI charts and a pictorial lifestyle-change education chart. Our interviews with participants suggested that the TWG allowed for a great deal of learning within the group and significantly developed members' knowledge of and confidence in NCDs. The process guided major adaptations to make the materials more applicable to CHOs, their background knowledge and the low-resource setting, and made the materials locally owned. The local TWG members met after each module had been taught for the first time, and they then trained the other groups as facilitators for that module. It was challenging to re-orient members from their accustomed lectures to a case-study and role-play active learning approach. However, the training process helped identify problems and refine the desk guide and training materials.

Delivery of training
In total, 35 CHOs (community and hospital based) in the district had the opportunity to attend four modules on NCDs, each on a separate day (four days in total), spread over several months. The three full-day modules focused on communication skills and the diagnosis and management of hypertension and diabetes, and the half-day module covered epilepsy and depression, with a short session on cascading knowledge to other PHU staff (midwives, SECHNs and MCH Aides). For the full-day modules, the CHOs were split into two groups (approximately 15 per group). The first training in each full-day module served as a pilot course that helped to refine the desk guide and training materials prior to the training of the remaining groups. The hospital CHOs, who are responsible for primary care to a local population, were also trained to ensure consistent and appropriate NCD management at all levels and stages of healthcare. The final half-day module was delivered to one large group. In addition, approximately 300 PHU staff (midwives, SECHNs and MCH Aides) attended a one-day training course facilitated by the trained CHOs with support from the RCGP/VSO doctors. This included sections on communication skills, lifestyle planning, and the basics of diabetes and hypertension. It aimed to increase detection, referral and lifestyle behaviour change (but not the initiation of medication or the opening of a treatment card, which was done by CHOs).

The training improved knowledge of hypertension and diabetes among CHOs and other PHU staff, as suggested by the pre- and post-tests conducted by the RCGP/VSO doctors (Table 3). The training was seen as helpful by all health workers interviewed. They valued the knowledge they were taught and reported that they had since put it into use. 'It [the training] went well because at times when we are choked up with job, when we see the patient after medication, we now give them lifestyle advice. … We cannot do everything all by ourselves, we share the responsibilities with other health workers who also had the training' (CHO Manager 2). Almost all health workers complained that the training was too short and called for more training on NCD prevention.
It was acknowledged that training all health workers was a vast undertaking, given the amount of time taken to explain all points, the low starting levels of knowledge on the recognition and management of hypertension and diabetes, and the very limited NCD teaching during CHO training (which covers a broad range of medicine and surgery over a short three-year training period). Providing messages or training via TV was suggested as an alternative for refresher training, but it was acknowledged that this might not be easy, given unreliable electricity supplies and the lack of television equipment in health facilities. The findings also point to the importance of improved pre-service training in NCDs for frontline clinical staff such as CHOs.

Processes and preliminary outcomes of implementing the care improvement package

NCD diagnosis and symptoms
Consistent with the improved scores after training, the CHOs reported that training improved their confidence and skills in diagnosing NCD patients, and they demonstrated some knowledge of hypertension and diabetes diagnosis. For instance, CHOs and other PHU staff (SECHNs and MCH Aides) were aware of the importance of taking two blood pressure measurements for diagnosis. However, their capacity for case identification was poor, given the low numbers of cases identified (based on our observations and interviews). Our interviews suggested that the number of patients identified with hypertension and diabetes averaged about two per CHC (range 0 to 12), probably mainly due to low attendance rates at clinics. Health workers stated that NCD patients were being more widely diagnosed than before the intervention, marking an improvement in practice and awareness. According to our interviews, the number of hypertension patients diagnosed varied across CHCs: 9 (with one transferred to Freetown), 11, 12 and 10, while one CHC reported over 60 patients since the intervention started (up to the feasibility assessment interviews in June 2019). The number of diabetes patients diagnosed also varied across CHCs but was lower than the number of hypertensive patients diagnosed; the numbers CHOs stated were 0, 2, 0, 0 and 1. CHOs reported a lack of working equipment to diagnose high blood glucose levels. Analysis of the treatment cards suggested similar results: from June 2018 to March 2019, 50 (94%) of the 53 treatment cards retrieved recorded hypertension (19 male and 31 female patients, average age 62), while only 3 (6%; all male, average age 59) recorded diabetes.

Desktop guides and treatment cards
Desk guides were observed in use in two of the four CHCs visited, and CHOs stated that they were useful. As one CHO commented: 'Before now, we looked in books, it took more time, but now we only need to look at the technical guidelines and flow charts.' Treatment cards and patient registers were reported as being widely used in the four CHCs visited. One CHO suggested that the treatment cards could be revised to be more concise. Another, on the other hand, preferred to use a notebook to list patient details rather than the treatment cards, because he could record more information there. Although treatment cards were being used, our interviews and observations suggested that most CHCs completed fewer treatment cards than they had NCD patients. Some CHOs need more encouragement to complete the forms. One CHO stated that they only used treatment cards when they prescribed drugs to the patient, which was not their intended use (they should be used for all cases).
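To make the treatment-card review concrete, the sketch below shows one way the card data described above could be structured and summarised. It is a minimal illustration only: the field names, the example records and the use of a scipy paired t-test are our own assumptions, not the project's actual database or analysis scripts (the study used Excel and SPSS).

```python
# Minimal sketch (assumed structure): summarising treatment-card data.
# Field names and records are hypothetical illustrations.
from dataclasses import dataclass, field
from scipy import stats  # requires scipy

@dataclass
class TreatmentCard:
    patient_id: str
    diagnosis: str                  # "hypertension" or "diabetes"
    baseline_sbp: int               # mmHg at first consultation
    baseline_dbp: int
    followup_dbp: dict = field(default_factory=dict)  # month -> DBP; missed visits absent

def followup_rate(cards, month):
    """Proportion of patients with a recorded visit in a given month."""
    seen = sum(1 for c in cards if month in c.followup_dbp)
    return seen / len(cards)

def dbp_change_test(cards, month):
    """Paired t-test of baseline vs. follow-up DBP for patients seen that month."""
    pairs = [(c.baseline_dbp, c.followup_dbp[month])
             for c in cards if month in c.followup_dbp]
    base, follow = zip(*pairs)
    return stats.ttest_rel(base, follow)

# Toy example with two hypothetical patients
cards = [
    TreatmentCard("P001", "hypertension", 172, 98, {1: 95, 3: 86}),
    TreatmentCard("P002", "hypertension", 160, 100, {1: 96, 2: 90, 3: 88}),
]
print(followup_rate(cards, 3))    # share of patients seen in Month 3
print(dbp_change_test(cards, 3))  # t statistic and p value
```

Note that a paired comparison of this kind can only use patients who were actually seen at follow-up, which is why the attrition reported below matters for interpreting the blood pressure results.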
Compliance with medication
Analysis of the treatment cards suggested that the three most often prescribed antihypertensive drugs were Amlodipine, Nifedipine and hydrochlorothiazide (HCTZ) (Fig. 1). Although Amlodipine remained the most often prescribed, its prescription rate declined over follow-up, from 58% at the first consultation to 44% in Month 1, 20% in Month 2 and 22% in Month 3. Some of the patients interviewed (5 of 7) stated that they had been taking their medication to manage their condition: 'When I give money they give me the medicine, only the hypertension medicine they're giving me. Like yesterday I spent Le 50,000 and they give me the hypertension medicine that I should take throughout the month [until the next appointment].' (Patient with hypertension). Health workers reported that, due to financial constraints, many patients could not afford to comply with instructions to take medication regularly to treat their NCD conditions (e.g., hypertension).

Lifestyle advice
All health workers interviewed showed some awareness of the importance of lifestyle advice for those suffering from hypertension, and knowledge of how to treat patients with NCDs, including how to deliver lifestyle advice, indicating that this aspect of the intervention had been effective. Many CHOs advised patients to reduce their unhealthy lifestyle habits bit by bit, encouraging them not to stop suddenly but to improve their lifestyle gradually. As shown by the treatment card analysis, the most common lifestyle messages for patients were exercise and a low-salt diet. CHOs reported that some patients said they had made lifestyle changes, but contrasting signs were sometimes noted (for example, patients claiming to have given up smoking but still smelling of smoke).

Follow-up management
Patients reported that once they had been diagnosed, they did attend the CHC for follow-up. However, health workers and patients reported that lack of follow-up was a common problem, connected to distance to health facilities and lack of mobility. It was common for patients not to return for further NCD treatment because of the cost of transport or medication. Some patients did not attend their follow-up appointment because they started to feel better once they began taking medication. Several patients went to hospital to treat acute symptoms that regular follow-up at their health centres could have prevented. Many CHOs stated that the only sure way to follow up with patients was to visit their homes. However, this was not practical for many health workers, though community health workers (CHWs) were discussed as a potential resource who could regularly visit patients' homes, given their proximity (often living within the communities). The treatment card analysis showed that, of the 50 hypertensive patients recruited for evaluation, the proportion attending their follow-up appointment fell from 62% in Month 1 to 40% in Month 2 and 38% in Month 3 (Fig. 2).

Referral
CHOs stated that they thought referral processes were not always working. Often patients could not afford to travel to the hospitals to which they were referred. CHOs reported that they had not referred many patients to hospitals.

Mentoring and feedback
Most CHCs received a mentoring visit conducted by local TWG members. The intention was to develop a peer support process that would continue in the longer term.
During the mentoring visits to the CHCs, mentors acted as colleagues supporting other CHOs to improve the quality of care: assessing treatment activity, identifying problems and solutions, building confidence and skills, and encouraging more screening. All 22 CHCs had been visited by the end of April 2019. Mentors completed and submitted mentoring reports based on a standard mentoring form. CHOs agreed that the mentoring they received was very helpful. There was also a WhatsApp group for sharing information and advice about individual clinical cases between CHOs and the VSO doctors. From the mentoring conducted, the TWG noticed misdiagnosis and mistreatment of NCDs, and feedback was given during the visits. There are widespread mistaken beliefs about hypertension among both health staff and the community: it is seen as an acute, dangerous, symptomatic disease, which affects the type of treatment expected for it. Overtreatment continued in the hospital despite the training provided. The mentoring visits also found that some patients in PHUs were given available but unsuitable medicines, such as methyldopa, which is intended for pregnant women through the free health care initiative.

Preliminary outcomes
The treatment card analysis showed a positive outcome after initiation of treatment for hypertension (Fig. 3). The SBP of the hypertensive patients (n = 50) decreased steadily from baseline (172 mmHg) to Month 1 (159 mmHg) and Month 2 (153 mmHg), before increasing again in Month 3 (157 mmHg). The average SBP decreased by 15 mmHg from baseline to Month 3, which was not statistically significant (t = 1.701, p = 0.106). The DBP of the hypertensive patients remained unchanged from baseline (98 mmHg) to Month 1 (98 mmHg), then decreased markedly in Month 2 (88 mmHg) and slightly further in Month 3 (86 mmHg). The average DBP achieved a significant reduction of 12 mmHg from baseline to Month 3 (t = 4.069, p = 0.001).

Health systems factors influencing the pilot implementation
Our interviews and observations identified a number of health systems factors that affected the pilot implementation of the care improvement package.

Distance to health facility
Some patients paid for transport to attend the health facilities and others used motorbikes, but several patients and CHOs stated that it was difficult for patients to reach the health facility, and this affected follow-up and referral.

Equipment not working or lacking
There were numerous accounts of equipment in the health facilities not functioning or not being maintained. A common example was blood pressure monitors not working or running out of batteries, while no one at the CHC had the money or time to replace them. Most CHCs did not have working glucometers, and all CHOs stated that they struggled to obtain glucose testing strips. Most CHCs had BP machines but lacked batteries. Almost all the health workers interviewed called for more and better equipment (e.g., BP machines, glucometers, strips for measuring blood glucose levels) as well as training in how to use it.

Lack of drug supplies
There was an overwhelming consensus that health facilities did not have enough drugs and that those they did have were not affordable. Interviews showed that the drug supply (apart from the free MNCH drugs) operated through ad hoc and informal practices, including purchasing drugs and selling them on to patients.
'Some of these drugs are not in the facility unless we buy them because they are not supplying them to us, so at least if they start giving them to us … we are really in constraint for the drugs in the facility unless we have to go to Makeni for them.' (MCH Aide).

Financial barriers
Medication access and affordability was, as expected, a major difficulty. Only a very small minority of patients are likely to be willing or able to pay for two or more drugs per month. Many patients struggled to pay for the medication they needed after they were diagnosed with hypertension, which impeded their getting adequate treatment for their condition. Many CHOs, SECHNs and MCH Aides stated that there was a desperate need to provide affordable drugs to patients.

CHO-patient relationship
Both CHOs and patients reported trust in the CHO-patient relationship. Patients reported that speaking to the CHO made them feel better and that they felt comfortable speaking to the CHOs and other health workers.

Traditional healers
CHOs stated that traditional healers were important influencers in communities. Two CHOs stated that they had worked to develop friendly relationships with traditional healers, helping them understand the importance of sending sick people to the CHC for medical treatment. This was being done with varying degrees of success. Nevertheless, health workers gave several examples of traditional healers not managing NCD conditions well in the people they saw, allowing them to get worse and eventually resulting in patients going to CHCs needing urgent medical attention with acute conditions, for example very high blood pressure. Often, by the time they arrive at the CHCs, their conditions (e.g., hypertension) are serious. This adds to the challenge of case management for the CHOs.

Discussion
This is one of the first studies to describe the adaptation and implementation of interventions to improve primary care-based hypertension and diabetes management in a fragile setting and to assess their feasibility. Our study demonstrates that a partnership between local and national health authorities, international NGOs and researchers can work to facilitate NCD service delivery improvement at primary care level in Sierra Leone. The pilot intervention improved SBP and DBP within three months. It worked as intended, to some extent, in terms of the process of implementing the care improvement package, though numerous challenges in the health system hindered the effective delivery of the intervention. A number of key issues arise from this study that will help to generate lessons for much-needed future NCD interventions in fragile settings.

Technical working group process
In this partnership, the TWG played a crucial role in the planning, adaptation, training, implementation and mentoring of the intervention throughout the process. Through the TWG process, the project effectively engaged local and national stakeholders to improve the content and the chance of local 'buy-in' to the technical guidelines. As previous studies have suggested, TWG activities build local research capacity, foster in-country ownership and promote research uptake [3,20]. In LMICs, and especially in fragile settings, service delivery in primary care settings for patients with NCDs is generally 'unstructured' and poorly monitored.
One study implementing a nurse-led NCD service in a resource-poor area of South Africa suggested that the use of protocols and treatment strategies that were simple and responsive to the local situation enabled the majority of patients to receive convenient and appropriate management of their NCD at their local primary care facility [21]. The TWG developed a package that includes simple, systematic, user-friendly and context-specific case management guides, together with supporting tools for training and service performance monitoring at primary care level. This package has the potential to promote the quality and consistency of NCD care in primary care. The materials could be further adapted and incorporated into the national plan and the national curriculum for the pre-service training of CHOs and other cadres, such as nurses. The TWG members acted as trainers and mentors, which proved an effective peer support strategy during the implementation phase of the intervention.

Process of implementing the care improvement package
A systematic review and evidence synthesis of primary care approaches to chronic disease in sub-Saharan Africa suggested that the use of standardised protocols for diagnosis, treatment, monitoring and referral to specialist care should become one of the priorities for disease management at primary healthcare clinics [22]. Our study highlighted mixed results of implementing the NCD care improvement package, in terms of diagnosis, treatment, lifestyle education and follow-up, and the use of desktop guides, treatment cards and other supporting materials, at the primary care level in a fragile context. On the one hand, these improved processes may have helped to achieve the positive improvement in SBP and DBP over three months. On the other hand, despite the reported improvement in NCD knowledge and care processes, case identification, use of the standard guidelines and tools (e.g., treatment cards), treatment capacity and referral processes remained sub-optimal. For instance, among the 53 eligible treatment cards for hypertensive and diabetic patients, only three were for diabetic patients recruited during the defined period. This indicates very poor capacity for diabetes case detection in primary care settings (given that the prevalence of raised blood glucose was nearly 5% among adults in 2014 [12]). Our results suggest that implementing the intervention in primary care requires strengthened training and mentoring, both pre-service and in-service, to improve diagnostic and treatment capacity and counselling skills.

Addressing health systems and community barriers to improve the primary care delivery of NCDs
A systematic review of primary care approaches for chronic disease in sub-Saharan Africa suggested the importance of the availability of essential diagnostic tools and medications, in addition to the use of standardised care protocols, for disease management at local primary healthcare clinics [22]. Consistent with our exploratory study [14], this study identifies a number of health systems barriers that are typical of fragile settings [23] and of wider sub-Saharan Africa [24-26], and which hinder the effective delivery of NCD care in primary care: distance, the cost of transport and medications, lack of medication supply, poor treatment equipment, and health-seeking behaviours that often delay care-seeking or prioritise traditional medicine.
These barriers may have jointly contributed to the low primary care attendance of NCD patients and hence the poor detection of hypertensive and, in particular, diabetic cases, which require more sophisticated diagnosis and treatment. CHCs need to be routinely seeing a large proportion of the adult population over the course of a year to detect most patients with asymptomatic conditions. In particular, the lack of equipment, or of working equipment, hinders not only diagnosis but also timely treatment and consistent follow-up care. Shortage of drug supply and financial barriers, as indicated by the decline in prescriptions of Amlodipine, the most commonly prescribed drug, prevent timely access to NCD care and undermine the quality of case management. The barriers we identify occupy the often turbulent space between communities and the health system, which has been shown to be problematic in fragile health settings [27]. As such, it is clear from our findings that further work needs to be done to strengthen weak public health systems in fragile health settings to enable them to interact more effectively with the needs of vulnerable communities, promoting engagement with all providers and communities, and building awareness and trust. CHOs will need ongoing support, mentoring and essential NCD supplies, all of which require small but critical funding. This can be achieved, for example, by ensuring that interventions such as ours are properly embedded into a national NCD strategy that is supported to be scalable and sustainable. Similarly, greater communication should be stimulated between the health system and communities through, for example, social mobilisation or other community engagement activities around NCDs that are connected to the wider public health system. Such activities have recently been advocated [28,29] and found to be valuable in tackling NCDs in other fragile health settings [30,31].

Limitations
This article reports on a pilot intervention project conducted in one district of Sierra Leone. The focus was on assessing feasibility using mixed methods, with an emphasis on qualitative data, which has deeper exploratory potential. The qualitative data may be biased, as respondents may have chosen to give positive responses in the interviews, although we disseminated and validated the key messages in a district-wide CHO meeting. Given the data system limitations in Sierra Leone, the quantitative sources also have weaknesses, which are balanced through our use of mixed methods. The mix of data sources, including observation and record checking, supports a more robust overall assessment. The sample included for evaluation was too small (especially with only three diabetic patients recruited), and the follow-up period may have been too short to achieve the optimal BP change. In addition, given the focus of the research on feasibility, we did not assess the incremental costs of the intervention relating to training, implementation and treatment (direct medical and non-medical costs such as transport). Long-term follow-up of the intervention with a more rigorous evaluation design, including incremental cost-effectiveness analysis, would be an appropriate next step to better understand the effects and sustainability of the intervention.

Conclusion
Our study demonstrates how an international partnership can work to adapt and support a primary care strengthening intervention for NCDs in a fragile setting. Our study suggests the potential feasibility of this intervention.
However, careful attention needs to be paid to sustainability and to integrating the approach into pre-service training, as well as to addressing the significant remaining barriers in the health system and at community level.

Additional file 1. Interview guides for the RUHF feasibility study of strengthening NCD services in Sierra Leone.

Abbreviations
CHAs: Community Health Assistants; CHCs: Community Health Centres; CHPs: Community Health Posts; CHO: Community Health Officer; DBP: diastolic blood pressure; DHMT: District Health Management Team; HCTZ: hydrochlorothiazide; MCHPs: Maternal and Child Health Posts; NCDs: non-communicable diseases; PHU: Primary Health Unit; SBP: systolic blood pressure; SECHNs: State Enrolled Community Health Nurses; WHO: World Health Organisation
APP Overexpression Causes Aβ-Independent Neuronal Death through Intrinsic Apoptosis Pathway

Abstract
Accumulation of amyloid-β (Aβ) peptide in the brain is a central hallmark of Alzheimer's disease (AD) and is thought to be the cause of the observed neurodegeneration. Many animal models have been generated that overproduce Aβ yet do not exhibit clear neuronal loss, questioning this Aβ hypothesis. We previously developed an in vivo mouse model that expresses a humanized amyloid precursor protein (hAPP) in olfactory sensory neurons (OSNs), showing robust apoptosis and olfactory dysfunction by 3 weeks of age, which is consistent with the early OSN loss and smell deficits observed in AD patients. Here we show, by deleting the β-site APP cleaving enzyme 1 (BACE1) in two distinct transgenic mouse models, that hAPP-induced apoptosis of OSNs is Aβ independent and remains cell autonomous. In addition, we reveal that the intrinsic apoptosis pathway is responsible for hAPP-induced OSN death, as marked by mitochondrial damage and caspase-9 activation. Given that hAPP expression causes OSN apoptosis despite the absence of BACE1, we propose that Aβ is not the sole cause of hAPP-induced neurodegeneration and that the early loss of olfactory function in AD may be based on a cell-autonomous mechanism, which could mark an early phase of AD, prior to Aβ accumulation. Thus, the olfactory system could serve as an important new platform to study the development of AD, providing unique insight for both early diagnosis and intervention.

Introduction
More than 5 million Americans have Alzheimer's disease (AD), yet there are no effective treatments (Alzheimer's Association, 2013). The amyloid hypothesis posits that the widespread neurodegeneration found in AD patients is caused by cerebral accumulation of the cytotoxic amyloid-β (Aβ) peptide, which is derived from the amyloid precursor protein (APP) and forms plaques (Hardy and Selkoe, 2002). While this hypothesis has gained strong support from studies in both human patients and animal models (Hardy and Selkoe, 2002), recent findings have identified Aβ-independent mechanisms that may also contribute to AD-related neurodegeneration (Pimplikar et al., 2010; Winton et al., 2011). Resolving this issue has been difficult, as most transgenic models that overproduce Aβ to generate cerebral deposits poorly mimic the neuronal loss found in AD patients (Wirths and Bayer, 2010). Thus, the question remains: is Aβ the primary source of neurodegeneration in AD?

Olfactory sensory neuron (OSN) loss and smell deficits occur early in AD (Talamo et al., 1989; Bacon et al., 1998). We previously established an olfactory model of AD-related neurodegeneration in which we expressed a humanized APP gene (hAPP), containing both the Swedish and Indiana familial AD mutations, under the control of olfactory-specific promoters that target expression in either mature (OMP-hAPP line) or immature OSNs (Gγ8-hAPP line; Cheng et al., 2011). Importantly, we observed large-scale apoptosis of OSNs in both lines by 3 weeks of age with no detectable extracellular Aβ deposits (Cheng et al., 2011, 2013; Saar et al., 2015). This finding raised questions about the participation of Aβ in APP-induced neurodegeneration, while presenting us with a unique model to explore the underlying mechanism, since OSN loss is robust and can be observed relatively early.
Thus, we sought to determine whether the Aβ peptide is required for APP-induced OSN cell death and which apoptosis pathway mediates the induced neurodegeneration.

ELISA of Aβ
Colorimetric sandwich ELISA kits with antibodies against human Aβ42 (Invitrogen) were used. Acutely dissected olfactory epithelium tissue was homogenized and centrifuged, and the supernatant was loaded onto the ELISA plate. The assay was performed according to the manufacturer's manual, with all standards and samples measured in duplicate. N represents the number of animals.

Cell counting
Caspase-3- and caspase-9-positive cells in the septal epithelium spanning the dorsal to ventral zones were counted manually using signal intensity and size thresholds. Images of four to six sample sections were taken from each animal, representing the anterior, middle and posterior parts of the turbinates. Cell counts were expressed as the number of cells per millimeter of septal epithelium. N represents the number of animals.

Electron microscopy
A previously established protocol (Tao-Cheng et al., 2006) was followed. Mice were perfused transcardially with 2% PFA and 2% glutaraldehyde in 0.1 N cacodylate buffer. Sections of OE were prepared by the Electron Microscopy Facility of the National Institute of Neurological Disorders and Stroke and examined with an electron microscope (1200EX II, JEOL).

Mitochondrial membrane potential assay
Mitochondrial membrane potential was monitored by staining with tetramethylrhodamine ethyl ester (TMRE; Life Technologies), a cell-permeable, positively charged dye that accumulates in active mitochondria in a potential-dependent manner. TMRE was prepared as a 1 mM stock in DMSO and diluted to a 100 nM working solution in Ringer's solution (Hospira). Fresh olfactory epithelium tissue was dissected from 4-week-old mice and sectioned at 200 µm thickness in ice-cold Ringer's solution on a Leica vibratome. Acute OE sections were incubated in the TMRE working solution at 37°C for 30 min, gently rinsed in Ringer's solution, and then immediately imaged under a fluorescence microscope.

Statistical analysis
A Student's t test was performed to test statistical significance, assuming a two-tailed distribution and two-sample unequal variance. All values are reported as the mean ± SD. The p values (*p < 0.05 and **p < 0.001) are also indicated in the corresponding figure legends.

Results
To extend our previous findings showing hAPP-induced OSN loss through olfactory-specific expression, we used a third mouse model in which hAPP is expressed using the Camk2a promoter, which is broadly active in many excitatory neurons (Mayford et al., 1996), including both mature and immature OSNs (Wei et al., 1998). By crossing Camk2a-tTA mice (Mayford et al., 1996) with tetO-hAPP mice (Jankowsky et al., 2005), we generated a mutant line (Camk2a-hAPP) to determine whether OSN cell death occurs earlier than neuronal loss in other brain regions (Fig. 1A,B). A previous study using the same Camk2a-hAPP line reported amyloid deposits in many brain areas by 2 months of age, but no active apoptosis (Jankowsky et al., 2005). Here we show broad colocalization of the hAPP immunohistochemical signal and the endogenous Camk2a signal in OSNs from 3-week-old mutant animals (Fig. 1C), which also exhibit a thinner OE and a clear reduction in mature OSNs compared to controls (Fig. 1C). Importantly, analysis with an antibody against cleaved caspase-3 revealed that Camk2a-hAPP animals had significantly more apoptotic cells in the OE than controls (Figs.
1D, 2C; p = 6 × 10^-6), which visibly colocalized with hAPP expression (Fig. 1E). These results demonstrate a clear increase in OSN cell death in Camk2a-hAPP mice, similar to what was previously observed in OMP-hAPP mice (Cheng et al., 2011).

Figure 1. OSN apoptosis in Camk2a-hAPP mice. A, Strategies to generate mutant lines with olfactory-specific or broad CNS-specific overexpression of hAPP by using the OMP or Camk2a promoter, respectively, and the tTA-tetO system. B, Diagram showing the organization of the OE, with markers for mature and immature OSNs highlighted. C, Left panels: Camk2a (red) and hAPP (green) immunohistochemical signal in the olfactory epithelium from 3-week-old Camk2a-hAPP and tetO-hAPP mice, respectively. Camk2a was broadly expressed in OSNs and mostly colocalized with hAPP in Camk2a-hAPP mice. Right panels: GAP43 (immature OSN marker, red) and OMP (mature OSN marker, green) immunohistochemical signal in the epithelium. Note that Camk2a-hAPP animals had fewer mature OSNs and thinner epithelia than controls. D, E, The 3-week-old Camk2a-hAPP animal had many more cleaved caspase-3-positive cells in the epithelium than the control animal (D), which colocalized with hAPP-expressing neurons (E, arrowheads). Scale bars: C, E, 20 µm; D, 100 µm.

To test whether accumulation of the Aβ peptide was the direct cause of OSN cell death in Camk2a-hAPP mice, we crossed them with BACE1-/- mice (Cai et al., 2001) to generate Camk2a-hAPP/BACE1-/- compound mutant mice. Since the Aβ peptide is generated by the sequential proteolytic cleavage of APP by β- and γ-secretases (O'Brien and Wong, 2011), and BACE1 is the major β-secretase used by neurons (Cai et al., 2001), the Aβ peptide should be nearly absent in the Camk2a-hAPP/BACE1-/- mice. Using an ELISA, we quantified Aβ42 (the amyloidogenic APP fragment) in OE tissue from 3-week-old animals and indeed found extremely low peptide levels in Camk2a-hAPP/BACE1-/- animals, similar to both the tetO-hAPP and tetO-hAPP/BACE1-/- control groups, and in sharp contrast to the high levels found in Camk2a-hAPP/BACE1+/+ mice (Fig. 2D; p < 0.001). Interestingly, despite the loss of the Aβ peptide, we still observed a large number of caspase-3-positive cells in the Camk2a-hAPP/BACE1-/- mouse epithelium (Fig. 2A), similar to those observed in Camk2a-hAPP/BACE1+/+ animals (p = 0.08) and significantly greater than those in controls (Fig. 2C; p = 0.008). Similar results were observed when we crossed the OMP-hAPP line from our previous study into the BACE1-/- background and compared them to controls (Fig. 2C; p = 0.008), indicating that neither BACE1 activity nor increased Aβ levels were necessary for hAPP-induced apoptosis of OSNs. Moreover, the caspase-3 signal clearly colocalized with hAPP expression in all mutant lines (Fig. 2B), further demonstrating the cell-autonomous nature of the cell loss (Cheng et al., 2011). To further confirm the absence of BACE1 activity in the compound mutant lines, we examined the expression levels of the C-terminal fragments of hAPP cleavage by Western blot and found that the β-C-terminal fragment (β-CTF) was absent in the BACE1-/- background, while the expression levels of full-length hAPP and the α-C-terminal fragment (α-CTF) showed little change (Fig. 2E,F). To determine whether hAPP-induced neurodegeneration of OSNs continued into adulthood, we also examined OEs from 2-month-old mice and found that increased levels of caspase-3-positive cells persisted in both Camk2a-hAPP/BACE1+/+ and Camk2a-hAPP/BACE1-/-
mice (Fig. 3A,B). Interestingly, we observed the formation of Aβ deposits in cortical and hippocampal areas of 2-month-old Camk2a-hAPP/BACE1+/+ animals, as previously reported (Jankowsky et al., 2005), but found them absent in Camk2a-hAPP/BACE1-/- animals (Fig. 3C), further demonstrating that Aβ was not present in the BACE-null background and therefore could not be the basis of the increased OSN apoptosis. In addition, despite the widespread expression of hAPP in higher brain regions of the Camk2a-hAPP animals, there was no increase in caspase-3 signal in these areas at either 3 weeks of age (data not shown) or 2 months of age (Fig. 3C; Jankowsky et al., 2005; Wirths and Bayer, 2010), suggesting that the olfactory system exhibits a clear phenotype through hAPP-induced cell death that is measurable much earlier than corresponding changes in cortical regions. This difference in timing both underscores the utility of an olfactory model and may be the basis of the OSN loss and early smell deficits reported in AD patients (Talamo et al., 1989; Bacon et al., 1998).

Since active apoptosis is a unique feature of this model, we next sought to determine which of the apoptosis pathways (Elmore, 2007; Tait and Green, 2010) was responsible for triggering the hAPP-induced OSN loss. We first examined the expression of several initiator caspases in the OE of both OMP-hAPP and Camk2a-hAPP mice and, while we did not detect any cleaved caspase-8 in the OE (data not shown), we found many cells positive for cleaved caspase-9 in both mutant lines (Fig. 4A). We showed that these cleaved caspase-9-positive cells directly colocalized with hAPP-expressing OSNs (Fig. 4B) and that their numbers were significantly higher in both mutant lines compared with controls (Camk2a-hAPP, p = 0.02; OMP-hAPP, p = 0.01; Fig. 4C). Western blot analysis showed similar results, indicating an increase in both cleaved caspase-3 and caspase-9 protein levels in OE tissue from both mutant lines, while full-length caspase-3 and caspase-9 levels showed a small reduction compared with controls, which is consistent with their active conversion to the cleaved form (Fig. 4D). Interestingly, we also revealed a striking increase in BCL-2-associated X (BAX) protein in both mutant lines, which is typically associated with mitochondrial damage and linked to caspase-9 cleavage via the intrinsic apoptosis pathway (Elmore, 2007; Tait and Green, 2010; Fig. 4D). Thus, to examine the state of mitochondria in hAPP-expressing OSNs, we used TMRE, a vital dye that stains mitochondria, applied directly to OE tissue. Since TMRE labeling is dependent upon mitochondrial membrane potential, it can be used as a marker of functional mitochondria. Given that TMRE is sensitive to fixation, we crossed both the OMP-hAPP mutant mice and the tetO-hAPP controls with the OMP-GFP reporter line (Potter et al., 2001), which enabled us to directly identify OMP-hAPP-expressing OSNs in live tissue via the GFP label.

Figure 2 (legend, partial). The mutants had many more OSNs with caspase-3 signal than controls, which colocalized with the hAPP signal. C, Quantification of caspase-3-positive cells showed that the hAPP-expressing lines had significantly more dying cells than the control lines, regardless of BACE genotype (Camk2a-hAPP, 32.8 ± 5.5, n = 7; and OMP-hAPP, 38.5 ± 4.6, n = 5; compared with tetO-hAPP, 11.7 ± 4.3, n = 7; while Camk2a-hAPP/BACE-/-, 44.1 ± 9.2, n = 4, and
OMP-hAPP/BACE-/-, 36.3 ± 9.7, n = 5, compared with tetO-hAPP/BACE-/-, 14.8 ± 0.2, n = 4). D, ELISA on OE tissue shows very low Aβ42 concentrations in hAPP-expressing mutants on a null BACE background, not significantly different from the levels found in the control lines. E, F, Western blots confirming similar levels of full-length hAPP protein expression in the OEs of OMP-hAPP mice and OMP-hAPP/BACE-/- mice (E), as well as Camk2a-hAPP mice and Camk2a-hAPP/BACE-/- mice (F; individual animals in each lane). Actin was used as a loading control. Expression of the α-CTF was detected across all mutant genotypes, while the β-CTF was absent in both the OMP-hAPP/BACE-/- mice and the Camk2a-hAPP/BACE-/- mice, consistent with a loss of BACE activity. In addition, Aβ peptide is found in OEs from OMP-hAPP and Camk2a-hAPP mice, but is not detectable in OMP-hAPP/BACE-/- and Camk2a-hAPP/BACE-/- mice, further confirming the lack of BACE activity. All values are reported as the mean ± SD. Scale bars: A, 100 µm; B, 20 µm. *p < 0.01, **p < 0.001.

In Figure 5, we show that in tetO-hAPP control mice TMRE effectively labels mitochondria within the mature OSNs (OMP-GFP positive) and also in sustentacular cells. By comparison, TMRE applied to OE tissue from OMP-hAPP mice shows labeling in sustentacular cells but very little signal in hAPP-expressing OSNs (identified by OMP-GFP), suggesting dysfunctional mitochondria. We next performed electron microscopy on OE from both mutant and control animals to examine OSN mitochondria at higher resolution and observed a clear difference in mitochondrial morphology. Mitochondria in mutant OSNs showed a darker matrix and indistinct cristae, in sharp contrast to the healthy appearance of mitochondria in the control OSNs (Fig. 6). Together, these data suggest that hAPP-induced cell death of OSNs is initiated in a cell-autonomous manner and mediated by the intrinsic apoptosis pathway (Fig. 7).

Discussion
Our olfactory studies have presented three transgenic lines (OMP-hAPP, Gγ8-hAPP and Camk2a-hAPP), all of which show visible APP-induced apoptosis of OSNs by 3 weeks of age. Since the Camk2a promoter drives hAPP expression both in OSNs and throughout the brain, the consistent early emergence of olfactory phenotypes suggests that OSNs are more sensitive to the detrimental effects of hAPP than other neuronal types. If so, this would provide a scientific basis for olfactory dysfunction occurring early in AD (Talamo et al., 1989; Bacon et al., 1998; Hawkes, 2003; Doty, 2009; Arnold et al., 2010) and present a clear advantage to using OSNs for uncovering these early-stage mechanisms. In addition, their increased sensitivity to neurodegenerative factors makes OSNs an attractive candidate for use in screening assays. It has been generally accepted that apoptosis contributes to neurodegeneration in AD (Behl, 2000; Shimohama, 2000), as evidence of apoptotic cell death has been reported in patients (Behl, 2000; Shimohama, 2000). However, the mechanisms remain unknown, in part because many animal models of AD do not display active neurodegeneration (Wirths and Bayer, 2010). Studies of apoptosis in both physiological and pathological conditions have shown that it is a highly sophisticated process with two main pathways: the intrinsic or mitochondrial pathway and the extrinsic or death receptor pathway (Elmore, 2007; Tait and Green, 2010).
The intrinsic pathway involves the signaling cascade of mitochondrial damage, the release of cytochrome c, and the activation of caspase-9, while the extrinsic pathway involves transmembrane receptor-mediated interactions and the activation of caspase-8. Both pathways converge on the same execution cascade initiated by the cleavage of caspase-3 (Elmore, 2007; Tait and Green, 2010). Previous studies have shown evidence of mitochondrial damage in AD patients (Lin and Beal, 2006), implicating activation of the intrinsic pathway. Using our olfactory models, in which large-scale neurodegeneration is readily observable, we show here that the intrinsic apoptosis pathway is clearly activated in OSNs overexpressing hAPP, supporting this link and further suggesting that targeting this pathway may hold some therapeutic value.

Figure 3. Cell death in the olfactory epithelium continued into adulthood without Aβ deposits. A, Cleaved caspase-3 signal in the OE of 2-month-old Camk2a-hAPP/BACE-/-, OMP-hAPP/BACE-/- and control animals. B, At this age, the mutants still had many more OSNs with caspase-3 signal than controls, which colocalized with the hAPP signal. C, Amyloid deposits developed in the cortex and hippocampus of 2-month-old Camk2a-hAPP/BACE+/+ mice (arrowheads), but not Camk2a-hAPP/BACE-/- or tetO-hAPP/BACE-/- mice. Cleaved caspase-3 signal was not elevated in the cortex or hippocampus of any genotype at either 3 weeks of age (data not shown) or 2 months of age. Scale bars: A, 100 µm; B, 20 µm; C, 200 µm.

In the past several decades, significant effort has centered on eliminating Aβ plaques or reducing Aβ levels as a general strategy for combating AD. Unfortunately, therapeutic development based on this approach, whether through antibody-based clearance of Aβ (Doody et al., 2014; Salloway et al., 2014) or suppression of Aβ production by interfering with BACE1 or γ-secretase activity (Green et al., 2009; Ghosh et al., 2012; Doody et al., 2013), has generally proven ineffective (Karran et al., 2011; Cummings et al., 2014). While there are many reasons why this approach has not been more successful, including compounds that simply fail to perform, it is also possible that Aβ deposits are not the central cause of AD. Indeed, studies have shown that Aβ plaque load does not directly correlate with cognitive function (Engler et al., 2006; Price et al., 2009; Rentz et al., 2010), which is often used as an outcome measure for assessing treatment efficacy. Another possibility is that the progression of AD is such that diagnosis based upon cognitive deficits places the disease beyond a critical threshold for effective intervention through Aβ clearance, making early diagnosis the key factor. Thus, it would be prudent to consider alternative treatment strategies that are not based upon Aβ levels and may be more evident in those aspects of AD that occur prior to cognitive decline, such as olfactory loss.

Figure 4. Active caspase-9 expression in hAPP-expressing OSNs of both OMP-hAPP and Camk2a-hAPP mice. A, B, Cleaved caspase-9 signal in the OE of 3-week-old OMP-hAPP, Camk2a-hAPP and control animals (A) showed OSNs that were positive for caspase-9 signal and clearly colocalized with the hAPP signal in both mutant lines (B). C, Quantification of caspase-9-positive cells showed a significant increase in both hAPP-expressing lines compared with the controls (Camk2a-hAPP, 16.7 ± 4.1, n = 4; and OMP-hAPP, 27.3 ± 7.0, n = 4; compared with tetO-hAPP, 7.4 ± 0.6, n = 4).
D, In addition, the relative expression levels of the apoptosis markers active caspase-3, caspase-9 and BAX were all increased in the mutant lines compared with controls, while full-length caspase-3 and caspase-9 levels showed a small reduction in the mutant lines, consistent with an increased conversion to their cleaved active forms. Scale bars: A, 100 µm; B, 20 µm. *p < 0.05.

We have demonstrated that hAPP-induced apoptosis of OSNs occurs independently of BACE1 and Aβ, supporting our assertion that hAPP expression alone can cause widespread cell death of OSNs without the presence of extracellular amyloid deposits (Cheng et al., 2011). Moreover, this finding suggests that Aβ accumulation may not be the sole cause of neuronal death in AD and that there may be an early cell-autonomous phase of the disorder that is independent of Aβ. One possibility involves the APP intracellular C-terminal domain (AICD), which is derived from both the α-CTF and β-CTF and has already been shown to cause Aβ-independent neurodegeneration (Ghosal et al., 2009). The AICD is also elevated in AD patients and can produce AD-like pathology with cognitive deficits in transgenic mice, again independent of Aβ levels (Ghosal et al., 2009). Interestingly, our olfactory models (OMP-hAPP and Camk2a-hAPP) both show the expected loss of the β-CTF in the BACE1-/- background, but little effect on the α-CTF (Fig. 2E,F), suggesting that it could be involved in OSN neurodegeneration. Alternatively, given the various other components of APP shown to produce neurodegenerative effects (Ghosal et al., 2009; Nikolaev et al., 2009; Simón et al., 2009; Willem et al., 2015), it is also possible that each piece is involved at different stages of the disease, with Aβ crucial to a later stage. Thus, some components may play a larger role during the initial phases of AD, affecting only very sensitive neurons such as OSNs, while other components act later in the disease process, more as catalysts to reinforce and propagate neurodegeneration to less sensitive areas of the brain.

Figure 6. Ultrastructural imaging of OE shows damaged mitochondria in dendritic knobs of hAPP-expressing OSNs. A, Schematic of the OE depicting the superficial sustentacular support cells lining the apical surface with OSN dendritic knobs protruding between them into the lumen. Electron micrographs corresponding to the OE apical region (red boxed region in the schematic), with a tetO-hAPP control (middle) showing three support cells, two with distinct nuclei (shaded green), and portions of four OSN dendritic knobs (shaded blue) protruding into the lumen among the fragmented cilia, while an OMP-hAPP mutant (bottom) shows disrupted support cell organization (green nucleus) and very few OSN dendritic knobs (one shaded blue). B, Comparison of an OSN dendritic knob from a tetO-hAPP control (top panels) and an OMP-hAPP mutant (bottom panels) reveals a clear alteration in the mitochondrial morphology of OMP-hAPP animals, which appear dark with indistinct features compared with the healthy appearance of mitochondria in the control animals showing clear cristae. The panels on the right correspond to the boxed regions in the left panels. Arrowheads point to mitochondria. Scale bars: A, 2 µm; B, 500 nm.

While the neurotoxic effects of elevated Aβ are well established (Forman et al., 2004; Querfurth and LaFerla, 2010; Holtzman et al., 2011), our results clearly
demonstrate that hAPP-induced neurodegeneration can also occur independent of A␤, indicating the presence of other pathogenic mechanisms that may be closely linked to early-stage disease and thus provide important insight toward understanding how AD is initiated. Figure 7. hAPP expression activates the intrinsic apoptosis pathway in OSNs. Schematic showing the key steps in both extrinsic and intrinsic pathways of apoptosis. Both intrinsic and extrinsic apoptosis pathways can lead to cell death by activating common late stage factors, such as caspase-3 but may be triggered through distinct mechanisms. The extrinsic pathway is typically activated via death receptors in the plasma membrane [e.g., Fas, TNFR (tumor necrosis factor receptor), TRAIL (tumor necrosis factor-related apoptosis-inducing ligand)], which activate FADD (FAS-associated death domain)/TRADD (Tumor necrosis factor receptor type 1-associated death domain), leading to caspase-8-mediated activation of the late-stage apoptosis pathway. The intrinsic pathway can be triggered by cell-autonomous factors (e.g., DNA damage, cell stress) possibly involving mitochondrial dysfunction and activation of caspase-9, which can activate caspase-3 followed by apoptosis. The clear activation of caspase-9 in hAPP-expressing OSNs both in OMP-hAPP and Camk2a-hAPP mutant mouse lines combined with the overt mitochondrial changes strongly indicate that hAPP-induced apoptosis is mediated by the intrinsic pathway.
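The caspase-9 quantification above reports only group means, SEMs, and n = 4 per genotype, which is already enough to recompute pairwise comparisons. The sketch below is illustrative rather than the authors' analysis code; it converts SEM to standard deviation (SD = SEM × √n) and applies Welch's t-test, which SciPy can run directly from summary statistics.

```python
# Illustrative re-analysis of the reported caspase-9-positive cell counts
# (mean ± SEM, n = 4 per line); values are taken from the figure legend above.
# This is not the authors' code, and Welch's t-test is an assumption here.
from math import sqrt
from scipy.stats import ttest_ind_from_stats

groups = {
    "Camk2a-hAPP": (16.7, 4.1, 4),  # (mean, SEM, n)
    "OMP-hAPP":    (27.3, 7.0, 4),
    "tetO-hAPP":   (7.4, 0.6, 4),   # control line
}

ctrl_mean, ctrl_sem, ctrl_n = groups["tetO-hAPP"]
ctrl_sd = ctrl_sem * sqrt(ctrl_n)   # SD = SEM * sqrt(n)

for name in ("Camk2a-hAPP", "OMP-hAPP"):
    mean, sem, n = groups[name]
    sd = sem * sqrt(n)
    t, p = ttest_ind_from_stats(mean, sd, n, ctrl_mean, ctrl_sd, ctrl_n,
                                equal_var=False)  # Welch's t-test
    print(f"{name} vs tetO-hAPP control: t = {t:.2f}, p = {p:.3f}")
```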
2016-10-08T01:47:31.943Z
2016-07-01T00:00:00.000
{ "year": 2016, "sha1": "f2780b2d3182197b1140cd7a03962392960d2d03", "oa_license": "CCBY", "oa_url": "https://www.eneuro.org/content/eneuro/3/4/ENEURO.0150-16.2016.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ef2c53b391d25e7ac958482e003e83e9a5e09e5a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
204909869
pes2o/s2orc
v3-fos-license
Measures of Satisfaction from the Implementation of Quality Standards in Higher Education Institutions of Saudi Arabia Background: The purpose of this paper was to investigate satisfaction with the implementation of quality standards in higher education institutions (HEIs) of Saudi Arabia. Quality is the key factor in achieving accreditation. Researchers believe that the implementation of quality standards is closely related to the satisfaction of students, and thus quality standards in higher education institutions indirectly play a pivotal role in improving the outcomes of universities. Method: Hermeneutic, discourse, and heuristic analyses were employed, supported by the computer-based qualitative package ATLAS.ti. Findings: The findings show that there was satisfaction with the implementation of quality standards in the large universities of Saudi higher education; on the other hand, the findings also identified the main reasons for dissatisfaction with the implementation of quality standards in some small universities. Conclusion: The study concludes that the implementation of quality standards is proceeding at a slow pace and that in some institutions the standards exist only on paper and have not yet been put into effect in reality. Previous work has examined standards and satisfaction as judged by supervisors in higher education institutions, while Harvey and Green presented the following dimensions of quality: 1. providers, i.e., the funding agencies and community, where quality is interpreted as value for money; 2. the users of products, i.e., existing and future students, where quality is interpreted in terms of excellence; 3. the users of outputs, i.e., employers, where quality is taken as fitness for purpose; and 4. the employees working in that area/sector, i.e., the academics and administrators, where quality is construed as consistency. According to Van Kemenade et al. (2008), quality is to be described with four parts: 1. the object, 2. the standard, 3. the subject, and 4. the value; these expound four value systems on quality and quality management, which include control, continuous improvement, commitment, and breakthrough. Quality management systems have been established to direct and control an organization with regard to quality (ISO 9000:2000) (Magd, H., & Curry, A., 2003). A quality management system consists of quality planning (criteria driven), which defines the standards and determines how to satisfy them (Koilias et al., 2011) and lays out the roles and responsibilities, resources, procedures, and processes to be utilized for quality control; quality assurance (prevention driven), a review to ensure alignment with the quality standards, consisting of planned and systematic quality activities that provide confidence that the standards will be met; and quality control (inspection driven), which addresses the assessment conducted during quality assurance for corrective actions and measures specific results to determine whether they match the standards. According to Suhre, Jansen, & Harskamp (2007), it uses statistical process control (SPC), which provides a methodology for monitoring the process, identifying special causes of variation, and signaling the need to take corrective action when appropriate, and it relies on control charts (Feng, Prajogo, Tan, & Sohal, 2006). However, the most prominent concept is the organization-wide management philosophy of
continuously improving the quality of products/services and their processes. Continuous quality improvement is the responsibility of everyone involved in the production or use of the products or services offered, and hence interested in their quality. Higher education in any country plays a significant role in the transformation of society from fused to prismatic and then to diffracted, as grouped by Riggs (Heady, 2013) for comparative analysis. It is the education sector upon which a nation's uplift depends for economic growth and development. Higher education produces graduates according to the precise needs of the country in order to cater to local, national, and sometimes international requirements across different sectors of the economy, and it is considered the backbone of any economy. The quality of the graduates produced makes the difference. The Saudi higher education system is gradually moving toward excellence, and therefore the government is focusing more on the quality of education and not merely on education itself. The Saudi National Commission for Academic Accreditation & Assessment (NCAAA) is the official agency established in 2004 with the vision to develop and ensure implementation of quality standards in Saudi higher education institutions (NCAAA, 2013). This official accreditation and quality standards body of Saudi Arabia works under a board of directors whose members include experts from both the public and private sectors, i.e., directors drawn from government, institutions, and industry professionals. This study was undertaken to measure the satisfaction of quality supervisors with the implementation of quality standards in higher education in Saudi Arabia. This paper relates to the accreditation and quality assurance theme. Logical Argumentation from the Review of the Literature Several research studies have been undertaken to investigate the relationship between satisfaction and the implementation of quality standards in higher education institutions around the globe in relation to HEI performance; they have found varying and sometimes contradictory results. Researchers such as Bou-Llusar et al. (2009), Tari, Molina and Castejon (2007), and Kaynak (2003) found a positive and significant relationship between these two factors; however, researchers such as Corredor and Goni (2010), Macinati (2008), and Benner and Veloso (2008) found results contrary to the above-mentioned studies: their studies identified a negative relationship between satisfaction and the implementation of quality standards with regard to management practices, especially in universities. More recently, several studies in the field of quality in higher education have also been carried out, which highlight the significance of the related notions. If we explore the history of quality in higher education, the application of quality assurance can be traced back to the quality assurance schemes in European higher education, which were first introduced in France (1984), the UK (1985), and the Netherlands (1985), as reported by Westerheijden et al. (2007).
The significance of quality assurance in HEIs was acknowledged at the Louvain meeting (April 2009), in which ministers from European Union member countries participated; they stressed the need to enhance quality in European higher education institutions, and universities throughout Europe subsequently adopted external evaluation systems and introduced ISO 9001:2000 certification as an integral part of their internal quality management systems (Terziovski & Power, 2007; Hutyra, 2005; Lagrosen, Seyyed-Hashemi, & Leitner, 2004). Researchers such as Feng, Prajogo, Tan, & Sohal (2006) and Shemwell et al. (1998) hold that the quality of services is closely connected with customer satisfaction. They further asserted that the perceived quality of HEIs depends on the satisfaction of the students. Studies conducted by Martensen et al. (2000) in Europe used the European Customer Satisfaction Index to measure students' perceived quality and satisfaction with their HEIs. Similarly, Sureshchandar et al. (2002) explored the relation between service quality and customer satisfaction with regard to perceived service quality, whereas Elliot and Shin (2002) worked on the positive effect of quality on student satisfaction and concluded that satisfaction plays a significant role in student motivation and retention, besides recruiting efforts. Bigne et al. (2003) found similar results, namely that overall service quality is significantly associated with student satisfaction; the findings of Bigne et al. were further confirmed by Ham and Hayduk (2003), who reported a positive association between perception of service quality and satisfaction of students. Likewise, Suhre et al. (2007) investigated the impact of quality on student satisfaction, academic accomplishment, and dropout. They found that student accomplishment largely depends on program satisfaction, inter alia the changes in academic ability. More recently, Lee and Tai (2008) explored some of the critical factors regarding satisfaction with quality standards and quality assurance practices in HEIs that might have significant impacts on student satisfaction in higher education institutions. Kim and Richarme (2009), while exploring the phenomenon of satisfaction with quality standards implementation in HEIs, found that student satisfaction is one of the best indicators paving the way for improvement and resulting in positive financial implications for institutions. Researchers using the Kano model have observed that there is an asymmetric relationship between quality and satisfaction (Tsirintani et al., 2010). Martensen et al. (2000) suggested the use of a multi-criteria methodology to understand the dynamics of the issue of satisfaction with quality standards implementation. They suggested connecting the features of the quality of education services rendered by HEIs to student satisfaction; their model focuses on several satisfaction criteria and sub-criteria with varying quality attributes of the services offered, i.e., the study program, the teaching environment, the staff, and the tools and equipment used during the teaching and learning process. Furthermore, a study by Westerheijden, Hulpiau, & Waeytens (2007) recommended linking the proposed multi-criteria to student satisfaction and contended that action needs to be undertaken to improve the overall performance of these factors.
Perspective on Quality Standards in Saudi Arabian HEIs The Saudi agency for quality implementation in HEIs, the NCAAA, was established in 2004 as an independent authority for accreditation and quality assurance by the Higher Council of Education of the Kingdom of Saudi Arabia. The NCAAA is responsible for setting out by-laws in the field of higher education; in general, its description of responsibilities includes its role in the system of accreditation and quality assurance, where its key role is to establish standards, criteria, and procedures for academic assessment and accreditation, besides the provision of training and support to the faculty and staff responsible for the establishment and development of quality assurance systems in HEIs. Furthermore, the NCAAA evaluates and assists HEIs in developing the quality assurance documents and reports needed for their accreditation process, and it is responsible for managing and coordinating the external accreditation reviews of specific programs as well as of institutions. The quality establishment and development process is done in three stages: 1. the system for quality assurance and accreditation, 2. internal quality assurance arrangements, and 3. the external reviews for accreditation and quality assurance (NCAAA, 2013). Al Arefa & Waqran (2007) identified a lack of active usage and implementation of modern information systems in all administrative processes as a cause of poor implementation of quality standards in Saudi universities. They further pointed out that there is no additional incentive program for the people involved in the quality departments of Saudi universities to encourage their productivity and to improve efficiency through adequate and timely implementation of the standards; furthermore, there is a lack of awareness, capacity, and capability on the part of the administration with regard to quality concepts. Likewise, there is a lack of proper training facilities, refresher courses, and workshops to raise the level of awareness about the significance of quality and to enhance the knowledge and skills of quality-related professionals. Al Harbi (2002), while investigating the reasons for student dissatisfaction with quality standards and quality assurance practices, observed that there is no, or only a very weak, quality culture among teaching staff, which calls for their orientation through workshops and seminars in order to cultivate a new culture of quality in the universities. He also pointed out the lack of an incentive program as a major barrier to increasing teaching capabilities and promoting academic research. Further, he contended that there is a need to improve skills and knowledge of information technology in education, besides continuous development and review of the curriculum and course contents. Agreeing with Al-Harbi (2002), Al Arefa & Waqran (2007) further report that satisfaction with quality is also related to the physical and infrastructural facilities of classrooms in terms of illumination, ventilation, and availability of technology. In this context, the review of the literature highlights that most research studies exploring the relationship between these two factors have been conducted in Western contexts, and there is a dearth of studies, with very few to be found, in the Gulf perspective.
Despite the dearth of literature, some studies in the Saudi perspective have identified the following weak areas responsible for weak or slow implementation of quality standards in HEIs of Saudi Arabia: customers/students have no role in measuring HEI performance, and thus they have no effect on organizational performance; however, top management is committed, which is a positive sign of the transformation and modernization of these institutions (QAAAU, 2009). The major determinants, along with the criteria and sub-criteria that emerged from the literature review after operationalization and that measure student satisfaction with the implementation of quality standards in HEIs of Saudi Arabia, are presented in Table 1 below, which describes the critical factors of the issue under study. Methods and Tools In order to understand the issue under investigation, a review of the existing literature was done. In line with qualitative approaches to data analysis, the researchers examined, categorized, tabulated, and recombined the data with the help of hermeneutic (James, 1992), discourse (Max, 1990), and heuristic (Moustakas, 1990) analyses. In a later step, the computer-based software ATLAS.ti was employed for qualitative data analysis by feeding the major concepts and variables of the study into ATLAS.ti. Coding, extraction of quotes, and creation of memos were done with the help of ATLAS.ti. The schematic diagram of the theoretical framework given below explains the association between the dependent and independent variables in the context of satisfaction with the implementation of quality standards in higher education of Saudi Arabia. Conclusions, Suggestions & Implications According to a famous saying, "there is always room for improvement", because human-created organizations and systems cannot be marked as perfect; perfection lies only with Allah Almighty. To err is human, so mistakes, errors, omissions, and deficiencies are always expected to be part and parcel of human-created organizations and systems. This demands strict oversight, monitoring, evaluation, direction, and control in the form of revisiting objectives, policies, plans, programs, and decisions in order to keep pace with modern-day changes and challenges. Therefore, it is not only essential but imperative to keep improving the quality of service as one of the most important responsibilities and tasks of higher education institutions. Researchers have developed several arguments, based on logic, supporting a close relationship between service quality and student/client satisfaction. Likewise, a few of the studies discussed above contend that the perceived quality of HEIs largely depends on satisfaction, and that it is therefore increasing customer/student satisfaction that leads HEIs to an upswing in service quality. The current study employed a multi-criteria methodology in analyzing the level of student satisfaction with the implementation of quality standards in HEIs of Saudi Arabia, in order to shed more light on the association of student satisfaction with characteristics of quality through a set of criteria and sub-criteria representing different dimensions of the quality standards and their implementation process in Saudi HEIs.
Based on our qualitative analyses of the related literature, this study found that only slight or marginal improvements are needed, as the implementation of quality standards in the larger universities is proceeding smoothly as planned; nevertheless, the results confirm the importance of investigating and analyzing student satisfaction and its implications for particular quality dimensions of higher education. The study found interesting results consistent with previous studies; for example, students take more interest in the satisfaction criteria that reflect their own demands. More precisely, the researchers believe that students attach high importance to the credibility and fame of the programs of study in these seats of learning, because credibility reflects their overall quality and reliability. Similarly, most of the studies contend that criteria such as the program of study, the role of the teaching staff, the administrative services offered, and the equipment and tools are less important. The study concludes that one of the most significant indicators of satisfaction with quality standard implementation for Saudi HEIs could be the adoption of a satisfaction barometer when evaluating the level of satisfaction with quality standards in these higher education institutions. This necessitates the development of a system and mechanism that regularly and frequently monitors student satisfaction and that is linked to quality policies and actions. The latter may also be associated with the external evaluation system of quality standards; a combination of internal and external assessments and evaluations will result in a cohesive and structured quality framework through which the pace of implementation of quality standards can be accelerated and student satisfaction achieved, which is imperative for earning credibility and a good name in the education sector. With this background, the study suggests that higher education institutions in the Saudi Arabian context must follow quality assurance and accreditation as a vehicle to materialize their goals and to smooth the implementation of quality standards. For this purpose, it is further suggested that a culture of quality awareness be disseminated among personnel through on-the-job training programs employing a range of means, so as to assure that the educational services rendered meet national demands in general and the expectations of students in particular. Furthermore, this study suggests that a built-in mechanism be put into quality systems so that, through continuous improvement, these institutions can achieve a high level of independence, credibility, and stability; for this purpose, they need to develop an overall strategic plan with a clear vision for quality assurance. The best option for these institutions to establish the quality of the educational services they offer is to obtain accreditation from national or international institutions/agencies, which could pave the way for the recognition of these HEIs locally, nationally, and internationally.
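To make the suggested satisfaction barometer concrete, a minimal sketch of a weighted multi-criteria index is given below. The criterion names, weights, and scores are hypothetical illustrations, not data from this study; they only show how criteria and sub-criteria scores could be rolled up into a single monitorable reading.

```python
# Illustrative satisfaction barometer: a weighted multi-criteria index.
# Criterion names, weights, and scores are hypothetical, not study data.
criteria = {
    # criterion: (weight, mean satisfaction score on a 1-5 scale)
    "study program":           (0.30, 4.1),
    "teaching staff":          (0.25, 3.8),
    "administrative services": (0.20, 3.2),
    "tools and equipment":     (0.15, 3.5),
    "infrastructure":          (0.10, 3.9),
}

assert abs(sum(w for w, _ in criteria.values()) - 1.0) < 1e-9, "weights must sum to 1"

# Weighted index on the 1-5 scale, then rescaled to a 0-100 barometer reading.
index = sum(weight * score for weight, score in criteria.values())
barometer = (index - 1) / 4 * 100

print(f"weighted satisfaction index: {index:.2f} / 5")
print(f"barometer reading: {barometer:.1f} / 100")
```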
2019-10-10T09:26:29.065Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "33d8f1e509e9a8bc250d113ac64c93b6f2e8126c", "oa_license": "CCBY", "oa_url": "https://www.iiste.org/Journals/index.php/DCS/article/download/49434/51077", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f9094e265441034b3b426fc2ea329c33a4eea951", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Business" ] }
90423355
pes2o/s2orc
v3-fos-license
Physicochemical properties and antimicrobial activities of soap formulations containing Senna alata and Eugenia uniflora leaf preparations Senna alata leaf extract demonstrates antimicrobial properties that promise utility for treatment of topical infections. Its combination with the similarly bioactive Eugenia uniflora leaf extract in soap formulation could enhance anti-infective efficacy. The objective of this study was to develop potent antiseptic herbal soap formulations (HSFs) with the combined leaf extracts of the two plants. A soap base having suitable physicochemical properties (emolliency, foaming potential, and pH) was selected from a series of trial formulations produced from basic soap ingredients. Into this were incorporated three different preparations, namely the methanolic fresh leaf extract (FLE), the methanolic dry leaf extract (DLE), and the pulverized dry leaf sample (DLP) of S. alata and E. uniflora, respectively, singly or combined in 1:1 (w/w) ratio, to produce HSFs containing 5, 9, or 11% w/w concentrations of the leaf preparations. The physicochemical properties of the HSFs were determined, as well as their antimicrobial activities by hole-in-plate agar diffusion assay against Staphylococcus aureus, Bacillus subtilis and Candida albicans. The selected soap base exhibited highest-rank emolliency, satisfactory stable froth production, and pH value. The physicochemical properties of the resulting HSFs were similar. The HSFs containing combinations of the DLEs at 9 and 11% concentrations demonstrated antimicrobial activities against S. aureus and C. albicans comparable (p>0.05) to those of the comparator commercial antiseptic soap containing 0.30% triclosan. B. subtilis was less sensitive (p<0.05) to the HSFs. On the other hand, when used singly, the DLEs as well as the FLEs and DLPs were significantly less potent (p<0.05) than the DLEs combined in the soap formulations. In conclusion, the HSFs containing S. alata and E. uniflora DLEs combined (1:1 w/w) at 9 and 11% concentrations exhibited satisfactory physicochemical properties and potent antimicrobial activities similar to the comparator commercial antiseptic soap employed in the study. INTRODUCTION Plants have always contributed largely to medicines and healthcare preparations by providing lead compounds for drug development or as refined herbal remedies (Iwu, 1993). Different plant parts have been used in traditional medicines around the world for treatment of human diseases and infections (Vineela and Elizabeth, 2005; Ekpo and Etim, 2009). Plants containing bioactive (antimicrobial) principles demonstrate potential for use as anti-infective agents and could be formulated as topical herbal remedies (as ointment, cream, lotion, gel, soap or crude/solvent extract) for the care and treatment of skin infections, as an alternative to using synthetic antimicrobial agents. Senna alata (L.) Roxb (Caesalpiniaceae), synonym Cassia alata, is a shrub widely distributed in tropical countries and popularly known as the ringworm plant due to the utilization of its fresh leaves for treatment of skin diseases such as ringworm, eczema, pruritus, scabies, and ulcers (Burkill, 1995; Reezal et al., 2002). Phytochemical screening of alcoholic extract of Senna leaves has revealed the presence of anthraquinone glycosides, phenolic compounds and saponins, which could account for some of its biological activities, including antimicrobial and antioxidant effects (Sharma et al., 2010). The leaf extract of S.
alata prepared in different solvents and by various techniques has been reported to demonstrate antimicrobial activity. When the fresh leaves were extracted with different solvents, only the extracts derived from polar solvents (water, methanol) exhibited antibacterial activity against Staphylococcus aureus, while the extracts from non-polar solvents (n-hexane, acetone) were inactive (Faruq et al., 2010). Whereas the freeze-dried aqueous extract of the fresh leaves showed antifungal activities comparable to that of acriflavine (6 mg/ml) against Epidermophyton floccosum and Candida pseudotropicalis (Akinde et al., 2002), the air-dried powdered leaf ethanolic and aqueous extracts demonstrated a much broader spectrum of antimicrobial activities (Ogunjobi and Abiala, 2013). Air-dried S. alata leaves formulated as soap exhibited antifungal activity against the fungus Saccharomyces cerevisiae, but showed no inhibitory activity against the bacterial organisms S. aureus, E. coli and P. aeruginosa (Aminuddin et al., 2016). Thus, preparation and formulation factors were shown to influence the antimicrobial properties of S. alata crude preparations. Antimicrobial activities of the dried, powdered leaf ethanolic extract and leaf essential oil of Eugenia uniflora Linn (Myrtaceae) have also been reported against several bacterial and fungal species (Fiuza et al., 2008; Victoria et al., 2012), while other biological activities and potentials of its various parts and constituents are also reported, which support the ethnomedicinal uses of the plant in treating bronchitis, influenza and intestinal problems (Souza et al., 2004; Fortes et al., 2015; da Cunha et al., 2016). Furthermore, combinations of E. uniflora with other plant extracts (Bernardo et al., 2015) or a chemical agent (metronidazole) (Santos et al., 2013) have demonstrated enhanced antimicrobial activity of the plant, while the activity of formulations of E. uniflora extracts as soaps and ointments has also been reported (Alalor et al., 2012; Aminuddin et al., 2016). Studies on triclosan, an antimicrobial agent popularly used in antiseptic toiletries, have raised questions about its possible hazard to human health (Deliaert et al., 2008; Zorrilla et al., 2009) and its contribution to the development of antibiotic-resistant germs in the environment (Chalew and Halden, 2009). In the United States of America, the Food and Drug Administration (FDA) announced the prohibition of the sale of "consumer antiseptic washes" containing triclosan effective September 2017 (FDA, 2016). The need for safer antiseptic ingredients has, therefore, become more apt. This present study aimed to develop an effective anti-infective herbal soap formulation with the combined leaf extracts of Senna alata and E. uniflora using soap ingredients that would enhance emolliency on the skin. MATERIALS AND METHODS Good quality grade palm kernel oil, coconut oil and shea butter were procured locally at the Main Market, Ile-Ife, Nigeria. The shea butter was purified by melting and filtering through a filter paper No. 100 (24 cm diameter, Rundfilter MN713 Macherey-Nagel D-5160 Duren, Germany) in a funnel into a flask placed in an oven (60°C). The filtrate was poured into a clean glass container and left for seven days at room temperature (30±2°C) to solidify. Standard grades of other formulation ingredients, namely sodium hydroxide (pellets), sodium lauryl sulphate, stearic acid, and oleic acid (Evans Medical Ltd., Liverpool), were also used. Collection of S. alata and E. uniflora leaves Fresh leaves of S. alata and E.
uniflora were collected from Adagun Abiri Road, Ile-Ife, and at New Buka, Obafemi Awolowo University (OAU), Ile-Ife, respectively, within the period from July to August 2013. The leaves were authenticated at the herbarium of the Faculty of Pharmacy, OAU Ile-Ife, Nigeria. Preparation of S. alata and E. uniflora leaves Approximately 200 g of the freshly collected leaves of each plant was macerated in neat methanol (solvent) on the same day of collection, extracted using a Soxhlet extractor (Scientific Glass Laboratories (SGL) Ltd., Staffordshire) at 40°C, and subsequently concentrated using a rotary evaporator (Rotavapor RII, Buchi Labortechnik, UK) at 40°C. The concentrate was oven-dried at 35°C for 2 h to produce the methanolic extract of the fresh leaves (that is, the fresh leaf extract, FLE). The dried, pulverized (dry leaf powder, DLP) forms of the S. alata and E. uniflora leaves were prepared by air-drying approximately 400 g of the collected leaves for 50 to 60 days at the ambient temperature (30±3°C) and then grinding the dry leaves with a laboratory mill (Christy and Morris Ltd., Chelmsford Essex, NJ USA) into fine powder. A 150 g portion of the dry, powdered leaves of each plant was macerated and extracted with methanol using the Soxhlet extractor, concentrated with the rotary evaporator at 40°C, and oven-dried at 35°C for 2 h to produce the methanolic extract of the dried, pulverized leaves (that is, the dry leaf extract, DLE). Preparation of soap base formulations Three formulations of soap base (A, B, and C; 130 g each) were initially prepared in duplicate by the cold and hot processes using the basic soap ingredients palm kernel oil (PKO), sodium hydroxide (NaOH) and distilled water, in the concentrations shown in Table 1. In the cold process, the required weight of NaOH pellets was dissolved in the required quantity of water, and approximately 2 min was allowed for the exothermic dissolution of the pellets. The PKO was heated on a water-bath to about the same temperature (≈60°C) as the NaOH solution. The NaOH solution was then slowly poured into the oil (PKO) while stirring continuously with a plastic spatula until a slurry was formed (that is, the slurry stage). The slurry was then poured into plastic moulds to produce soap tablets, approximately 25 g each, and allowed to stand undisturbed for 48 h at the ambient temperature (29±2°C) to solidify. The soap preparation was removed from the moulds, wrapped in cellophane and kept for four weeks, to allow for curing. The hot process was similar in its initial steps to the cold process. After mixing the warm aqueous NaOH solution with the heated oil (PKO), the hot slurry was further heated on a water-bath until a suitable endpoint for the required heating process was reached, indicated by whitish coagulates appearing in the hot slurry. The slurry was then poured into moulds and allowed to set, and subsequently cured over the next four weeks.
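As a rough cross-check on formulations like these, the NaOH demand of an oil blend can be estimated from saponification (SAP) values. The sketch below is purely illustrative: the SAP values and the 5% superfat margin are typical textbook assumptions, not measurements from this study, and the quantities in Table 1 remain the authoritative recipe.

```python
# Illustrative lye (NaOH) estimate for a mixed-oil soap base.
# SAP values (mg KOH per g oil) are typical literature figures, not values
# measured in this study; the 5% superfat margin is likewise an assumption.
KOH_TO_NAOH = 40.0 / 56.1  # molar-mass ratio NaOH/KOH

sap_koh = {  # representative mid-range SAP values
    "palm kernel oil": 245.0,
    "coconut oil": 255.0,
    "shea butter": 180.0,
}

blend_g = {"palm kernel oil": 40.0, "coconut oil": 20.0, "shea butter": 20.0}

superfat = 0.05  # leave ~5% of the oils unsaponified for emolliency

naoh_g = sum(
    grams * sap_koh[oil] / 1000.0 * KOH_TO_NAOH for oil, grams in blend_g.items()
) * (1 - superfat)

print(f"estimated NaOH required: {naoh_g:.1f} g for {sum(blend_g.values()):.0f} g oils")
```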
Other soap base formulations (D, E, F, G, H, I, J, and K; 130 g each; Table 1) were also prepared by the hot process using shea butter and/or coconut oil, in varied proportions of the soap base ingredients, as were other soap base formulations (L, M, and N; 130 g each; Table 1) with the inclusion of excipients such as sodium lauryl sulphate (a surfactant) and stearic acid or/and oleic acid (fatty acids), intended to enhance performance and stability of the soap. Determination of physicochemical properties of soap base formulations All the soap base formulations prepared were tested for their physicochemical properties. Foaming propensity testing To determine the foaming propensity, a 1 g portion of each soap formulation was dissolved in 10 ml of water (distilled and tap water) by minimum heat (≤60°C), and 5 ml of the resultant solution was transferred into a 10-ml test tube. The test tube was shaken for 1 min using a vortex test tube mixer (Salford Scientific Supplies Ltd, Henderson Biomedical, UK) and then left to stand undisturbed. The time taken for the soap solution to defoam, in triplicate tests, was recorded. pH determination The pH value of a 1 g sample of each soap formulation dissolved in 10 ml of distilled water was determined in triplicate with a digital pH meter (HM Digital Inc., Culver City, USA) at preset time intervals after production of the soap, namely 24 h (Day 1), Day 7 (Week 1), Week 4, Week 12, and Week 18. Emolliency test The emolliency test was designed to evaluate occlusiveness of the formulations. A 2 g portion of each soap formulation was smeared onto the surface of white sheets of paper over approximately 5 cm² surface area and left to stand on the laboratory shelf for 24 h (temperature 29±1°C; humidity 78±2%, determined with a wet/dry bulb hygrometer), after which the degree of translucency was graded on a three-level ranking: mild, moderate, or strong translucency. Preparation and determination of physicochemical properties of herbal soap formulations The FLE, DLE, and DLP preparations of S. alata and E. uniflora, as well as equal-quantity combinations (1:1 w/w ratio mixing) of the preparations, namely S. alata FLE mixed with E. uniflora FLE, S. alata DLE mixed with E. uniflora DLE, and S. alata DLP mixed with E. uniflora DLP, were each incorporated into the selected soap base formulation (coded K) at the slurry stage of the preparation process before pouring into moulds. The different test preparations were incorporated at concentrations of 5, 9, or 11% w/w into the soap base formula K (Table 1). Foaming propensity testing and pH determination at preset intervals over 12 weeks were carried out on the resulting herbal soap formulations. Similar tests were carried out on the comparator soap, Septol® antiseptic soap (Bush W.J. & Co. (Nig.) Ltd.), a commercial antiseptic soap product containing 0.30% triclosan as the active (antimicrobial) principle.
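Because both the defoaming times and the pH readings were taken in triplicate at preset intervals, the summaries reported later reduce to simple means with SEMs. A minimal sketch (all readings below are invented placeholders, not the study's data):

```python
# Summarizing triplicate defoaming times (min) and pH readings per time point.
# All numbers here are placeholders for illustration only.
import statistics as st

defoam_min = {"soap base K": [62, 58, 60], "Septol (distilled)": [270, 281, 274]}
ph_by_week = {0: [10.1, 10.0, 10.2], 1: [9.9, 9.8, 9.9], 4: [9.7, 9.7, 9.8]}

def mean_sem(values):
    m = st.mean(values)
    sem = st.stdev(values) / len(values) ** 0.5  # SEM = SD / sqrt(n)
    return m, sem

for name, times in defoam_min.items():
    m, sem = mean_sem(times)
    print(f"{name}: {m:.0f} ± {sem:.0f} min to defoam")

for week, readings in sorted(ph_by_week.items()):
    m, sem = mean_sem(readings)
    print(f"week {week}: pH {m:.2f} ± {sem:.2f}")
```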
Antimicrobial activity testing of herbal soap formulations The antimicrobial activities of the herbal soap formulations, the soap base K (negative control), and Septol® antiseptic soap (positive control) were determined using the hole-in-plate agar diffusion assay against S. aureus (NCTC 6571), Bacillus subtilis (NCTC 8236) and Candida albicans (a clinical isolate obtained at the Microbiology Department, OAU Ile-Ife, Nigeria). A pure, distinct colony of each bacterial strain inoculated into 10 mL aliquots of Mueller Hinton broth (Oxoid, UK) and incubated at 37°C for 18 h was used. A 0.2 mL volume of the culture of each organism was then seeded into 20 mL aliquots of molten Mueller Hinton agar (MHA; Oxoid) in sterile Petri dishes and allowed to set. Antifungal activity against Candida was tested using a 48 h surface culture of C. albicans on Sabouraud dextrose agar (SDA; Oxoid) slopes, which after being washed off was diluted to an inoculum size of 10⁷ cfu/mL and used to seed 20 mL aliquots of molten SDA in sterile Petri dishes and allowed to set. Wells (9 mm diameter) were cut into the seeded agar plates with a sterile cork borer, and approximately 150 mg of each herbal soap sample was introduced into the holes in quadruplicate experiments. The plates were left at room temperature (29±1°C) for 1 h to allow for diffusion and then incubated at 37°C for 24 h for bacteria and at 25°C for 48 h for fungi, after which the diameters of the inhibition zones were measured. Statistical analysis The data obtained were evaluated by two-way analysis of variance (ANOVA) followed by the F test, and Student's t-test for paired mean comparisons, to determine the statistical significance of differences in computed mean values. In all cases, differences were considered significant at the p≤0.05 level, and the data were presented as mean ± standard error of the mean (SEM). Foam stability and pH profile of soap base formulations The time taken for foam disappearance or complete foam collapse of the aqueous solutions of the different soap base formulations varied. The foams persisted longer (indicating higher foaming capacity and foam stability) in distilled water than in tap water (community pipe-borne supply). Formulation J gave the most stable foam produced in distilled water, lasting 69 min (Table 2). Soap base formulations C, D and E contained the same quantities (61.5%) of PKO, SB and CO, and of NaOH (7.7%). However, E demonstrated the longest foam stability in distilled water, while D had the shortest stability in tap water. Soap base K had the least foam stability, and a pH lower than the other two soap bases, among the three oil-combination soap bases (I, J, K). All the soap base formulations expectedly produced alkaline pH solutions (Tokosh and Baig, 1995), the values of which decreased gradually over the 18 weeks of study (Table 2). The use of relatively high concentrations of NaOH with low oil concentrations resulted in higher pH of the soap base solutions (B, F). Emolliency of soap base formulations The ranked emolliency results of the soap base formulations (Figure 1) revealed a trend. The relative translucency produced by the formulations showed a general correlation with the overall concentration of oil present in the soap formulations (rounded, in Figure 1, to the nearest integer). Thus, most of the formulations that produced strong translucency (A, C, D, E, and K) contained very high (62 to 67% w/w) total oil concentrations (Figure 1). Formulations G and H (55% oil content), which also gave strong translucency on white paper, contained two oils combined in their formulae (Table 1). Of the three soap bases prepared with combinations of the three oils (I, J, K), soap base K, which contained a relatively higher proportion of SB, demonstrated the highest emolliency (Figure 1).
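The two-way ANOVA described under the statistical analysis above can be expressed compactly with statsmodels. The snippet below is a generic sketch: the observations are made-up placeholders rather than the study's measurements, and it only illustrates the analysis structure (formulation and organism as crossed factors).

```python
# Sketch of a two-way ANOVA for inhibition-zone data.
# The observations below are made-up placeholders, not the study's results.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "formulation": ["HSF 9% DLE"] * 4 + ["Septol"] * 4 + ["soap base K"] * 4,
    "organism": ["S. aureus", "C. albicans"] * 6,
    "zone_mm": [24, 20, 25, 21, 26, 22, 25, 23, 15, 0, 16, 0],  # placeholders
})

# Crossed factors: formulation, organism, and their interaction.
model = ols("zone_mm ~ C(formulation) * C(organism)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # type II sums of squares
print(anova_table)
```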
The fact that formulation F, having only one oil component at 46% concentration in its formula (Table 1), demonstrated strong emolliency (Figure 1) suggests that its coconut oil component possesses a greater oleaginous (lipophilic) property than does PKO (the oil component of formulation B; Table 1), formulation B being a similar (single oil, PKO) soap product with a higher (51%) oil concentration (Table 1) but showing only mild occlusive character (Figure 1). Formulation B had the lowest oil concentration (51%) among the formulations containing PKO as the sole oil ingredient (Table 1) but contained the highest NaOH of the three, indicating more effective saponification of the oil, which would leave less unsaponified oil to provide emolliency. The soap base formulation K was finally selected as the most suitable for incorporation of the S. alata and E. uniflora leaf preparations, since it demonstrated the highest emolliency (Figure 1) and consistently showed the lowest pH values throughout the 18 weeks of study (Table 2). Physicochemical properties of herbal soap formulations Foaming propensities of the herbal soap formulations in tap and distilled water (Table 3) were in a similar range to those of the soap base K (Table 2), but were much lower than those of the comparator soap, Septol®, the froths of which lasted 129 ± 5 and 275 ± 8 min (approximately 2 and 4½ h), respectively, in tap and distilled water. On the other hand, pH values of the herbal soap solutions (Table 3) were also similar to those of the plain soap base K (Table 2), indicating that incorporation of the S. alata and E. uniflora leaf preparations did not alter the physicochemical properties of the soap base considerably. Septol® aqueous solutions demonstrated a lower pH value (9.34 ± 0.12) than soap base K (9.7; Table 2), but the values were not significantly different (p>0.05) and remained virtually unchanged over the study period, as found also for the herbal soap counterparts (Table 3). Antimicrobial activities of herbal soap formulations The soap base (K) demonstrated antibacterial activities, giving inhibition zone diameters of 15.5 ± 1.5 and 18.5 ± 1.5 mm (mean ± SEM), respectively, against S. aureus and B. subtilis, but no activity against C. albicans. However, incorporation of the plant preparations made the resulting herbal soap formulations active against C. albicans and also more active against the bacterial inocula (Table 4). They were, however, significantly lower in activity than Septol® against B. subtilis (p<0.05). B. subtilis was also insensitive to some of the herbal soap formulations produced with the dry leaf powder (DLP) or FLE forms of both plants combined. On the other hand, the activities of S. alata and E. uniflora leaf preparations used singly in soap formulations were significantly lower than those of Septol® (p<0.05) (Table 4). DLE forms of S. alata and E. uniflora used in the soap formulations (whether singly or combined) were active against all the test organisms (Table 4), and so proved superior in their antimicrobial activities to similar formulations produced with the other leaf preparations, against which some organisms demonstrated no apparent sensitivity (Table 4). The E. uniflora DLE used alone in the soap formulations generally demonstrated greater antimicrobial activities than the S. alata DLE used alone, particularly at the lower (5 and 9%) concentrations (Table 4). The 9 and 11% concentrations of both S. alata and E.
uniflora leaf extractives in the soap formulations generally produced higher antimicrobial activities than the lower concentration (5%) (Table 4). Concentrations of the plant preparations higher than 11% w/w were, however, not used in the study because preliminary experiments had shown that the foaming propensity of the soap formulations was lost at such concentrations. DISCUSSION The importance of an anti-infective soap formulation lies in keeping the user's skin both clean and healthy through its cleansing and antimicrobial actions, removing skin-surface hydrophobic dirt and microbes, which can clog and infect dermal pores. It is the combination of these functions that makes an antiseptic soap formulation superior to, and often preferred above, ordinary soap preparations for the prevention or treatment of inflammatory skin conditions such as acne or impetigo. Soaps belong to the anionic group of surfactants. Anionic and cationic surfactants are known to generally enhance penetration of antimicrobial agents through the cell wall of microorganisms, and this constitutes a possible mechanism by which they usually enhance the activities of such agents against infection-causing organisms (Hugo and Russell, 1998; Aiello et al., 2007). The soap bases prepared in this study demonstrated some degree of antibacterial activity. Inclusion of leaf extracts increased the antibacterial activity and extended antifungal activity. Fresh leaf juice of S. alata has been reported to exhibit reduced activity on storage beyond 48 h at ambient temperature, probably due to hydrolysis of the active constituents (Akinde et al., 2002). The presence of water in the fresh leaves, which reduces the relative weight used for the extraction in comparison to the weight of the dry leaves, would affect the overall quantity of active components of the FLE. These factors may account for the higher antimicrobial activity of the S. alata DLE compared to the FLE. On the whole, the results of this study have established the potency of the S. alata and E. uniflora DLE forms combined in soap formulation against susceptible organisms (S. aureus and C. albicans) as being comparable to the activities of the comparator antiseptic soap, Septol®. These organisms are known to be commonly associated with human skin (Chiller et al., 2001) or as opportunistic pathogens in man (Gow and Yadav, 2017). Evaluation of emolliency may include the use of ranked indices (Parente et al., 2008). The emolliency test used in the present study was designed to evaluate occlusiveness of the soap formulations. Occlusive agents produce translucency on white paper due to the presence of residual oils in the formulation. The extent of translucence should therefore indicate the relative amount of residual oils present in the soap sample after the saponification process. This is demonstrated by the results in Figure 1, where the highest emolliency was observed with soaps containing high concentrations of oils, singly or combined. By mechanism of action, emollients are occlusive, humectant and/or restorative. Occlusive agents form a thin film on the skin surface preventing moisture loss, mostly due to the presence of natural oils (Choi and Maibach, 2005; Bouwstra and Ponec, 2006). The use of emollients in topical products corrects problems in skin scaling disorders, and emollients may also have suppressive effects on epidermal thickening, in addition to anti-inflammatory activity and transient relief from irritation (Nola et al., 2003). The glycerol (end-product of the saponification reaction) in all the soap formulations of this study was not separated from the soap, for the possible benefit of contributing its moisturizing quality to the user's skin when the soap products are used (Tucker, 2011).
Coconut oil is reputed for producing good quality suds when used in the preparation of soaps (Gervajio, 2005); hence the quality of the foam produced with this oil compared with those of PKO and SB. Shea butter contains a higher proportion of unsaponifiable matter than the other two oils (Moharram et al., 2006). This might be responsible for its low foaming ability, but it caused its soap base to be more emollient and of lower pH. The presence of excessive NaOH in a soap preparation will increase the pH of the soap, as observed with soap bases B and F. Tap water is likely to contain divalent and trivalent metals, which may reduce the foaming and foam stability of the monovalent sodium soaps in water by forming water-immiscible divalent soaps. The skin has a pH range of 4 to 6. To reduce irritation, skin products are expected to have a pH as close to this range as possible. The pH of the comparator soap product is similar to that of the formulated herbal soap, even though the values are not within the pH range of the skin. The comparator soap is popularly used with no reported adverse effect on the skin due to pH. Conclusions This study has shown that S. alata and E. uniflora dry leaf methanolic extracts combined in 1:1 w/w ratio and formulated into soap at 9 or 11% w/w concentration exhibit antimicrobial activities against S. aureus and C. albicans comparable to those of a comparator soap, Septol®, containing 0.30% triclosan. The resultant herbal soap formulations also demonstrated suitable pH and foam stability properties, and could therefore serve as a substitute for soaps containing synthetic antiseptic agents, especially triclosan, which has become controversial because of its untoward effects in humans (Deliaert et al., 2008) and the environment (Chalew and Halden, 2009).
Figure 1. Emolliency ranking of soap base formulations: relative translucency produced on white paper.
Table 1. Composition of soap base formulations. PKO: palm kernel oil; SB: shea butter; CO: coconut oil; NaOH: sodium hydroxide; SLS: sodium lauryl sulphate.
Table 2. Foam stability duration and pH of soap base formulations.
Table 3. Foam stability duration and pH of herbal soap formulations.
Table 4. Microorganisms' susceptibility to herbal soap formulations containing S. alata and E. uniflora leaf preparations.
2019-01-02T23:17:20.354Z
2017-12-25T00:00:00.000
{ "year": 2017, "sha1": "c1311325070ce90fdb8497d392057e123218a059", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/JMPR/article-full-text-pdf/9A03E2C67053.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c1311325070ce90fdb8497d392057e123218a059", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
4980903
pes2o/s2orc
v3-fos-license
Evolution-Guided Structural and Functional Analyses of the HERC Family Reveal an Ancient Marine Origin and Determinants of Antiviral Activity ABSTRACT In humans, homologous to the E6-AP carboxyl terminus (HECT) and regulator of chromosome condensation 1 (RCC1)-like domain-containing protein 5 (HERC5) is an interferon-induced protein that inhibits replication of evolutionarily diverse viruses, including human immunodeficiency virus type 1 (HIV-1). To better understand the origin, evolution, and function of HERC5, we performed phylogenetic, structural, and functional analyses of the entire human small-HERC family, which includes HERC3, HERC4, HERC5, and HERC6. We demonstrated that the HERC family emerged >595 million years ago and has undergone gene duplication and gene loss events throughout its evolution. The structural topology of the RCC1-like domain and HECT domains from all HERC paralogs is highly conserved among evolutionarily diverse vertebrates despite low sequence homology. Functional analyses showed that the human small HERCs exhibit different degrees of antiviral activity toward HIV-1 and that HERC5 provides the strongest inhibition. Notably, coelacanth HERC5 inhibited simian immunodeficiency virus (SIV), but not HIV-1, particle production, suggesting that the antiviral activity of HERC5 emerged over 413 million years ago and exhibits species- and virus-specific restriction. In addition, we showed that both HERC5 and HERC6 are evolving under strong positive selection, particularly blade 1 of the RCC1-like domain, which we showed is a key determinant of antiviral activity. These studies provide insight into the origin, evolution, and biological importance of the human restriction factor HERC5 and the other HERC family members. IMPORTANCE Intrinsic immunity plays an important role as the first line of defense against viruses. Studying the origins, evolution, and functions of proteins responsible for effecting this defense will provide key information about virus-host relationships that can be exploited for future drug development. We showed that HERC5 is one such antiviral protein that belongs to an evolutionarily conserved family of HERCs with an ancient marine origin. Not all vertebrates possess all HERC members, suggesting that different HERCs emerged at different times during evolution to provide the host with a survival advantage. Consistent with this, two of the more recently emerged HERC members, HERC5 and HERC6, displayed strong signatures of having been involved in an ancient evolutionary battle with viruses. Our findings provide new insights into the evolutionary origin and function of the HERC family in vertebrate evolution, identifying HERC5 and possibly HERC6 as important effectors of intrinsic immunity in vertebrates. immunodeficiency virus, innate immunity, interferons, intrinsic immunity, simian immunodeficiency virus Vertebrates possess multiple defense mechanisms to inhibit the replication of viruses. This defense system is largely composed of specialized hematopoietic cells that react nonspecifically to pathogens (innate immunity), an antibody-dependent and cell-mediated response (adaptive immunity), and core cellular effector proteins called restriction factors (intrinsic immunity). Restriction factors are considered to be the front line of defense against viral infection, since their activity typically does not require virus-triggered signaling or intercellular communication (1).
The importance of intrinsic immunity in vertebrates is highlighted by the evolutionarily ancient origin and broad antiviral activity of restriction factors, such as bone marrow stromal antigen 2 (BST-2)/tetherin (2). Other restriction factors, such as apolipoprotein B mRNA-editing enzyme catalytic polypeptide-like 3G (APOBEC3G) and tripartite motif protein 5 alpha (TRIM5α), are unique to placental mammals and appear to play more specialized antiviral roles by targeting a more limited range of viruses, largely retroviruses (3–8). Interferon-stimulated gene 15 (ISG15) and/or its conjugation to newly translated proteins (referred to as ISGylation) exhibits broad antiviral activity toward evolutionarily diverse viruses, including those belonging to the families Retroviridae, Orthomyxoviridae, Flaviviridae, Togaviridae, Herpesviridae, Poxviridae, Arteriviridae, and Pneumoviridae. The main cellular E3 ligase responsible for ISGylation activity is "homologous to the E6-AP carboxyl terminus (HECT) and regulator of chromosome condensation 1 (RCC1)-like domain-containing protein 5" (HERC5), an interferon (IFN)-induced restriction factor that has evolved under strong positive selection in vertebrates (11, 28–30, 32–36). HERC5 belongs to a subfamily of four small HERC proteins, HERC3 to -6. Although referred to as "small," the small HERC proteins are ~116 kDa in size, each containing a single amino-terminal RCC1-like domain and a carboxyl-terminal HECT domain. The small HERCs are classified as E3 ligases due to the presence of their HECT domains and their ability to conjugate ubiquitin or ubiquitin-like molecules to proteins (32–34). Although the biological functions of the small-HERC family have not been fully defined, their E3 ligase activities have been implicated in a variety of biological processes, such as protein degradation, cell signaling, spermatogenesis, tumor suppression, and antiviral defense (reviewed in reference 37). By virtue of their RCC1-like domains, HERCs also belong to the phylogenetically widespread RCC1 superfamily of proteins (38, 39). The prototypical member of this superfamily is RCC1, characterized by the presence of seven repeats of 51 to 68 amino acids that assume a 7-bladed β-propeller structure. RCC1 is localized in the nuclei of eukaryotic cells and activates the GTPase Ras-related nuclear (Ran) protein (40). RCC1 maintains a >1,000-fold higher level of RanGTP in the nucleus than in the cytoplasm, which is critical for Crm1-dependent nuclear export of macromolecules (41, 42). We previously showed that human HERC5 inhibits the Crm1-dependent nuclear export of incompletely spliced human immunodeficiency virus type 1 (HIV-1) RNA, resulting in a severe reduction in the level of intracellular HIV-1 Gag protein and production of virus (29). This mechanism is independent of its E3 ligase activity. Blade 1 of the 7-bladed β-propeller RCC1-like domain structure of HERC5 was critical for this inhibition and contained numerous residues predicted to be evolving under positive selection, identifying the region as a potential key antiviral interface between HERC5 and viruses (29). Thus far, antiviral activity has been demonstrated only for human HERC5 and its functional homolog in mice, HERC6 (11, 28, 30, 32–36). Here, we investigate the evolutionary origins and antiviral activities of the HERC family members, providing a better understanding of the role these proteins play in intrinsic immunity.
RESULTS

The small-HERC gene family has an ancient marine origin more than 595 million years ago. With the sequencing of many evolutionarily diverse vertebrate and mammalian genomes, we can approximate the emergence and divergence of gene families throughout evolution. We analyzed the most recent genome assemblies (UCSC Genome Browser [https://genome.ucsc.edu]) and NCBI gene and protein sequence databases for the presence of small HERC gene members. The oldest small-HERC member is HERC4, which is present in one of the only surviving lineages of jawless fish, sea lampreys (originating ~595 million years ago [mya]) (43). To better understand the evolution of the small-HERC family, we investigated the emergence and divergence of HERC genes in evolutionarily diverse vertebrates. The elephant shark is among the oldest and most slowly evolving jawed vertebrates and has accumulated a small number of chromosomal rearrangements (44). This allowed us to look for evidence of gene expansion at an early point in vertebrate evolution (~476 mya) (43). A single copy of HERC4 and multiple copies of HERC3 are present in elephant sharks. Two copies of HERC3 are located immediately adjacent to HERC4, likely representing an early point in vertebrate evolution (~476 to 595 mya), just after the small-HERC family expanded with the duplication and divergence of HERC4 (Fig. 1). HERC3 and HERC4 are present in all the jawed vertebrates examined except the platypus, which appears to be the only vertebrate that contains two copies of HERC4. Since the only available assembly for platypus is considered low coverage, future improvements in the assembly are needed to help explain this apparently unique composition of the HERC family in these mammals. Chromosomal rearrangement likely occurred sometime after the divergence of ray-finned fish from cartilaginous fish (~430 mya), giving rise to two different chromosomal HERC loci in most vertebrates, where the HERC3-HERC5-HERC6 locus is flanked by FAM13A and PIGY-PYURF and HERC4 by MYPN and SIRT1 (Fig. 1B). The HERC family expanded further after the divergence of cartilaginous fish (~430 mya), with the emergence of HERC6, which is present in most jawed vertebrates with an apparent absence in platypus and some fish (e.g., zebrafish) and bird (e.g., chicken, turkey, and zebra finch) species (Fig. 1). The last expansion of the HERC family occurred after the divergence of ray-finned fish (~413 mya), with the likely duplication of HERC6, giving rise to HERC5 (Fig. 1B). HERC5 is present in the coelacanth, one of the earliest predecessors of tetrapods, and appears to have been lost in some species of frogs, birds (e.g., chicken), metatherian (marsupial) mammals (e.g., opossum), and rodents (e.g., mouse) (Fig. 1) (43, 45). A partial HERC5-like gene was identified in turkey, and a partial HERC4-like gene was identified in turkey and finch, possibly indicating erosion of the gene family in birds. In species that appear to have lost HERC orthologs, we cannot rule out the possibility that the orthologs are present but were missed due to low sequence homology and/or incomplete genome annotation. Together, these findings indicate that the small-HERC family has an ancient marine origin at least 595 mya, before the emergence of jawed vertebrates, and has undergone chromosomal rearrangement, gene duplication, and potential gene loss events during vertebrate evolution.

Evolutionarily distant RCC1-like domains and HECT domains are well conserved.
Phylogenetic analysis of HERC sequences showed segregation of the small HERC genes into four major clusters consisting of HERC3, HERC4, HERC5, and HERC6 (Fig. 2A). Most HERC orthologs have high sequence homology, ranging from ~70 to 100% amino acid identity, whereas most HERC paralogs have low homology, ranging from ~34 to 57% (see Fig. S1 in the supplemental material). Notably, coelacanth and lizard HERC5 sequences clustered on their own, showing more sequence similarity to HERC6 genes than to other HERC5 genes, possibly indicating that these genes are actually HERC6 or a hybrid of HERC5 and HERC6. Similar tree topologies regarding the main branching were predicted using several tree-generating algorithms (maximum likelihood, minimal evolution, unweighted pair group method using average linkages [UPGMA], and neighbor-joining methods). Consistent with the approximate emergence times of the small-HERC family members shown in Fig. 1, HERC4 is the oldest of the HERC paralogs, followed by HERC3, HERC6, and then HERC5. The small HERCs are believed to have arisen from a gene fusion event between an RCC1-like domain and a HECT domain (46). Although the approximate date of this event is unknown, the presence of HERC4 in jawless fishes (e.g., lampreys) suggests that the fusion event occurred more than 595 mya. [Fig. 1 legend (condensed fragment): HERC sequences, including HERC6 (green), were searched in genome assemblies using the UCSC Genome Browser (https://genome.ucsc.edu) and NCBI gene and protein sequence databases; the presence of HERC orthologs in each species is indicated by colored lines, with approximate divergence dates shown on a timeline as previously described by Hedges et al. (43); the Bayesian tree was obtained from a multiple-sequence alignment of 251 genes with a 1:1 ratio of orthologs in 22 vertebrates, rooted on cartilaginous fish (support was 100% for all clades but armadillo and elephant, with 45%), as described previously (45).] Typically, the primary amino acid sequences of RCC1-like domains have low sequence homology in the RCC1 superfamily; however, their tertiary structures are highly conserved (38, 39). To assess how conserved the predicted tertiary structures are among the different small-HERC members, we generated a phylogenetic tree based on the Q_H structural measure derived from alignment of the predicted tertiary structures of RCC1-like domains (Fig. 2B to F; see Table S1 in the supplemental material). The models were predicted using 3D-Jigsaw v2.0 (https://bmm.crick.ac.uk/~populus/) and showed that each of the RCC1-like domains of HERC3 to -6 adopted the characteristic β-propeller structure of the superfamily, despite their low sequence homology (47-51). Alignment of the structures was carried out using the program Structural Alignment of Multiple Proteins (STAMP), which is a tool for aligning protein sequences based on their three-dimensional structures (52). The STAMP algorithm minimizes the Cα distance between aligned residues of each molecule by applying globally optimal rigid-body rotations and translations. STAMP analysis revealed that the RCC1-like domains of the HERC orthologs are well conserved overall, with HERC3 and HERC4 being the most conserved (Fig. 2B to F). Several paralogs showed greater similarity to each other than to their orthologous counterparts (e.g., lizard HERC4 and human HERC6). Chimpanzee HERC3 differed substantially from the other HERCs in that it lacks blade 3 of the β-propeller. Notably, the amino-terminal blade 1 of each RCC1-like domain is the least conserved region and adopts a more extended conformation than the other blades. Analysis of the predicted HECT domain structures showed that they all adopted the typical bilobal structure of HECT domains (Fig. 2G to K). STAMP analysis showed that the different orthologs are well conserved overall, with HERC3 and HERC4 being the most conserved.
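The ortholog and paralog identity ranges quoted above (roughly 70 to 100% versus 34 to 57%; Fig. S1) come from pairwise comparisons over a multiple-sequence alignment. The authors do not publish a script for this; a minimal sketch of such a calculation, assuming Biopython and a hypothetical alignment file name, could look like the following:

```python
# Minimal sketch: pairwise percent amino acid identity across an aligned
# FASTA file. Requires Biopython; "herc_alignment.fasta" is a hypothetical
# file name, not one provided by the paper.
from itertools import combinations
from Bio import AlignIO

alignment = AlignIO.read("herc_alignment.fasta", "fasta")

def percent_identity(seq_a, seq_b):
    """Identity over columns where neither sequence has a gap."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

for rec_a, rec_b in combinations(alignment, 2):
    pid = percent_identity(str(rec_a.seq), str(rec_b.seq))
    print(f"{rec_a.id} vs {rec_b.id}: {pid:.1f}% identity")
```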
Some HERC5 and HERC6 paralogs shared more similarity to each other than to their respective orthologs. Notably, coelacanth and mouse HERC6 proteins were more similar to HERC5 paralogs than to other HERC6 orthologs, potentially indicating conservation in structure and function of these HECT domains (Fig. 2K). This is consistent with the finding that mouse HERC6 is the functional homolog of human HERC5; these are the main cellular E3 ligases for ISG15 in mice and humans, respectively. Together, these data show that despite low sequence homology, evolutionarily divergent HERC genes share remarkable similarity in the predicted structures of their RCC1-like domains and HECT domains.

Human HERC3 to -6 differentially inhibit HIV-1 particle production. Given the remarkable similarity in the predicted structures of the HECT and RCC1-like domains, we asked whether human HERC3, HERC4, and HERC6 inhibited HIV-1 particle production like HERC5. To test the effect on HIV-1 replication of knocking down endogenous HERC protein levels, we first screened different HERC short hairpin RNA (shRNA) constructs for the ability to knock down endogenous HERC mRNA and protein levels. As shown in Fig. 3A, several of the shRNA constructs knocked down HERC mRNA levels by 2- to 5-fold. For each shRNA construct used, no significant differences in mRNA levels were detected for any of the other related small HERCs, demonstrating specificity (Fig. 3B). Unfortunately, we were unable to readily measure endogenous levels of HERC proteins using several different commercially available antibodies. This could be due to poor recognition of endogenous HERC proteins by the antibodies and/or tightly controlled cytosolic levels of HERC proteins, as previously observed (34). As such, HERC protein knockdown efficiencies of the shRNA constructs were instead determined using exogenously expressed Flag-tagged HERC constructs (Fig. 3C). The shRNAs that knocked down HERC expression most effectively were used in the subsequent experiments. HOS-CD4-CXCR4 cells were cotransfected with plasmids carrying HERC shRNA and HIV-1 R9 (a full-length replication-competent NL4-3 derivative). After 72 h of replication (~2 or 3 rounds), quantitative Western blot analysis of cell lysates or virions produced from cells showed that cells knocked down for HERC5 expression exhibited a significant increase in Gag particle production (~8-fold), whereas HERC3, -4, and -6 released modestly more virions into the supernatant than the control cells (~2-fold) (Fig. 3D). The amount of infectious HIV-1 in the supernatant was also measured by infecting the TZM-bl indicator cell line, which enabled quantitative analysis of HIV-1 using luciferase as a reporter (53-57). Cells knocked down for endogenous HERC5 expression failed to inhibit production of infectious HIV-1, whereas cells knocked down for HERC3, HERC4, or HERC6 expression released levels of infectious virions similar to those of the control cells (Fig. 3E).
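The 2- to 5-fold mRNA knockdown values above are the kind of readout produced by relative qPCR quantification (the qPCR itself is described under Materials and Methods). The paper does not spell out the calculation; a minimal sketch assuming the standard 2^-ΔΔCt approach, with entirely hypothetical Ct values, follows:

```python
# Minimal sketch of relative qPCR quantification by the standard 2^-ddCt
# method (an assumption; the paper reports fold knockdown but not the
# formula used). All Ct values below are hypothetical.

def fold_change(ct_target_kd, ct_ref_kd, ct_target_ctrl, ct_ref_ctrl):
    """Expression of the target gene in knockdown cells relative to control
    cells, normalized to a reference gene (e.g., GAPDH)."""
    d_ct_kd = ct_target_kd - ct_ref_kd          # normalize knockdown sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl    # normalize control sample
    dd_ct = d_ct_kd - d_ct_ctrl
    return 2 ** (-dd_ct)

# Example: HERC5 Ct rises by ~2 cycles after shRNA knockdown -> ~4-fold less mRNA
rel = fold_change(ct_target_kd=26.0, ct_ref_kd=18.0,
                  ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(f"Remaining HERC5 mRNA: {rel:.2f}x control ({1/rel:.1f}-fold knockdown)")
```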
To test the effect of increased HERC expression on single-round HIV-1 particle production, human 293T cells, which do not support multiround replication, were cotransfected with plasmids carrying HIV-1 (R9) and either empty vector, HERC3, HERC4, HERC5, or HERC6. As expected, HERC5 potently inhibited HIV-1 particle production (Fig. 3F). HERC3 and HERC4 also significantly inhibited particle production, but not as potently as HERC5. In contrast, HERC6 modestly inhibited HIV-1 particle production but did not achieve statistical significance. As expected, HERC5 also potently inhibited the production of infectious HIV-1, whereas HERC3, HERC4, and HERC6 modestly inhibited production of infectious HIV-1 (Fig. 3G). Each transfected HERC construct exhibited robust mRNA expression, although HERC3 and HERC4 levels were less than those of HERC5 and HERC6 (Fig. 3H). Taken together, these data show that upregulated expression of HERC3 to -6 inhibited HIV-1 particle production and replication to varying degrees, with HERC5 exhibiting the most potent activity and HERC6 the weakest activity. Notably, only endogenous levels of HERC5 significantly inhibited production of infectious HIV-1.

Human HERC3 to -6 differentially inhibit nuclear export of incompletely spliced RNA. We previously showed that human HERC5 blocked nuclear export of Rev-dependent HIV-1 RNA (29). To determine if HERC3, HERC4, and HERC6 also blocked nuclear export of Rev-dependent HIV-1 RNAs, we cotransfected 293T cells with plasmids carrying full-length HIV-1 R9 and either empty vector, HERC3, HERC4, HERC5, or HERC6. A plasmid encoding enhanced green fluorescent protein (eGFP) was also cotransfected to serve as a transfection control. Total RNA was harvested from the total cell extract or the cytoplasmic extract only and subjected to quantitative PCR (qPCR) with primers specific for either Gag (unspliced HIV-1 genomic RNA), Rev (fully spliced RNA), β-actin (loading control), or eGFP. Each of the small HERC proteins exhibited significant reductions in the amount of HIV-1 genomic RNA present in the cytoplasm, with HERC5 exhibiting the strongest activity (Fig. 4A). In contrast, no significant reductions in the export of fully spliced HIV-1 transcripts were observed (Fig. 4B). To further support this finding, we tested the abilities of the small HERCs to inhibit Gag expression from Rev-dependent (e.g., GagPol-RRE) and Rev-independent (e.g., GagPol-4×CTE) constructs, as previously described (29, 58). [Fig. 2 legend (condensed): (A) Maximum-likelihood tree of 91 HERC amino acid sequences (433 positions after removal of all gaps and missing data); the percentages of replicate trees in which the associated taxa clustered together in the bootstrap test (100 replicates) are shown next to the branches, branches reproduced in less than 50% of replicates were collapsed, initial trees for the heuristic search were obtained with neighbor-joining/BioNJ from pairwise distances under a JTT model, and evolutionary analyses were conducted in MEGA7 (83). H3, HERC3; H4, HERC4; H5, HERC5; H6, HERC6. (B to F) Evolutionary conservation of HERC RCC1-like domains: structures predicted with 3D-Jigsaw v2.0 and aligned with STAMP based on the Q_H structural measure, with a structure-based cladogram.]
[Fig. 2 legend, continued: Structural alignments were generated using STAMP, a plug-in in the MultiSeq interface of the Visual Molecular Dynamics (VMD) software (v1.9.2); residues are colored by degree of conservation within the alignment (blue, highly conserved; white, somewhat conserved; red, very low or no conservation). (G to K) The corresponding analysis for HERC HECT domains. See Table S1 in the supplemental material for the parent structure scaffolds used to generate all the models.] HIV-1 Rev promotes nuclear export of incompletely spliced HIV-1 mRNAs by binding to a specific cis-acting element called the Rev-response element (RRE) located within an HIV-1 intron (Fig. 4C). HIV-1 mRNA containing four copies of the Mason-Pfizer monkey virus constitutive transport element (4×CTE) in place of the RRE is not dependent on Rev for nuclear export and thus serves as a Rev-independent control (59). [Fig. 4 legend (condensed): (A and B) 293T cells were cotransfected with pR9, peGFP (transfection control), and either empty vector, pHERC3, pHERC4, or pHERC5; 48 h after transfection, total RNA from whole-cell lysates or the cytoplasmic fraction only was reverse transcribed and quantified by qPCR with primers for unspliced HIV-1 genomic RNA (Gag), fully spliced RNA (Rev), β-actin (loading control), or eGFP (transfection control). The proportion of unspliced or fully spliced HIV-1 RNA in the cytoplasmic fraction relative to total (nuclear plus cytoplasmic) HIV-1 RNA was determined after normalization to β-actin and eGFP, and fold changes in copy numbers relative to control cells are shown (averages plus SEM of four independent experiments). (C) Schematic of the Gag-Pol constructs (CMV, cytomegalovirus; SA, splice acceptor; SD, splice donor). (D) 293T cells were cotransfected with increasing amounts of plasmids encoding HERC3, HERC4, or HERC5 and either Rev-dependent GagPol-RRE (plus pRev) or Rev-independent GagPol-4×CTE, with total transfected DNA kept equal using empty vector; Gag levels in cell lysates were analyzed by quantitative Western blotting with anti-p24CA and anti-β-actin (loading control). (E and F) Densitometric quantification of Pr55Gag bands from the lanes containing the largest amount of each HERC, shown as average fold changes (plus SEM) relative to the empty-vector control after normalization to β-actin; significance by ANOVA with Dunnett's multiple-comparison test (*, P < 0.05; **, P < 0.01; ***, P < 0.001; ns, not significant).]
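Fold changes such as those in Fig. 4E and F are tested by one-way ANOVA with Dunnett's multiple-comparison test against the empty-vector control. A minimal sketch of that comparison, assuming SciPy 1.11 or later (which provides scipy.stats.dunnett) and hypothetical densitometry values:

```python
# Minimal sketch: one-way ANOVA followed by Dunnett's test against a shared
# control, as used for the densitometry fold changes. Requires SciPy >= 1.11
# for scipy.stats.dunnett; all values below are hypothetical.
import numpy as np
from scipy import stats

control = np.array([1.00, 0.95, 1.08, 0.97])   # empty-vector fold changes
herc3   = np.array([0.55, 0.62, 0.49, 0.58])
herc4   = np.array([0.60, 0.51, 0.66, 0.57])
herc5   = np.array([0.10, 0.14, 0.08, 0.12])

# Omnibus test across all groups
f_stat, p_anova = stats.f_oneway(control, herc3, herc4, herc5)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Each HERC group compared with the shared control
res = stats.dunnett(herc3, herc4, herc5, control=control)
for name, p in zip(["HERC3", "HERC4", "HERC5"], res.pvalue):
    print(f"{name} vs control: P = {p:.4f}")
```

Dunnett's test fits this design because every group is compared with the same control, which is less conservative than an all-pairs correction.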
Successful export of incompletely spliced RNA can be assessed by Western blotting for Gag protein expression. 293T cells were cotransfected with a plasmid encoding Rev and increasing concentrations of plasmids encoding HERC, with or without pGagPol-RRE or pGagPol-4×CTE. As shown in Fig. 4D and E, each of the small HERCs differentially inhibited nuclear export of Rev-dependent RNA, where HERC5 exhibited the highest level of inhibition and HERC6 the weakest inhibition. In contrast, none of the HERCs significantly inhibited nuclear export of Rev-independent RNA (Fig. 4D and F). Together, these findings indicate that the small-HERC members differentially inhibit nuclear export of Rev-dependent RNAs.

Antiviral activity of HERC5 evolved more than 413 million years ago. We next asked whether the antiretroviral activity of human HERC5 has an evolutionarily ancient origin. Since the coelacanth was the oldest vertebrate in which we identified HERC5, we tested the ability of coelacanth HERC5 to inhibit HIV-1 virus production. To assess potential virus-specific effects, we also tested the antiviral activity toward another, related nonhuman retrovirus, simian immunodeficiency virus (SIV) (SIVmac239, a full-length rhesus macaque derivative lacking the 5′ long terminal repeat [LTR]), which is thought to be at least 32,000 years older than HIV-1 (60). For comparison, we included human HERC5 and human HERC6, which exhibited the strongest and weakest anti-HIV-1 activities, respectively. Human 293T cells were cotransfected with plasmids carrying SIVmac239 or HIV-1 R9 and increasing concentrations of either empty vector, coelacanth HERC5, human HERC5, or human HERC6. Forty-eight hours after transfection, virus released into the supernatant was measured by Western blotting. As expected, human HERC5 exhibited strong inhibition, whereas both coelacanth HERC5 and human HERC6 exhibited little inhibition of HIV-1 (Fig. 5A to C). In contrast, each of the HERCs inhibited SIVmac239 virus production, with human HERC5 being the most potent (Fig. 5D to F). Together, these results demonstrate that the antiretroviral activity of HERC5 has an ancient marine origin at least 413 mya and that HERC5 and HERC6 exhibit species- and virus-specific antiviral activity.

HERC6 is evolving under positive selection. We previously showed that HERC5, especially blade 1 of its RCC1-like domain, has been evolving under positive selection for >100 million years (29). We performed a similar analysis for each of the small-HERC members to determine if the other members of the small-HERC family have been evolving under positive selection. HERC evolution in mammals was evaluated under several standard models of sequence evolution using the Server for the Identification of Site-Specific Positive Selection and Purifying Selection (Selecton) program (61-64). This comprised two nested pairs of models (M8a and M8; M7 and M8), in which the first model of each pair is nested in the second model. The M8 model, but not the M8a or M7 model, allows sites to evolve under positive selection. A nonnested-pair (M8a and MEC) model comparison was also performed. The MEC model differs from the other models in that it takes into account the differences between amino acid replacement rates (61). The nested models were compared using the likelihood ratio test. Analysis of 12 evolutionarily diverse HERC sequences using Selecton revealed that HERC6, but not HERC3 and HERC4, is evolving under positive selection (Fig. 6;
see Table S2 in the supplemental material). Allowing sites to evolve under positive selection (M8) gave a significantly better fit to the HERC6 sequence data than the corresponding models without positive selection (M8a and M7) (Fig. 6B). The MEC model, which allows positive selection, was compared with the M8a null model, which does not allow positive selection. Comparison of the Akaike information criterion (AICc) scores (M8a, 25,806; MEC, 25,557) revealed that the MEC model fits the HERC6 data better than the M8a model. The results of the MEC analysis were projected onto the primary sequence of human HERC6 (Fig. 6C). Notably, ~23% (23 of 102) of the codons cluster within the first 80 amino acids of the amino terminus of the RCC1-like domain, encompassing blade 1 of its predicted β-propeller structure. Another ~32% (33 of 102) cluster at the carboxyl terminus of the spacer region (amino acids ~630 to 680). These results show that strong positive selection is operating on HERC6, with a large number of codons in blade 1 of the RCC1-like domain and the carboxyl terminus of the spacer region evolving under positive selection.

Blade 1 of the RCC1-like domain of human HERC6 is an important determinant of anti-HIV-1 activity. Given the evolutionary similarities between human HERC5 and human HERC6, we asked why they differed in their antiviral activities. Since we previously showed that blade 1 of HERC5 is required for its anti-HIV-1 activity and that blades 1 of both HERC5 and HERC6 contain numerous residues evolving under positive selection, we asked if blade 1 of HERC5 can confer antiviral activity on HERC6. We replaced either the entire RCC1-like domain (H6:H5RLD) or blade 1 (H6:H5blade1) from human HERC5 with the corresponding region in human HERC6. We then measured the abilities of these HERC6 mutants to inhibit HIV-1 particle production. As shown in Fig. 7A, the H6:H5RLD and H6:H5blade1 mutants potently inhibited HIV-1 particle production similarly to wild-type HERC5. This inhibition occurred despite levels of H6:H5RLD and H6:H5blade1 protein expression slightly lower than that of wild-type HERC5 (Fig. 7B). This result indicates that blade 1 is an important determinant of antiviral activity. [Fig. 5 legend (condensed): Coelacanth HERC5 restricts SIV, but not HIV-1, particle production. 293T cells were cotransfected with a plasmid carrying HIV-1 (pR9) (A to C) or SIV (pSIVmac239) (D to F) and increasing amounts of plasmids encoding either coelacanth HERC5, human HERC5, or human HERC6; 48 h posttransfection, virus released into the supernatant was measured by quantitative Western blotting of Gag proteins using monoclonal anti-p24CA (183-H12-5C) or anti-SIVp17 (KK59). Average relative fold changes (plus SEM) from densitometric quantification of 3 independent Western blot images are shown; significance by one-way ANOVA with Dunnett's multiple-comparison test against the control cells (**, P < 0.01; ***, P < 0.001; ****, P < 0.0001).]

DISCUSSION

We showed here that the small-HERC family has an ancient marine origin, where HERC4 emerged at least 595 mya and expansion of the family occurred sometime after the divergence of jawed vertebrates from jawless vertebrates (~476 to 595 mya). Elephant sharks are among the oldest and most slowly evolving jawed vertebrates and have accumulated a small number of chromosomal rearrangements (44).
Thus, analysis of their genome allows us to gain insight into the early evolution and expansion of gene families. The presence of a single copy of HERC4 and multiple copies of HERC3 in the elephant shark likely represents an early time in vertebrate evolution when the HERC4 ancestral gene duplicated and evolved into HERC3. [Fig. 6 legend (condensed): Evolutionary analysis for positive selection in HERC6 used various models of evolution, where M8 and MEC allowed sites to evolve under positive selection and M7 and M8a did not; L represents the likelihood of the model given the data, p represents the number of free parameters, and N represents the sequence length. The lower the AICc score, the better the fit of the model to the data, and hence the more justified the model. (C) Schematic showing the results of a Bayesian analysis identifying positively selected sites, defined as sites with a ratio of nonsynonymous substitutions per nonsynonymous site (Ka) to synonymous substitutions per synonymous site (Ks) of >1.5 and a 95% confidence interval larger than 1, and therefore considered statistically significant. The HERC6 reference sequence accession number is NM_017912.3.] Although the evolutionary pressures in vertebrates triggering expansion of the small-HERC family are unknown, it is plausible, given their antiretroviral activity, that this expansion involved retroviruses. Endogenous retroviruses (ERVs) comprise a substantial portion of vertebrate genomes and appear to have an ancient marine origin, with evidence of ERV sequences found in the genomes of elephant shark, coelacanth, and possibly lamprey (65-68). A recent panvertebrate comparative genomic analysis showed that retroviruses have an unprecedented capacity for rampant host switching among distantly related vertebrates, undoubtedly exerting substantial evolutionary pressure on their hosts (68). Pressures like these can trigger antiviral gene duplication and neofunctionalization events in the hosts, allowing them to evolve more rapidly in order to maintain evolutionary dominance over viruses. Several examples where gene duplication/neofunctionalization has given rise to restriction factor families in primates are MX1, IFITM, TRIM5, and APOBEC3 (5, 6, 69-74). These genes, including HERC5 and HERC6, exhibit strong signatures of positive selection, which is consistent with repeated exposure to such evolutionary pressures. An interesting feature of the small-HERC family is the highly conserved topology of the RCC1-like domain, despite limited sequence homology. By allowing numerous amino acid substitutions while maintaining the overall protein configuration and antiviral activity, these HERC proteins may be able to interfere with the binding of diverse viral antagonists. Evidence of such an evolutionary battle lies in the strong signatures of positive selection in both human HERC5 and HERC6, especially blades 1 of their RCC1-like domains, which we have shown to be important determinants of antiviral activity. HERCs are not the only restriction factors likely to have played an important antiviral role early in vertebrate evolution. BST2/tetherin has also been shown to have an ancient marine origin, emerging >450 mya, before the separation of cartilaginous fish from bony vertebrates (2).
Like the HERC family, the general topology of BST2/tetherin orthologs is also highly conserved despite low sequence homology, and it is the overall protein configuration that is important for its antiviral activity (2, 75, 76). This evolutionary strategy may help BST2/tetherin and HERC proteins maintain evolutionary dominance over viruses. We observed that some small HERC genes have been lost in some species, most notably birds and rodents. For birds, this is not too surprising, given that their genomes have been subjected to lineage-specific erosion of repetitive elements, large segmental deletions, and gene loss (>1,000 genes), resulting in a smaller repertoire of immune genes than in humans (77, 78). Although HERC5 and HERC6 are missing in most bird species, HERC3 and/or HERC4 is present. Given their modest antiretroviral activity in humans, it will be interesting to learn if HERC3 and HERC4 play an antiviral role in birds, perhaps compensating for the loss of HERC5 and HERC6. Other potent antiviral restriction factor genes, such as BST2/tetherin, TRIM22, TRIM5, and APOBEC3G, are also notably absent from birds; however, they do possess other members of the BST, TRIM, and APOBEC families, whose antiviral activities remain largely uncharacterized in birds. Rodents also have HERC3 and HERC4 and possess at least one of the HERC5 and HERC6 genes. For example, mice, rats, and hamsters possess HERC6 but lack HERC5; ground squirrels possess HERC5 but lack HERC6; and guinea pigs possess both HERC5 and HERC6. Since no rodent lacks both HERC5 and HERC6, it is likely that one of these genes has assumed the role of the main cellular E3 ligase for ISG15 in the absence of the other, potentially adding a new antiviral defense for rodents. For instance, this is the case in mice, which possess only HERC6, which encodes the main cellular E3 ligase for ISG15 and serves as a critical antiviral defense mechanism in mice (79, 80). Our phylogenetic analysis, in which we showed that the predicted structures of the HECT domains of mouse HERC6 and human HERC5 (the main cellular E3 ligase for ISG15 in humans) share a high degree of similarity, also supports this finding. One possibility for the loss of HERC5 or HERC6 in rodents is that retroviruses or other viral pathogens have not provided constant selective pressure to maintain both genes in these species. A similar dynamic history of gene expansion and loss is evident for other restriction factors, such as TRIM22, TRIM5, and BST2/tetherin (2, 7, 81). As genes duplicate, neofunctionalize, and diverge in response to evolutionary pressures, reliance on the activities of the ancestral genes may diminish, or they may be replaced altogether by their more advantageous descendants (reviewed in reference 82). Therefore, it is not surprising that the human small-HERC family exhibits differential antiviral activity. Despite still possessing antiviral activity when overexpressed in vitro, it is unlikely that HERC3 and HERC4 play significant biological roles as antiviral proteins in humans, since they are not IFN induced, nor have they been evolving under positive selection. This role was likely assumed by HERC5 and HERC6 after the divergence of ray-finned fish from cartilaginous fish (~430 mya). However, HERC3 and HERC4 do exhibit differential tissue-specific expression (reviewed in reference 37).
Therefore, it is possible that HERC3 and HERC4 play more dominant antiviral roles in tissues where their basal expression is already much more elevated, such as in the brain, heart, and stomach for HERC3 and the brain, lung, and testis for HERC4. Moreover, species such as elephant shark, coelacanth, and platypus that contain duplicated copies of HERC3 or HERC4 genes may also express higher levels of these HERC proteins due to increased gene copy numbers. Although we did not test the antiviral activities of other evolutionarily diverse HERC3 or HERC4 proteins, our phylogenetic analyses demonstrated that the HECT and RCC1-like domains of these proteins show remarkable structural similarity to their human counterparts, which do exhibit antiviral activity at elevated levels. It will be interesting to determine if the antiviral function of HERC3 and HERC4 is conserved in these ancient vertebrates, which predate the emergence of HERC5 and HERC6. Interactions between host antiviral proteins and their viral-protein targets can be a critical requirement for their antiviral activity. When viral proteins mutate to evade such interactions, the antiviral protein frequently develops rapid amino acid replacements at the protein-protein interface in an attempt to restore those interactions and maintain evolutionary dominance over the virus. HERC5 is known to interact with evolutionarily diverse viral proteins and, like HERC6, is evolving under strong positive selection (11, 28-30, 32, 35, 36). Therefore, it is highly likely that these proteins contain one or more protein-protein interaction interfaces between viral and host proteins. Our findings that blades 1 of the RCC1-like domains of HERC5 and HERC6 contain numerous positively selected residues and that these residues are important determinants of antiretroviral activity indicate that blade 1 is likely one such interface. Although there is currently no evidence that retroviruses have driven positive selection of blade 1, it is interesting that blade 1 of HERC5 is sufficient to confer antiretroviral activity on HERC6. It is possible that the topology of blade 1 from HERC5 is such that it promotes an interaction with a cellular protein required for activity that blade 1 of HERC6 prevents. However, our finding that wild-type HERC6 potently inhibited SIVmac239, but not HIV-1, in the same cell type suggests that virus-specific differences are more likely to account for the observed differential antiviral activity between HERC5 and HERC6. Additional structure-function studies are needed to differentiate between these possibilities and others.

In conclusion, the small-HERC family has an evolutionarily ancient origin more than 595 mya, with the latest expansion of the family occurring more than 413 mya. We showed that the structural topologies of the HECT and RCC1-like domains are highly conserved despite low sequence homology and that the antiretroviral activity of HERC5 has an ancient marine origin. HERC5 and HERC6 are evolving under strong positive selection, and a patch of positively selected residues in blade 1 of the RCC1-like domain is a strong determinant of antiviral activity. Altogether, our study highlights the potential importance of the HERC family in intrinsic immunity.

MATERIALS AND METHODS

Cell lines. 293T cells were obtained from the American Type Culture Collection. HOS-CD4-CXCR4 and TZM-bl cells were from the NIH AIDS Reagent Program.
The cells were maintained in standard growth medium (Dulbecco's modified Eagle's medium [DMEM]) supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/ml penicillin, and 100 μg/ml streptomycin at 37°C with 5% CO2.

Analyses of sequences and synteny. The MEGA 7.0 package was used for phylogenetic analysis (83). The amino-terminal end of the small HERCs varies in length among the different members and was not included for phylogenetic analysis. The first 30 amino acids were omitted from HERC3 and HERC4 and the first 23 amino acids from HERC5 and HERC6. The accession numbers used were as follows: for HERC3,

Positive selection. Positive-selection analysis was performed as previously described (29). HERC sequences were aligned, and a phylogenetic tree was generated using COBALT (constraint-based alignment tool) (http://www.ncbi.nlm.nih.gov/tools/cobalt/) (84). The following HERC sequences were obtained from GenBank: for HERC3, Homo sapiens (human) (NP_055421.1), Gorilla gorilla gorilla (gorilla) (XP_004039158.1), Pan troglodytes (chimpanzee) (XP_517337.3). At least 2 independent sequences were available for human, sheep, baboon, marmoset, gibbon, and squirrel monkey. The following sequences were not independently validated: cat, dog, cow, horse, sheep, and giant panda. The identification of site-specific positive selection and purifying selection was generated using the Selecton server (http://selecton.tau.ac.il/index.html). The HERC5 phylogenetic tree was used in the Selecton analysis. Nested pairs of models (M8a and M8; M7 and M8) and a nonnested pair (M8a and MEC) were compared using the likelihood ratio test implemented in the Selecton program.

To generate pH6:H5RLD, the HERC5 RCC1-like domain was PCR amplified from pHERC5 using the following primers: forward, 5′ GGA TGA CGA TGA CAA GAT GGA GCG CCG CAG CC 3′, and reverse, 5′ TAT GTT CCA GCA AAA ATT ATT AAC TCC TTT TCT GAG GTA TGG CTT TCA AG 3′. The backbone of pHERC6 was PCR amplified using the following primers: forward, 5′ TTT TTG CTG GAA CAT ATG CCA ACT TTG 3′, and reverse, 5′ CTT GTC ATC GTC ATC CTT GTA ATC GAT G 3′. The two amplified fragments were cloned using the fast cloning technique (85).

For the construction of pSIVmac239 (pREC_nfl_SIV239), the SIVmac239 Spx vector was obtained from the NIH AIDS Reagent Program. We previously constructed pREC_nfl_HIV and pCMV_cplt vectors for Saccharomyces cerevisiae-based cloning of diverse HIV-1 strains (88). We developed a similar method for SIV cloning. To generate pREC_nfl_SIV239, the 5′ half of the HIV genome in the pREC_nfl_HIV vector was first replaced with URA3 and then with the 5′ half of the SIV239 genome through the yeast recombination technique described below. Yeast colonies were selected on C-leu plates supplemented with uracil but lacking leucine for selection of pREC_nfl_HIVΔ5′HIV/URA3 and on C-leu supplemented with 5-fluoro-1,2,3,6-tetrahydro-2,6-dioxo-4-pyrimidine carboxylic acid (5-FOA) for selection of pREC_nfl_5′SIV239/3′HIV. C-leu plates allow growth only when a plasmid containing the leucine gene is transformed into the yeast. The 3′ half of SIV239 was introduced using the same procedure to form the vector pREC_nfl_SIV239; this vector contains nearly the full-length SIV239 genome and lacks the 5′ repeat (R) and unique (U5) regions. Approximately 95% of the FOA-resistant yeast colonies harbored pREC_nfl_SIV239.
A crude yeast lysate was then used to transform bacteria and to amplify these ampicillin-resistant DNA plasmids for purification, as described previously (88). For yeast recombination, S. cerevisiae Hanson (MYA-906) (MATα ade6 can1 his3 leu2 trp1 URA3) was obtained from the American Type Culture Collection (ATCC). The yeast was grown at 30°C in appropriate medium (yeast extract peptone dextrose [YEPD] or complete [C] minimal medium C-LEU-URA3, C-LEU, or C-LEU/5-FOA), depending on the cloning step. Transformations/recombinations were performed using the lithium acetate (LiAc) method. Briefly, the linearized vector DNA (~1 μg) and PCR product (~3 μg) were added to competent cells at a 1:3 ratio, along with 50 μg of single-stranded salmon sperm carrier DNA (BD Biosciences/Clontech, Palo Alto, CA) and sterile polyethylene glycol (50%)-TE (10 mM Tris-Cl, 1 mM EDTA)-LiAc (100 mM). Following agitation for 30 min at 30°C, the yeast was heat shocked at 42°C for 15 min and plated on C-leu agar plates containing the appropriate selection.

Quantitative PCR. Total RNA was extracted using the PureLink RNA minikit (Ambion, Life Technologies). Three micrograms of RNA was reverse transcribed to cDNA using Moloney murine leukemia virus (MMLV) reverse transcriptase and oligo(dT) primers (Life Technologies). Prior to qPCR, the cDNA samples were diluted 1:5 with water. Each PCR mixture consisted of 10 μl of SYBR green master mix, 2 μl of gene-specific primers (1 μl of 10 μM forward primer and 1 μl of 10 μM reverse primer), 5 μl of diluted cDNA, and water to a total volume of 20 μl. For quantification of incompletely and fully spliced HIV RNAs, qPCR was run on the Rotor-Gene 6000 qPCR machine (Corbett Life Science) under the following cycling conditions: 10 min at 95°C and 40 cycles of 10 s at 95°C, 15 s at 60°C, and 20 s at 72°C. The Rotor-Gene 6000 series software (version 1.7) was used to determine the cycle threshold (CT) for each PCR. The gene-specific forward and reverse primer sets used were as follows: Gag (forward, 5′ CAT ATA GTA TGG GCA AGC AGG G 3′; reverse, 5′ CTG TCT GAA GGG ATG GTT GTA G 3′); Rev (forward, 5′ GAG CTC ATC AGA ACA GTC AGA C 3′; reverse, 5′ CGA ATG GAT CTG TCT CTG TCT C 3′). Quantification of endogenous HERC mRNA was run on the QuantStudio 5 qPCR machine (Applied Biosystems) under the following cycling conditions: 2 min at 95°C and 40 cycles of 5 s at 95°C, 10 s at 60°C, and 20 s at 72°C. QuantStudio design and analysis desktop software (version 1.4) was used to determine the CT for each PCR. The primer pairs were as follows: HERC3 (forward, 5′ CAG TGC CCA GGT TAA TAC AAA AG 3′; reverse, 5′ GAA CTC CTT CCC TAA GCC AAG 3′), HERC4 (forward, 5′ TTC ATG TGG AGA AGC TCA TAC G 3′; reverse, 5′ CAT CAG AAT CGA GAC CCC AAG 3′), HERC5 (forward, 5′ ATG AGC TAA GAC CCT GTT TGG 3′; reverse, 5′ CCC AAA TCA GAA ACA TAG GCA AG 3′), HERC6 (forward, 5′ GCG TCA ATT AAG TCA AGC TGA AGC 3′; reverse, 5′ GAA ACC ACA TGC AGG AAC CC 3′), GAPDH (forward, 5′ CAT GTT CGT CAT GGG TGT GAA CCA 3′; reverse, 5′ AGT GAT GGC ATG GAC TGT GGT CAT 3′), and eGFP (forward, 5′ GAC AAC CAC TAC CTG AGC AC 3′; reverse, 5′ CAG GAC CAT GTG ATC GCG 3′). To ensure no carryover of DNA into each total purified RNA sample, 3 μg of the purified RNA was used directly as the template without reverse transcription for qPCR, using the primer sets described above.

Statistical analyses. GraphPad Prism v6 was used for all statistical analyses mentioned in the text. The P values and statistical tests used are mentioned where appropriate.
P values of less than 0.05 were deemed significant.
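As a concrete illustration of the model comparisons reported in Results, the likelihood ratio test for the nested Selecton pairs (M7 vs M8, M8a vs M8) and the AICc comparison for the nonnested pair (M8a vs MEC) reduce to standard formulas, with L the model likelihood, p the number of free parameters, and N the sequence length. In the sketch below, the log-likelihood inputs are hypothetical placeholders; only the AICc scores 25,806 (M8a) and 25,557 (MEC) are taken from the text:

```python
# Minimal sketch of the model comparisons reported for HERC6: a likelihood
# ratio test (LRT) for nested pairs and an AICc comparison for the nonnested
# pair. Log-likelihoods below are hypothetical placeholders.
from scipy.stats import chi2

def lrt(lnL_null, lnL_alt, df=1):
    """LRT statistic and P value for nested models (chi-square reference)."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)

def aicc(lnL, p, N):
    """Corrected Akaike information criterion: lower is better."""
    return -2.0 * lnL + 2.0 * p + (2.0 * p * (p + 1.0)) / (N - p - 1.0)

stat, pval = lrt(lnL_null=-12800.0, lnL_alt=-12770.0)   # hypothetical values
print(f"LRT: 2*dlnL = {stat:.1f}, P = {pval:.2e}")

# The text reports AICc(M8a) = 25,806 vs AICc(MEC) = 25,557, favoring MEC.
print("MEC preferred" if 25557 < 25806 else "M8a preferred")
```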
Comparison and Validation of Deep Learning Models for the Diagnosis of Pneumonia

As a respiratory infection, pneumonia has gained great attention from countries all over the world for its strong transmissibility and relatively high mortality. For pneumonia, early detection and treatment will reduce its mortality rate significantly. Currently, X-ray diagnosis is recognized as a relatively effective method. The visual analysis of a patient's chest X-ray radiograph by an experienced doctor takes about 5 to 15 minutes. When cases are concentrated, this will undoubtedly put tremendous pressure on the doctor's clinical diagnosis. Relying on the naked eye of the imaging doctor is therefore very inefficient, and the use of artificial intelligence for clinical image diagnosis of pneumonia is a necessity. In addition, artificial intelligence recognition is very fast, and convolutional neural networks (CNNs) have achieved better performance than human beings in terms of image identification. Therefore, we used the chest X-ray image classification dataset made available by Kaggle, with a total of 5216 train and 624 test images and 2 classes, normal and pneumonia. We performed studies using five mainstream network algorithms to classify these diseases in the dataset and compared the results, from which we improved MobileNet's network structure and achieved a higher accuracy rate than the other methods. Furthermore, the improved MobileNet network could also extend to other areas of application.

Introduction

Pneumonia is an acute respiratory infection of the lungs, and it has a high incidence, affecting about 12% of the total population. Nowadays, the incidence of pneumonia is still increasing due to population aging, an increase in immunocompromised hosts, pathogen changes, difficult pathogenic diagnosis, and increased bacterial resistance. Chest radiograph (CXR) analysis is the most commonly used X-ray examination to diagnose and differentiate types of pneumonia. However, because of a lack of professional radiologists, pneumonia has alarming mortality rates in some resource-limited areas. Therefore, improving the accuracy of pneumonia detection while reducing its cost would be of great help for the treatment and prevention of pneumonia. In recent years, deep learning technologies have developed rapidly. Deep learning is a widely used tool in research fields such as computer vision, speech analysis, and natural language processing. This method is particularly suitable for fields that need to analyze large amounts of data with human-like intelligence. A major advantage of deep learning methods is that complex features can be learned directly from the raw data. This allows us to define a system that does not rely on manual feature extraction, which sets it apart from other machine learning techniques. The use of deep learning as a machine learning and pattern recognition tool is also becoming an important aspect of medical image analysis. At present, deep learning technology plays an important role in medical image processing, computer-aided diagnosis, image interpretation, image fusion, image registration, image segmentation, and image-guided therapy. It can help doctors diagnose and predict disease risk accurately and quickly. Cicero Medicine presented a method for automatic classification of pneumonia using ultrasound imaging of the lungs and pattern recognition in 2018 [8]. Knok et al.
from the Polytechnic of Međimurje used an already defined convolutional neural network architecture to develop a model of an intelligent system that receives an X-ray image of the lung as an input parameter and, based on the processed image, returns the probability of pneumonia as an output, in 2019 [9]. Rajaraman et al. proposed a CNN-based decision support system to detect pneumonia in pediatric CXRs; it effectively learned from a sparse collection of complex data with reduced bias and improved generalization, in 2018 [10]. Anwar et al. from the University of Engineering and Technology, Taxila, reviewed medical image analysis using convolutional neural networks in 2019 [11]. Professor Razzak et al. from King Saud bin Abdulaziz University for Health Sciences discussed the overview, challenges, and future of deep learning for medical image processing in 2018 [12]. Maruyama et al. from Gunma Prefectural College of Health Sciences used three types of machine learning methods to compare their accuracy in medical image classification; their conclusions showed that CNNs are more accurate than conventional machine learning methods that rely on manual feature extraction, in 2018 [13]. Gabruseva et al. presented an algorithm that automatically locates lung opacities on chest radiographs by using squeeze-and-excitation CNNs, augmentations, and multitask learning; it demonstrated one of the best performances in the Radiological Society of North America (RSNA) Pneumonia Detection Challenge for pneumonia region detection hosted on the Kaggle platform [14]. Although some of the aforementioned studies used transfer learning to overcome the limitations of insufficient training data and achieved better recognition results in pneumonia image recognition than other studies, because of the large difference between the ImageNet dataset and the pneumonia dataset, they did not make corresponding improvements to the existing transfer learning models to make them more suitable for the pneumonia image dataset and thereby obtain higher recognition accuracy. In addition, all children with pneumonia in the dataset of the previous study were patients with lobar pneumonia, which means that the performance of the algorithm may be affected; in this case, the expected sensitivity is low. Moreover, there is currently no algorithm that can determine other types of lung diseases, such as an algorithm that distinguishes interstitial infiltration or bronchiolitis from lobar pneumonia. In our study, we analyzed the structural advantages of different deep learning models and concluded that MobileNet is a suitable model for clinical image diagnosis of pneumonia. We also used an improved MobileNet network structure for higher accuracy. To validate the theoretical results, we used a regular convolutional network and four other mainstream network models to classify and identify the same real-world pneumonia X-ray dataset. After comparing their accuracy and other performance indicators, the results show that the improved MobileNet does achieve better results than the other CNNs. In the end, the conclusion and future work are presented based on our study.

Depthwise Separable Convolution. A regular convolution is performed in one step by filtering and merging inputs into a new set of outputs (in Figure 1). The depthwise separable convolution divides this into two layers: one layer for filtering and the other layer for merging.
The effect of this factorization is to greatly reduce the amount of computation and the model size [15]. For the depthwise separable convolution, the input images have three channels: red, green, and blue. After several convolutions, an image may have multiple channels. Each channel can be viewed as a specific interpretation of the image. For example, the "red" channel explains the "red" of each pixel, the "blue" channel explains the "blue" of each pixel, and the "green" channel explains the "green" of each pixel. An image with 64 channels has 64 different interpretations of the image. Unlike a regular convolution, a depthwise separable convolution comprises a depthwise convolution (DW) and a pointwise convolution (PW). Here, DW handles spatial relationship modeling with 2D channelwise convolutions (in Figure 2(a)), while PW handles cross-channel relationship modeling with a 1 × 1 convolution across channels (in Figure 2(b)). This factorized form is expressed as DW + PW (in Figure 3). Next, the depthwise separable convolution will be shown to have better performance by comparing its computational cost with that of a regular convolution.

First, the regular convolution layer takes a D_F × D_F × M feature map F as input and produces a D_G × D_G × N feature map G (in Figure 4). D_F represents the spatial width and height of the square input feature map, M represents the number of input channels (input depth), D_G represents the spatial width and height of the square output feature map, and N represents the number of output channels (output depth). The regular convolutional layer is parameterized by a convolution kernel K of size D_K × D_K × M × N, where D_K represents the spatial dimension of the kernel (assumed square), M represents the number of input channels, and N represents the number of output channels. The output feature map for regular convolution, assuming stride 1 and padding, is computed as

G(k, l, n) = Σ_{i,j,m} K(i, j, m, n) · F(k + i − 1, l + j − 1, m).

The computational cost of the regular convolution C_R is

C_R = D_K · D_K · M · N · D_F · D_F,

where the computational cost is determined by the number of input channels M, the number of output channels N, the kernel size D_K × D_K, and the feature map size D_F × D_F.

As introduced above, a depthwise separable convolution comprises two layers: a depthwise convolution (in Figure 5) and a pointwise convolution (in Figure 6). The depthwise convolution applies a single filter per input channel (input depth). Then, the pointwise convolution, a simple 1 × 1 convolution, creates a linear combination of the outputs of the depthwise layer. Therefore, we can define the depthwise convolution with one filter per input channel (input depth) as

Ĝ(k, l, m) = Σ_{i,j} K̂(i, j, m) · F(k + i − 1, l + j − 1, m),

where K̂ is the depthwise convolutional kernel of size D_K × D_K × M; the m-th filter in K̂ is applied to the m-th channel in F to produce the m-th channel of the filtered output feature map Ĝ. The computational cost of the depthwise convolution C_D is

C_D = D_K · D_K · M · D_F · D_F.

The computational cost of the pointwise convolution C_P is

C_P = M · N · D_F · D_F.

So, the computational cost of the depthwise separable convolution C_DP is

C_DP = D_K · D_K · M · D_F · D_F + M · N · D_F · D_F,

which is the sum of the depthwise and 1 × 1 pointwise convolutions. By splitting the convolution into a two-step process of filtering and merging, we obtain a reduction R in computation of

R = C_DP / C_R = 1/N + 1/D_K².

Therefore, it is concluded that the depthwise separable convolution can greatly reduce the computational cost [16]; a numeric sketch of this comparison follows below. Moreover, we experimented with reducing the number of filters to reduce redundancy.
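To make the cost formulas above concrete, the following short Python calculation plugs in illustrative dimensions (a 3 × 3 kernel on a 112 × 112 feature map with 32 input and 64 output channels); these numbers are examples, not values from the paper:

```python
# Illustrative arithmetic for the cost formulas above. The dimensions are
# example values, not figures from the paper.
D_K, D_F, M, N = 3, 112, 32, 64

cost_regular = D_K * D_K * M * N * D_F * D_F            # C_R
cost_depthwise = D_K * D_K * M * D_F * D_F              # C_D
cost_pointwise = M * N * D_F * D_F                      # C_P
cost_separable = cost_depthwise + cost_pointwise        # C_DP

print(f"Regular conv:   {cost_regular:,} mult-adds")
print(f"Separable conv: {cost_separable:,} mult-adds")
print(f"Reduction R = {cost_separable / cost_regular:.3f} "
      f"(theory: 1/N + 1/D_K^2 = {1/N + 1/D_K**2:.3f})")
```

With a 3 × 3 kernel, R is roughly 1/9 plus a small term, i.e., close to an 8- to 9-fold reduction in computation, which is the basis for MobileNet's efficiency.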
Howard's network model uses 32 filters in a full 3 × 3 convolution to build the initial filter banks for edge detection. Through analysis of the experimental results, we found that reducing the number of filters to 16 could maintain the same accuracy as 32 filters, which saves an additional 2 ms.

Model Evaluation Metrics. In order to evaluate the performance of the deep learning models, we refer to the confusion matrix (in Table 1), which is a standard format for expressing accuracy evaluation. Based on this confusion matrix, evaluation is performed using the following criteria:

(1) Accuracy represents the ratio of the number of samples correctly classified by the classification model to the total number of samples in a given test dataset. It can be expressed by the following formula:

Accuracy = (TP + TN) / (TP + TN + FP + FN).

(2) Recall represents, among the positive samples in the original data, the probability that the classification model correctly predicts a positive sample. It can be expressed by the following formula:

Recall = TP / (TP + FN).

Dataset and Training. In this paper, we use MobileNet, which applies 3 × 3 depthwise separable convolutions; a regular CNN; ResNet-18; and two mainstream CNN models pretrained on ImageNet [17], ResNet-50 and VGG19. The dataset was chest X-ray images for classification made available by Kaggle, with a total of 5216 train and 624 test images (in Figure 7). The dataset is organized into 3 folders (train, test, and val) and contains subfolders for each image category (pneumonia/normal). There are 5840 X-ray images (JPEG) and 2 categories (pneumonia/normal). The chest X-ray images were selected from pediatric patients of one to five years old from Guangzhou Women and Children's Medical Center, Guangzhou. The characteristics of the data and their distribution are organized in Table 2. All chest X-rays were performed as part of patients' routine clinical care. This dataset is quality controlled by screening chest X-rays to remove unreadable and low-quality X-rays and is managed by several experts to avoid grading errors. Above all, we should carry out some data analysis and preprocessing. So, we convert the images from the dataset into a NumPy array (see Figure 8). Then, we resize the images to 226 × 226 so that we have more data (images) to train on (in Figure 9). In addition, in order to facilitate comparison, we uniformly set the number of epochs to 20. The experimental environment is an Ubuntu Linux server with a GeForce GTX 1050 Ti GPU, and all models are implemented using Python.

Results

Corresponding to these five CNN models, we put the training set accuracy, the training set loss, the validation set accuracy (Val_accuracy), and the validation set loss (Val_loss) in four line charts for comparison (as shown in Figures 10-13). We then calculated the averages of the training set accuracy, training set loss, Val_accuracy, and Val_loss. The results are shown in Table 3. Here, the purpose of fixing the number of epochs is to compare the accuracy of the different algorithms under the same number of iterations. This can reflect the training speed of the algorithms. If a new type of pneumonia emerges and time is critical, researchers need to train on lung images of the new pneumonia quickly, so achieving a higher accuracy rate while saving time is also a factor we need to consider. After that, we applied the five trained models to the test set for experiments and recorded their accuracy and recall.
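The two formulas above translate directly into code. In the sketch below, the confusion-matrix counts are hypothetical, chosen only so that the totals match the 624-image test set and roughly reproduce the MobileNet numbers reported in the next paragraph:

```python
# Accuracy and recall computed from confusion-matrix counts, exactly as in
# the formulas above. The counts are hypothetical, not the paper's results.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def recall(tp, fn):
    return tp / (tp + fn)

# Hypothetical split of the 624-image test set (pneumonia = positive class)
tp, tn, fp, fn = 386, 193, 41, 4
print(f"Accuracy: {accuracy(tp, tn, fp, fn):.4f}")   # fraction correct overall
print(f"Recall:   {recall(tp, fn):.4f}")             # sensitivity on pneumonia
```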
The results indicate that the accuracy of pneumonia recognition using MobileNet is up to 92.79% and the recall is 98.90% (see Table 4). In addition, we can see from Table 4 that MobileNet achieves the best overall performance among the five models.

Discussion

From the experimental results, it can be seen that MobileNet, as a lightweight network, not only requires less computation than most CNNs but also achieves a better classification effect than other types of CNN models whose parameter counts are on roughly the same order of magnitude. This benefits from the use of the depthwise separable convolution. Since the development of deep learning, most image recognition models have had large numbers of parameters and large amounts of computation, which makes them unsuitable for use in embedded devices. For the identification of pneumonia, a common disease, we must also consider how to identify pneumonia quickly and accurately in areas where equipment and doctors are scarce. This is one of the reasons why we recommend using MobileNet for pneumonia recognition.

Conclusions

In this paper, five mainstream deep learning models are used for clinical diagnosis on a dataset consisting of X-ray images of lungs with pneumonia and normal lungs, and the accuracy of these methods is compared. Among them, because of the superior performance of MobileNet, we focused on the network structure of MobileNet. The results demonstrated that all five network structures have the ability to recognize pneumonia and that the accuracy of MobileNet is higher than that of the other network structures. In addition, the application of artificial intelligence technology in the medical field is still insufficient, and the datasets in this field should be improved in terms of variety. As the amount of pneumonia image data increases and network structures continue to improve, the performance of CNN-based pneumonia diagnosis algorithms will also continue to improve. In the future, the application of clinical image diagnosis of pneumonia X-rays can reduce the workload of clinicians and enable patients to obtain early diagnosis and timely treatment, thereby reducing the mortality rate of pneumonia.

Data Availability

The dataset used in this study was the chest X-ray images dataset on Kaggle; please visit https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
2020-10-03T05:07:15.063Z
2020-09-18T00:00:00.000
{ "year": 2020, "sha1": "7f11ab2e6087815c7829f3bc98711022f1770a2a", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cin/2020/8876798.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7f11ab2e6087815c7829f3bc98711022f1770a2a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
21154907
pes2o/s2orc
v3-fos-license
Molecular and biochemical characterization of the NS1 protein of non-cultured influenza B virus strains circulating in Singapore
In this study we compared the NS1 protein of Influenza B/Lee/40 and several non-cultured Influenza B virus clinical strains detected in Singapore. In B/Lee/40 virus-infected cells and in cells expressing the recombinant B/Lee/40 NS1 protein, a full-length 35 kDa NS1 protein and a 23 kDa NS1 protein species (p23) were detected. Mutational analysis of the NS1 gene indicated that p23 was generated by a novel cleavage event within the linker domain between an aspartic acid and a proline at amino acid positions 92 and 93, respectively (DP92–93), and that p23 contained the first 92 amino acids of the NS1 protein. Sequence analysis of the Singapore strains indicated the presence of either DP92–93 or NP92–93 in the NS1 protein, but protein expression analysis showed that p23 was only detected in NS1 proteins with DP92–93. An additional adjacent proline residue at position 94 (P94) was present in some strains and correlated with increased p23 levels, suggesting that P94 has a synergistic effect on the cleavage of the NS1 protein. The first 145 amino acids of the NS1 protein are required for inhibition of ISG15-mediated ubiquitination, and our analysis showed that Influenza B viruses circulating in Singapore with DP92–93 expressed truncated NS1 proteins and may differ in their capacity to inhibit ISG15 activity. Thus, DP92–93 in the NS1 protein may confer a disadvantage to Influenza B viruses circulating in the human population and, interestingly, the low frequency of DP92–93 detection in the NS1 protein since 2004 is consistent with this suggestion.

Introduction
Both Influenza A and B viruses contribute to the annual epidemic, infecting a large percentage of children and the immunologically compromised and resulting in an annual death toll of up to 500,000 (World Health Organization, 2014). While not studied as extensively as Influenza A, Influenza B virus infection can also lead to severe symptoms (Derlet, 2010; Kim et al., 2009; Wu et al., 2009; Michael et al., 1980; Baine et al., 1980). The Influenza B virus nonstructural protein 1 (NS1) counters the host's antiviral response and therefore represents a determinant of pathogenicity (Talon et al., 2000; Sridharan et al., 2010; Yuan & Krug, 2001). Influenza B virus induces interferon signalling following infection, leading to the induction of several interferon-stimulated gene proteins that induce an antiviral state in the infected cell (Randall & Goodbourn, 2008). Among these is the interferon-induced ubiquitin-like ISG15 protein, which covalently attaches via its C-terminal glycine to several target proteins, inducing the degradation of these cellular proteins (reviewed in Zhao et al., 2013). The Influenza B virus NS1 protein inhibits the conjugation of the ubiquitin-like ISG15 protein to its target proteins (Yuan & Wang, 2001), suggesting that this is a factor in overcoming the host antiviral response. Interestingly, the interaction between NS1 and the ISG15 protein is species-specific, NS1 binding only human and non-human primate ISG15 (Sridharan et al., 2010). The NS1 protein is 281 amino acids (aa) long (Briedis & Lamb, 1982) and consists of two domains. The first 90 amino acid residues at the N-terminal end of this protein constitute the RNA binding domain (RBD) (Guan et al., 2011).
The RBD has been shown to bind double-stranded RNA (Dauber et al., 2006), contains the nuclear localization signal (NLS) (Schneider et al., 2009), and is crucial for the dimerization of the NS1 protein (Wang & Krug, 1996). Although the full structure of the Influenza B NS1 protein is yet to be elucidated, the structure of the first 120 amino acids has been determined (Li et al., 2011). The effector domain (ED) of the NS1 protein is formed by amino acids 120–281, but no specific function has been assigned to this domain (Donelan et al., 2004; Wang & Krug, 1996). These two domains are connected by a stretch of amino acids that constitutes the linker (L) region (Guan et al., 2011). The nature of the interaction between the NS1 and ISG15 proteins has been established biochemically. Although the ISG15-binding activity of the NS1 protein resides in the RBD and part of the L region, inhibiting the conjugation activity of the ISG15 protein requires the first 145 amino acid residues (Yuan & Krug, 2001). This includes a large portion of the ED. Interestingly, Influenza B virus isolates in which the carboxyl-terminal domain of the NS1 protein is deleted (e.g. Tobita et al., 1990) are unable to inhibit the ubiquitination activity of the ISG15 protein in virus-infected cells and replicate less efficiently than viruses containing a full-length NS1 sequence (Yuan & Krug, 2001). Therefore, the presence of the RBD, L and part of the ED is required for the ISG15-inhibitory activity of the NS1 protein. In this study, we examined the expression of the NS1 protein in several non-tissue-culture-adapted Influenza B viruses that were detected during routine influenza surveillance in Singapore over the period 2004–2009. Since these virus strains could not be cultured in tissue culture, we have relied on recombinant expression of the cloned sequences. In this context, we have compared the expression profiles of the NS1 proteins derived from these clinical strains with the recombinant NS1 protein of the established Influenza B laboratory isolate B/Lee/40 and the NS1 protein expressed in Influenza B/Lee/40-infected cells. We provide the first report, to our knowledge, of the identification of a novel sequence motif (DP92–93) in the NS1 protein of the Influenza B laboratory isolate B/Lee/40 that leads to a post-translational cleavage event. Furthermore, this modification is also observed in some Influenza B virus strains circulating in Singapore, suggesting clinical relevance.

Impact Statement
We report a novel cleavage event observed in a subset of Influenza B NS1 proteins. This cleavage is dependent on the presence of the amino acids DP at positions 92–93. This event cleaves the NS1 protein in the linker domain, separating the RNA binding domain from the effector domain. The smaller protein containing the RNA binding domain has all the residues crucial for RNA binding and the nuclear localization signal, but this cleavage activity interrupts the amino acid sequences that are responsible for ISG15 inhibition. We believe that viruses with the cleavage motif would have altered viral fitness, as evidenced by the temporal expression of such viruses over the last 30 years. We propose the possibility that the NS1 protein undergoes autoproteolytic cleavage, as this cleavage was uninhibited by host cell proteases. The data presented in this study suggest a novel post-translational modification of the Influenza B NS1 protein, which may have significant effects on viral replication and the inhibition of the host cell response.

Methods
Cloning of NS1 genes from throat swabs. Throat swabs were collected from Singapore Armed Forces (SAF) servicemen displaying respiratory symptoms and were tested for Influenza B infection as described previously. Out of 88 throat swabs, 46 tested positive for Influenza B virus. To attain the sequence of the NS1 gene, gene segment 8 was amplified twice using four separate primers targeting the open reading frame of NS1, producing overlapping reads. Sequences were assembled using the SeqMan program (DNASTAR). Out of these 46 strains, 36 complete NS1 gene sequences were obtained. These sequences were grouped according to sequence similarity (GenBank accession numbers KC844161–KC844196). Representative strains from each of these groups (DSO_090136_2004: KC844195.1, DSO_040117_2006: KU500812, DSO_010147_2007: KC844165.1, DSO_020132_2007: KC844167.1 and DSO_0070_2009: KC844163.1) were PCR-amplified using the primers NS1-F (SacI) and NS1-R (XhoI) (Table S1, available in the online Supplementary Material) and ligated into the mammalian expression vector pCAGGS between the SacI and XhoI restriction sites (Patch et al., 2007). Individual domains were PCR-amplified similarly, using primers specific for each domain (Table S1). Two strains were chosen to represent specimens from 2007, DSO_020132_2007 and DSO_010147_2007, as DSO_020132_2007 represents the majority of the sequences isolated in 2007 and DSO_010147_2007 was unique among the sequences isolated in 2007.

Site-directed mutagenesis. DNA substitutions were introduced into the cloned gene segments using the QuikChange Site-Directed Mutagenesis Kit (Stratagene) in accordance with the manufacturer's instructions. Primers were designed with the substitutions incorporated through the use of the manufacturer's online primer design tool (http://labtools.stratagene.com/QC). Table S2 lists the primers used for the mutagenesis experiments in this study. Generated mutants were sent to 1st Base (Singapore) for DNA sequencing to confirm the incorporation of the desired mutation.

Antibodies. The anti-FLAG and anti-Myc antibodies were purchased from Sigma-Aldrich and Cell Signaling, respectively. The Influenza B NS1 antibody was a gift from Professor Thorsten Wolff of the Robert Koch Institut.

Influenza B infection. Human embryonic kidney (HEK) 293T cells were infected with the B/Lee/40 strain (ATCC VR101) at a multiplicity of infection (MOI) of 3, at 37 °C for 1 h. The inoculum was replaced with DMEM (GIBCO) + 2% FBS (GIBCO) for 20 h before further analysis.

Expression of NS1 constructs. HEK 293T cells were transfected with the NS1-pCAGGS constructs using Lipofectamine 2000 reagent (Invitrogen) following the manufacturer's instructions. The medium was replaced at 4 h after transfection and incubation continued at 37 °C for 16 h before further analysis.

Western blotting. Cells expressing the protein of interest were scraped in 50 ml of 1× Laemmli buffer and boiled at 95 °C for 10 min. Samples were then sonicated briefly and equal volumes were loaded onto SDS-PAGE gels. Protein bands were then transferred onto PVDF membranes (Immobilon-P, Millipore) and probed with anti-FLAG, anti-cMyc and Influenza B NS1 antibodies as appropriate. The protein bands were visualised using ECL (GE Healthcare).
Molecular size estimations were established by plotting the Rf of the All Blue (Bio-Rad) molecular weight standards (distance migrated by a specific band divided by the distance migrated by the dye front) against the log of the molecular weight (Log MW). A best-fit line, Log MW = a·Rf + b, was obtained for each blot, and the equation of the line was used to estimate the Log MW of the protein bands observed.

Global frequencies of residues. Protein sequences of human NS1 from Influenza B viruses were downloaded from GISAID's EpiFlu and NCBI's GenBank databases. Sequences from both databases were merged and repeated strains were removed. MAFFT (http://www.ncbi.nlm.nih.gov/pubmed/15661851) was used for sequence alignment, and global frequencies of residues were confined to strains from 1980 onwards. Geographical locations of strains were parsed from their strain names, and the Google Static Maps API was used to generate the global geographical distribution of strains on a yearly basis.
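As an illustrative sketch of how per-position residue frequencies such as those reported below for position 92 can be tallied from an alignment (our own code, not from the study; it assumes the sequences are pre-aligned so that alignment columns correspond to NS1 residue numbering):

```python
from collections import Counter

def residue_frequencies(aligned_seqs, position):
    """Relative frequency of each amino acid at a 1-based alignment position."""
    counts = Counter(seq[position - 1] for seq in aligned_seqs
                     if len(seq) >= position)
    total = sum(counts.values())
    return {aa: n / total for aa, n in counts.items()}

# Example: residue_frequencies(ns1_sequences, 92) would return a mapping such
# as {'D': 0.86, 'N': 0.14} for the Singapore strains described below.
```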
Detection of a novel NS1-related protein species in influenza B virus-infected cells
We first examined the expression of the B/Lee/40 NS1 protein in virus-infected cells and in cells expressing the recombinant B/Lee/40 NS1 protein. MDCK cells were infected with B/Lee/40 and, at 4, 8, 12 and 16 h post infection (h.p.i.), cell lysates were prepared in Laemmli buffer and the NS1 protein analysed by immunoblotting using an Influenza B NS1 antibody (αNS1) (Fig. 1a). At 8 h.p.i. a 35 kDa protein species of the expected size for NS1 was detected, together with a smaller 23 kDa protein species. The two protein species were detected concomitantly, and the levels of both proteins increased with increasing incubation time. This suggested the presence of a post-translational modification that led to the generation of a second NS1 protein species. To determine if this was specific to virus-infected cells, we also examined the expression of the recombinant B/Lee/40 NS1 protein. The B/Lee/40 NS1 gene was cloned (with a FLAG-tag coding sequence introduced at the 3′ end of the NS1B gene) and inserted into the mammalian expression vector pCAGGS to generate pCAGGS/NS1B(LEE)-FLAG. HEK 293T cells were either infected with B/Lee/40 or transfected with pCAGGS/NS1B(LEE)-FLAG and, at 16 h.p.i. or 16 h post transfection (h.p.t.) respectively, cell lysates were prepared and examined by immunoblotting using αNS1 (Fig. 1b). In pCAGGS/NS1B(LEE)-FLAG-transfected cells, two species of approximately 35 kDa and 24 kDa were observed, which migrated slightly more slowly than the corresponding protein species detected in the virus-infected cells. This difference in migration was due to the presence of the FLAG epitope tag added to the C-terminus of the recombinant NS1 protein. No protein bands were observed in either the mock-infected or mock-transfected cell lysates, demonstrating the specificity of αNS1. Similarly, HEK 293T cells were transfected with pCAGGS/NS1B(LEE)-FLAG and, at intervals between 4 and 16 h.p.t., cell lysates were prepared and immunoblotted using anti-FLAG (Fig. 1c). A concomitant increase in the appearance of both protein species was observed from 8 h.p.t., which is consistent with the kinetics of NS1 gene expression observed in virus-infected cells described above. Although the second, smaller NS1 protein species is observed during Influenza B virus infection, it is also seen during NS1 protein expression in the absence of virus infection. This suggests that this is an intrinsic property of the B/Lee/40 NS1 protein, and that it does not arise due to an antivirus response during Influenza B virus infection. In this study, the full-length and smaller NS1 protein species will be referred to as the NS1 and p23 proteins, respectively.

p23 is composed of the effector domain and part of the linker region
Mutational analysis of the recombinant B/Lee/40 NS1 protein was performed to determine the identity of p23. A series of plasmid constructs was produced in which both the full-length protein and individual domains of the B/Lee/40 NS1 protein were expressed with N-terminal cMyc or C-terminal FLAG epitope tags as appropriate; these included full-length NS1-FLAG (NS1f), cMyc-NS1-FLAG (mNS1f), cMyc-RBD-L-FLAG (mRBLf), ED-FLAG (EDf) and cMyc-L-ED-FLAG (mLEDf) (Fig. 2a). Cells expressing each of these protein species were examined by immunoblotting with anti-cMyc or anti-FLAG as appropriate. In cells expressing NS1f, immunoblotting with anti-FLAG showed protein species corresponding in size to NS1-FLAG and p23-FLAG (Fig. 2b). In cells expressing mNS1f and immunoblotted with anti-FLAG, only p23-FLAG and NS1-FLAG were detected, while immunoblotting with anti-cMyc revealed cMyc-NS1-FLAG and a protein species of approximately 15 kDa (p15). The p15 was also detected in cells expressing mRBLf and immunoblotted with anti-cMyc, but was not detected when immunoblotted with anti-FLAG (Fig. 2b). The RBD, composed of 90 amino acids (Guan et al., 2011), was estimated in silico to be 13 kDa (by BioEdit 7.2.5 software); therefore, the immunoblotting analysis suggested that this domain is present within p15 (Fig. 2b). In cells expressing mLEDf, immunoblotting using either anti-cMyc or anti-FLAG revealed a major protein species of 29 kDa. A smaller product was observed by immunoblotting with anti-FLAG that was not observed by immunoblotting with anti-cMyc, suggesting that this smaller product may have lost the cMyc tag. In comparison, cells expressing EDf and immunoblotted with anti-FLAG revealed a major protein species of 22 kDa. A comparison of the sizes of these individual domains with the p23 observed in cells expressing NS1f and mNS1f suggested that p23-FLAG is composed of part of the linker region and the whole of the ED, while p15 is composed of the RBD and part of the linker region.

Previous examination of the predicted protein sequences of the NS1 proteins of Influenza B viruses detected in Singapore between 2004 and 2009 revealed that the specimens phylogenetically clustered according to their year of isolation. These clinical specimens were sequenced directly without prior culturing in either eggs or tissue culture, hence avoiding mutations due to culture selection. We were therefore interested to determine whether sequence variation between clusters would result in differences in the corresponding biochemical properties of the NS1 protein. The expression of the NS1 gene of representative strains from each cluster was therefore examined, and five representative virus strains were selected for further analysis since these were predicted to exhibit sequence variation in the linker region of the corresponding NS1 protein. These were DSO_090136_2004 (136), DSO_040117_2006 (117), DSO_020132_2007 (132), DSO_0070_2009 (70) and DSO_010147_2007 (147). The NS1 genes from these strains were cloned into pCAGGS containing a FLAG tag at the C-terminus of the NS1 protein to aid protein detection (Fig. 3).
Sequence analysis indicated that p23-FLAG was detected in clinical strains that had an aspartic acid at position 92 (D92), while p23-FLAG was not present in NS1B(132)-FLAG and NS1B(147)-FLAG, which contained an asparagine at this position (N92) (Fig. 4a). This suggested that the presence of D92 was a major determinant for the presence of p23. To confirm the role of D92 in generating p23, mutational analysis of the linker region of the B/Lee/40 NS1 protein sequence was performed. A series of mutants of the linker region of the NS1 protein coding region within pCAGGS/NS1B(LEE)-FLAG was constructed to generate D92A and D92N. In addition, the amino acid residues immediately adjacent to D92 were also mutated, to M91A and P93A. These site-directed mutations gave rise to the mutated NS1(D92A), NS1(D92N), NS1(M91A) and NS1(P93A) proteins (Fig. 5a). p23-FLAG could not be detected in cells expressing either NS1(D92A) or NS1(D92N), confirming that D92 was required to generate p23 (Fig. 5b). Examination of the clinical strains NS1B(132)-FLAG and NS1B(147)-FLAG showed that the presence of N92 in the 132-NS1 sequence correlated with the absence of p23-FLAG. Therefore, a reverse mutation (N92D) was introduced into 132-NS1 and the protein examined by immunoblotting with anti-FLAG. This amino acid substitution correlated with the presence of p23-FLAG in this mutant (Fig. 5c). In cells expressing Lee-NS1(D92N), p23-FLAG was not detected even after longer expression times, suggesting that the D92N mutation prevented the appearance of p23-FLAG rather than delaying it (results not shown). Leaky scanning of the ATG leading to protein variation has been described for several Influenza virus proteins (Wise et al., 2009; Tauber et al., 2012; Muramoto et al., 2013). To eliminate the possibility that p23 could arise from leaky scanning of the ATG at M91, we substituted the methionine with an alanine, Lee-NS1(M91A). p23 was still detected in cells transfected with Lee-NS1(M91A) (Fig. 5d). This suggested that p23-FLAG did not arise from leaky scanning, since there are no other methionine residues present in the linker domain of Lee-NS1 (Fig. 3).

The adjacent proline residue at position 93 is required for p23 generation
Interestingly, the P93A substitution in the Lee-NS1 sequence abolished or strongly reduced the appearance of p23-FLAG in western blotting (Fig. 5e). Conversely, mutating the Lee-NS1 sequence to introduce a proline at position 94 [Lee-NS1(S94P)] resulted in an increased appearance of p23-FLAG (Fig. 5d). Collectively, this analysis defined the amino acid sequence within the NS1 protein linker domain that gives rise to p23: the presence of DP92–93 is required for NS1 protein cleavage, while a proline at position 94 facilitates the formation of p23.

The NS1 p23 protein is only found in the cytoplasm
We fractionated B/Lee/40-infected cells and pCAGGS/cMYC-NS1B(LEE)-FLAG-transfected cells (Fig. 6) to determine where in the cell p23 was generated. This would allow us to identify where in the cell the biological activity that gave rise to p23 was located. Examination of cell fractions prepared from virus-infected cells by immunoblotting using αNS1 revealed that p23 was only present in the cytoplasmic fraction (Fig. 6a). Examination of pCAGGS/cMYC-NS1B(LEE)-FLAG-transfected cells by immunoblotting using anti-FLAG and anti-cMyc revealed that p23 and p15 were only detected in the cytoplasmic fraction (Fig. 6b).
A similar analysis of cells expressing the NS1 protein of a clinical strain also showed that p23 was detected in the cytoplasmic fraction (Fig. 6c). These observations suggest that the event leading to the conversion of the NS1 protein into p23 occurs in the cytoplasm.

Evidence for negative selection of the N92D change in the NS1 protein of circulating Influenza B viruses
Examination of the data obtained in our previous analysis showed that all strains sequenced had either an N or a D at position 92, with N92 and D92 accounting for 14% and 86% of the strains, respectively. However, that analysis was confined to Influenza B virus strains circulating in Singapore, and we were interested to determine the presence of this sequence motif in viruses circulating in other regions. We investigated the frequency of the Influenza B virus NS1 N92D and P93S sequence mutations in the global surveillance of Influenza B viruses to determine the incidence of this modification in circulating viruses. There was limited availability of Influenza B virus NS1 sequences prior to the 1980s, and our analysis was therefore confined to sequences deposited since 1980 (Fig. 7a, b). The temporal occurrence of N92D was not random but was characterised by different peaks of occurrence, the most prominent being in 1997, when 97% of strains carried the N92D mutation. Analysis of the geographical distribution of strains with this mutation in 1997 suggested frequent occurrence in the Americas and Asia, but interestingly less so in Europe. There were also minor occurrences in 1990, 1992 and 1993, with a bias towards Europe and North America, although the numbers of total sequences for these time periods are rather low. Another large peak of occurrence for N92D was noted in 2005, when it accounted for 27% of all deposited sequences, with a strong geographical bias towards South East Asia and Oceania (mainly Australia and New Zealand). Despite the larger number of total sequences being sampled, there has been a gradual decline of N92D up to 2015. In summary, the temporally and geographically biased appearance patterns, and the clear decline in frequency despite increased surveillance sampling, suggest that strains with the NS1 N92D sequence motif are not particularly successful in the environment when compared with NS1 N92-containing variants.

Discussion
In this study, we report a novel post-translational modification of the Influenza B virus NS1 protein that leads to the formation of a truncated NS1 protein in the cytoplasm of infected cells. Although the biological mechanism that leads to the generation of p23 is still uncertain, our analysis suggested that this event occurs within the linker region of the NS1 protein. An RNA splicing event within the viral mRNA that encodes the NS1 protein is unlikely, since this would lead to the removal of the FLAG-tag recognition motif. Similarly, the involvement of an alternative reading frame within the linker region is unlikely, since removal of the single methionine residue within the linker region failed to prevent the generation of p23. However, we currently cannot rule out the possibility that p23 is generated by a mechanism that involves an alternative reading frame elsewhere within the NS1 mRNA. The mutational analysis provided evidence that p23 may be generated by proteolysis, and we suggested that this may involve a protease activity in the cytoplasm. In this context we tested a range of different protease inhibitors (e.g.
serine protease, aspartic protease and metalloproteinase); however, none of these was able to inhibit p23 formation (Jumat and Sugrue, unpublished observations). Interestingly, protein sequences that contain a DP amino acid sequence have been reported to be cleaved by caspase enzymes (Alnemri et al., 1996; Luthi & Martin, 2007), and caspases cleave a range of Influenza virus proteins (e.g. NP and M2; Richard & Tulasne, 2012; Zhirnov et al., 1999, 2002). This hypothesis was tested using the pan-caspase inhibitor Z-VAD-FMK; however, at all concentrations used the drug treatment failed to block p23 formation (Fig. S1). This suggests that cleavage of the NS1 protein may involve either a novel, unidentified cellular protease, or that the NS1 protein may undergo autoproteolysis (i.e. cleavage independent of any host cell protease). The capsid proteins of Flock House virus (FHV) and Black Beetle virus (BBV) undergo autoproteolytic cleavage mediated by an aspartic acid initiating a nucleophilic attack on an asparagine residue, resulting in hydrolysis of the peptide bond (Wery et al., 1994; Hosur et al., 1987). Understanding this process has been facilitated by the availability of the entire structure of the capsid proteins (Speir et al., 2010). Although structural information is only available for the first 103 amino acids of the Influenza B NS1 protein (Guan et al., 2011), it does suggest that the linker region may be exposed as a surface loop structure and could be prone to proteolysis. It has been established that the NS1 protein of Influenza B virus is a major factor in overcoming the host antiviral response by inhibiting the conjugation of the ubiquitin-like ISG15 protein to its target proteins (Yuan & Wang, 2001). Although the ISG15-binding activity of the NS1 protein resides in the RBD and part of the linker region, a large portion of the ED is also required to facilitate this biological activity (Yuan & Krug, 2001). Influenza B virus isolates in which the carboxyl-terminal domain of the NS1 protein was deleted were unable to inhibit the ubiquitination activity of the ISG15 protein in virus-infected cells (Yuan & Krug, 2001). In addition, viruses in which the ED is removed from the NS1 protein replicate less efficiently than virus containing a full-length NS1 protein (Yuan & Krug, 2001). The clinical strains that we examined were not passaged through tissue culture or embryonated eggs, ensuring that the sequence motif leading to the formation of p23 did not arise through either egg or tissue-culture adaptation. It is predicted that removal of the ED may have a significant effect on the biological activity of the NS1 proteins in clinical strains exhibiting NS1 cleavage. We would therefore predict that several of the clinical strains detected in our study would exhibit impaired biological activity of the NS1 protein and may differ in their replication capabilities in the host. However, in these previous studies the ED was not expressed, while in the strains we examined the NS1 protein is cleaved and both the RBD and ED are present. It is therefore unclear whether the cleaved ED can interact with the RBD to establish the effect on ISG15. Unfortunately, we were unable to recover the specific Influenza B viruses detected in our study, and so have not been able to define the biological activity of these NS1 variants in the context of an infectious virus.
Although our analysis suggested a higher frequency of D92 in the strains that we examined, analysis of the Influenza B virus NS1 sequences deposited in the NCBI database indicates that the incidence of D92 has declined. Although there are several factors that determine the fitness of a virus to be maintained in the environment, our sequence analysis suggests that cleavage of the NS1 protein does not provide a competitive advantage to Influenza B viruses in the natural environment. Interestingly, we have thus far failed to detect a similar cleavage of the NS1 protein in Influenza A laboratory isolates, suggesting that this phenomenon may be specific to Influenza B viruses. Analysis of the NS1 protein sequences from our sequence collection alone would not have provided any obvious prediction that the NS1 protein would be cleaved into two protein species. Therefore, although sequence analysis can provide useful information about virus evolution, our analysis highlights the utility of examining the biological properties of individually expressed proteins from non-cultured clinical strains to provide a fuller understanding of virus evolution in the natural environment. The biological significance of NP92–93 in the NS1 protein will require further examination using recovered clinical strains.
2018-04-03T01:10:30.183Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "5fe4cb143cde100d481da629a61c456651d995a1", "oa_license": "CCBY", "oa_url": "https://mgen.microbiologyresearch.org/deliver/fulltext/mgen/2/8/mgen000082.pdf?isFastTrackArticle=&itemId=/content/journal/mgen/10.1099/mgen.0.000082&mimeType=pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6623db0177cd3a7c7b0c04eabf679fb5aba1ef16", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
214341236
pes2o/s2orc
v3-fos-license
The Rise of Climate-Driven Sediment Discharge in the Amazonian River Basin
The occurrence of hydrological extremes in the Amazon region and the associated sediment loss during rainfall events are key features in the global climate system. Climate extremes alter the sediment and carbon balance, but the ecological consequences of such changes are poorly understood in this region. With the aim of examining the interactions between precipitation and landscape-scale controls of sediment export from the Amazon basin, we developed a parsimonious hydro-climatological model on a multi-year series (1997–2014) of sediment discharge data taken at the outlet of the Óbidos (Brazil) watershed (the narrowest and swiftest part of the Amazon River). The calibrated model (correlation coefficient equal to 0.84) captured the sediment load variability of an independent dataset from a different watershed (the Magdalena River basin), and performed better than three alternative approaches. Our model captured the interdecadal variability and the long-term patterns of sediment export. In our reconstruction of yearly sediment discharge over 1859–2014, we observed that landscape erosion changes are mostly induced by single storm events, and result from coupled effects of droughts and storms over long time scales. By quantifying temporal variations in the sediment produced by weathering, this analysis enables a new understanding of the linkage between climate forcing and river response, which drives sediment dynamics in the Amazon basin.

Introduction
The transfer of sediment and organic carbon from the terrestrial biosphere to the oceans via erosion and riverine transport constitutes an important component of global carbon sequestration [1–3] and nutrient cycling [4,5], resulting in flow transfer among Earth's reservoirs [6]. The South American continent is a region with particular climate–biosphere interactions [7,8]. It also presents high vulnerability to water erosion [9], with average soil loss rates that are significantly higher than the world average [10]. In fact, together with south-eastern Asia, the Amazon basin is the land area under the greatest effect of erosive precipitation [11]. However, especially for historical times, surface processes and soil erosion related to the hydrological cycle are still not sufficiently well understood [12,13]. As the spatial and temporal resolutions of multi-proxy records have increased in recent years, some hydro-climatic shifts have been recognized as important aspects of environmental applications [14]. These hydrological changes have a multi-scale nature, operating over annual-to-multidecadal time scales where water can be viewed as both a resource and a land-disturbing force [15]. Here, we examine the relationships between regional climate and river sediment discharge (RSD), identifying the mechanisms that influence sediment discharge, and discuss how the RSD is likely to change in response to variations in the precipitation regime associated with climate change. For instance, at regional and sub-regional scales, climatic factors such as precipitation and rainstorms represent a notable kinetic energy causing erosive splash and runoff [52] as a function of their amount and intensity, while vegetation represents an opposing force to this release of kinetic energy at the soil surface.

Environmental Setting
Amazonia is a continental area ranging from about 10° N to 20° S latitude and from 50° W to 80° W longitude, with an altitudinal gradient from 0 to >6000 m a.s.l. (Figure 1a). Garstang et al. [53] showed that some of the largest squall lines in the world occur over the Amazon region.
Precipitation distribution for this region is characterized by a large amount of rain concentrated within six months (December–May), reflecting a typical monsoon (tropical) climate. This region receives an important contribution from the South Atlantic Convergence Zone, which consists of a moisture strip extending from North to Southeastern Brazil, and adds to the influence of frontal systems and convective activity [54]. The occurrence of rainfalls of high magnitude constitutes a primary natural cause of erosion hazard for this part of the world (Figure 1b). The largest rainfall erosivity values were found across Colombia and northwest Brazil, where erosivity may exceed 15,000–20,000 MJ mm (ha h)⁻¹ year⁻¹ (megajoules millimeter per hectare per hour per year), corresponding to a high rainfall erosivity potential for sediment production. High volumes of rain can explain these values, the mean annual rainfall being greater than 3000 mm in the Amazon Forest. Because of this pluvial regime, convective storm events are frequent throughout the year, producing high values of rainfall aggressiveness. Climatic events such as droughts and storms are sometimes clustered into short-term groups [56,57], with some stormy years nested in dry periods. The implication is that recently observed rainfall changes may be an indicator of changes that occur in hydrological processes across Amazonia. This implies that river sediment rates are sensitive to the hydro-climatic forcing, which facilitates the application of a parsimonious framework for sediment yield time-series prediction.
Data Sources
Eighteen years of continuous annual values of river suspended-sediment loads for the Amazon basin are available for the period 1997–2014. Yearly sediment data were derived from the HYBAM Program [50] and are based on 10-day sampling at the Óbidos hydrological station [58]. At Óbidos, the Amazon watershed covers 4.8 × 10⁶ km², and the river mouth is located 900 km downstream (Figure 1a). The study period shows a relatively stable river discharge. The observed sediment load for the Magdalena River basin over the period 1972–1998 was used for an independent assessment. For the monthly climate dataset, we used CRU TS3 via Climate Explorer [59] to obtain monthly values of precipitation, both for the calibration period (1997–2014) and for the partial reconstruction. The dataset of monthly rainfall derived from the National Oceanic and Atmospheric Administration's (NOAA) reanalysis data [60] was used to go back in time to 1851 (data from previous years are less reliable and were not used).

Modelling Approach
Espinoza Villar [61] showed for the Amazon basin that changes in discharge extremes are related to the regional plurennial rainfall variability and the associated atmospheric circulation, as well as to tropical large-scale climatic indicators. Based on river suspended-sediment loads from Amazonia, the most erosive period was assigned to December–April [62], although large amounts of rainfall fall in every month. In this way, the RSDA model offers a suite of major monthly-based hydro-geomorphological events that jointly contribute to the annual sediment storage at both basin and catchment (sub-basin) scales (Figure 2), operating from sub-decadal time scales. Based on this understanding, rainfall power was captured by monthly rainfall amounts and their variability in different years. The model drives a number of soil loss and transport events along the river sub-basins to estimate yearly RSD (Mg km⁻² yr⁻¹):

RSD(Y=0) = α · (1 + √(σ/µ)) · (CGE + BGE) + ∆,   (1)

where Y = 0 is the current year; σ and µ are the standard deviation and the mean of the monthly precipitation (P, mm), respectively; CGE is the catchment gross erosion and BGE is the basin gross erosion, both multiplied by a function of the σ/µ ratio calculated each year on the monthly precipitation amounts in the period June–December (J–D) (the higher the ratio, the more aggressive the rainfall, and vice versa, after Aronica and Ferro [63] and Diodato et al. [64]); ∆ represents the sediment amount trapped in the basin; and α is a conversion factor. In Equations (1)–(3), variable subscripts and superscripts take negative values to bound the time window of years (Y) antecedent to the year for which the estimate is made (Y = 0), over which the 85th and 90th percentiles (prc85 and prc90, respectively) of cumulative monthly precipitation (P(J–D), mm) are calculated. In particular, the CGE component of Equation (2) was designed to represent a simplified catchment-scale process, where antecedent precipitation events are assumed to be important for the transport of soil through the catchment. Equation (2), which uses the 85th percentile of precipitation, is intended to capture cumulative rainfall and the associated effects on soil erosion and sub-basin-scale transport within the preceding years (Y = −1 to Y = −4). In Equation (3), the BGE component of the model was designed to capture the long-term memory interaction between precipitation and basin, according to the 90th percentile at the plurennial regime.
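To make this structure concrete, the following minimal Python sketch implements one plausible reading of Equations (1)–(3) under stated assumptions: CGE and BGE are taken to be the 85th and 90th percentiles of cumulative June–December precipitation over the antecedent windows Y = −1…−4 and Y = −5…−8, combined as in the reconstructed Equation (1); the parameter values are those reported in the parameterization section below. It is a sketch of the described logic, not the verified published formulation.

```python
import numpy as np

ALPHA = 0.469     # conversion factor alpha (value reported below)
DELTA = -132.660  # sink term Delta, Mg km-2 yr-1 (value reported below)

def rsd_estimate(monthly_p, y):
    """Sketch of the RSDA yearly estimate for year index y (requires y >= 8).

    monthly_p: array of shape (n_years, 12) of monthly precipitation (mm).
    The functional forms of CGE/BGE and their combination are assumptions.
    """
    p0 = monthly_p[y]                            # monthly precipitation of year Y = 0
    sigma, mu = p0.std(), p0.mean()
    p_jd = monthly_p[:, 5:12].sum(axis=1)        # cumulative June-December totals
    cge = np.percentile(p_jd[y - 4:y], 85)       # catchment gross erosion, Y = -1..-4
    bge = np.percentile(p_jd[y - 8:y - 4], 90)   # basin gross erosion, Y = -5..-8
    return ALPHA * (1.0 + np.sqrt(sigma / mu)) * (cge + bge) + DELTA
```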
The precipitation percentiles have been successfully used elsewhere to yield relationships between precipitation and the erosivity factor [65], and between precipitation and soil erosion [66]. In the conceptual 2-D scheme (Figure 2), the role played by mesoscale rainstorms (extending over contiguous areas) in initiating sediment transport is accounted for at the basin scale (BGE, Basin Gross Erosion). This scheme also assumes the importance of local precipitation events for driving surface and sub-surface flows, and soil losses within the individual catchments and the web of streams (CGE, Catchment Gross Erosion) that comprise the basin. Thus, storm events are grouped based on their scale and then hierarchized according to communication delays between each component of the spatio-temporal hydrological integration. This dynamic is in agreement with the large variability in amplitude and temporal dynamics from one year to another, which is linked to the interannual variation of climatic controls (after [67]).

Model Assumptions
Following Callède et al. [69] and Espinoza Villar et al. [70], the RSDA model takes into account both long- and short-term rainfall variability, which leads to a better understanding of soil movement by storms and transport to the main stream of the Amazon River, particularly with respect to extreme storms that occur in remote areas of the basin. A third component (inter-monthly variability) also appears to be significant when we consider a function of the inter-monthly coefficient of variation, computed as (1 + √(σ/µ)), based on a concept translated from Aronica and Ferro [63]. This would reflect the erosion activity associated with changes in storm–drought cycles at the intra-decadal scale (from Y = −1 to Y = −8).
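As a worked illustration with assumed numbers (not values from the study): a year whose monthly precipitation has mean µ = 250 mm and standard deviation σ = 160 mm gives an inter-monthly variability factor of 1 + √(160/250) = 1 + √0.64 = 1.8, whereas a more uniform year with σ = 40 mm and the same mean gives 1 + √0.16 = 1.4, so the more storm-punctuated year amplifies the modelled gross erosion by roughly 29%.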
The CGE factor operates in any catchment of the basin as soil detachment by raindrop-splash erosive forces, driven by seasonal rain showers. At the annual scale, the fluvial transportation process cannot always take the necessary pathways or links to convey the sediment to the outlet of the basin. In this way, the plurennial rainfall accounted for in the RSDA model acts to redistribute sediment across the drainage basins (Equation (2)). Afterwards, the percentile of the antecedent rainfall within Y = −5 to Y = −8, as in Equation (3), includes the streamflow and water-level long-memory processes associated with the massive storage capacity of the Amazon basin [67]. The last component of the model (∆, Mg year⁻¹) in Equation (1) is a sink term. It represents the amount of sediment involved in the re-sedimentation process, which is a fraction of the gross erosion (GE, Mg year⁻¹). Based on Diodato and Grauso [71], the term ∆ can be expressed as

∆ = −(1 − SDR) · GE,   (4)

where SDR (sediment delivery ratio) is the ratio of the sediment yield at the catchment outlet to the total (gross) erosion in the catchment. The concept is analogous to the connectivity ratio (the amount of sediment reaching a stream over the amount of sediment eroded), which refers to slope–channel transfers (e.g., [72]).

Model Calibration
For the time series of available annual River Sediment Discharge (RSD) data, a recursive procedure was performed to obtain the best fit of a regression equation y = a + b·x, where y is the observed RSD and x is the predicted RSD, according to the following criteria (jointly referred to as Equation (5)). The first condition is to minimize the distance between modelled and observed data, by minimizing the mean absolute error (0 ≤ MAE < ∞, Mg year⁻¹ [73]). The second condition is to maximize the goodness-of-fit (0 ≤ R² ≤ 1), that is, the variance explained by the model (also supported by an ANOVA test of the relationship between observed and predicted data). The third condition, |b − 1| = min, approximates the unit slope (b) of the straight line, minimizing the bias of the linear function estimates versus the observations. Poor models have high MAE, low R² and b far from unity. The calibration work was performed through a trial-and-error process comparing the model predictions with observational data.
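A minimal sketch (our own code, with assumed function names) of the three selection criteria just described, which can be evaluated for each candidate parameterization during the trial-and-error search:

```python
import numpy as np

def calibration_criteria(obs, pred):
    """Return (MAE, R2, |b - 1|) for observed vs. predicted RSD series.

    MAE should be minimized, R2 maximized, and |b - 1| (slope bias of the
    regression obs = a + b*pred) minimized, as in the criteria above."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mae = float(np.mean(np.abs(obs - pred)))
    b, a = np.polyfit(pred, obs, 1)               # slope b and intercept a
    r2 = float(np.corrcoef(obs, pred)[0, 1] ** 2) # variance explained
    return mae, r2, abs(b - 1.0)
```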
We iteratively added predictors, one at a time, until modelling solutions with a small mean absolute error and a large R² value were obtained. Then, for the final selection, the third criterion, |b − 1| = min, was additionally involved. Each predictor was repositioned over >50 iterations until convergence was achieved. An analysis of variance (ANOVA) was subsequently applied to find out whether all predictors were necessary (and not redundant) for the modelling purpose. The Durbin–Watson statistic [74] was also computed to test for autocorrelated residuals, because strong temporal dependence may induce spurious correlations [75]. Spreadsheet-based statistical analyses were performed with the graphical support of STATGRAPHICS [76] and WESSA routines [77].

Model Parameterization and Evaluation
Equation (1) was parameterized with α = 0.469 and ∆ = −132.660 Mg km⁻² year⁻¹. At the same time, we derived the parameters of Equation (2) with the percentile equal to 85 and the time window equal to four years, and those of Equation (3) with the percentile equal to 90. The solutions obtained constitute a satisfactory performance for the output variable according to the criteria of Equation (5). Since one-way ANOVA computed a p-value < 0.05, there is a statistically significant relationship between observed and predicted RSD. The R² statistic indicates that the fitted model explains 70% of the variability in y (Figure 3a). The standard deviation of the residuals was equal to 7.4 Mg km⁻² year⁻¹.
The mean absolute error (MAE) of the parametrized model was 5.8 Mg km⁻² year⁻¹, compared with an annual mean value of 131 ± 13 Mg km⁻² year⁻¹ over the study period (1997–2014). Except for two records that lie beyond the 90% prediction limits, negligible differences of the data points from the theoretical 1:1 line are observed (Figure 3a). The quasi-Gaussian pattern of the model residuals (Figure 3b) indicates that these data are bias-free. The Durbin–Watson (DW) statistic (1.83671) provided no indication of serial autocorrelation in the residuals (p = 0.2736). Figure 3c presents an independent validation of the RSDA model, obtained by comparing the predicted sediment discharge with the observed sediment load for the Magdalena River basin over the period 1972–1998. Except for the absolute values of sediment, which are logically different because the model would need to be recalibrated for the Magdalena River, the relative coevolution is satisfactory (both the blue curve and the brown bars in Figure 3c follow the same trend). In fact, the correlation coefficient equals 0.52, indicating a moderately strong relationship between the variables. The ANOVA p-value is less than 0.05, and the Durbin–Watson statistic is equal to 1.94596 (p-value = 0.3997).

To further evaluate the RSDA model based on hierarchic monthly rainfall data, we compared its performance with three well-known approaches developed for sediment rate (SR) estimation that use water discharge (WD), SRWD(Y) [79], the Fournier Index (FI), SRFI(Y) [30], and precipitation characteristics (SAR: soil antecedent rainfall), SRSAR(Y) [80], as the main explanatory drivers of basin-wide sediment yield (Equations (6)–(8)), where WD is the annual water discharge (m³ s⁻¹), P is the total precipitation (mm) of the year Y, p is the maximum monthly precipitation (mm) in each year Y, and a (scale coefficient) and b (a shift coefficient estimating the sediment rate when the precipitation input is equal to zero in Equations (7) and (8)) are empirical parameters used for model calibration in the Amazonian basin. The comparison with the water-discharge, Fournier Index and antecedent-rainfall prediction equations revealed that RSDA performed better, as the residuals of each alternative model were larger and the explained variability considerably lower than with RSDA (Figure 4). The mean absolute errors were 2.10 Mg km⁻² year⁻¹ for RSDA, and 10.37, 10.62 and 9.27 Mg km⁻² year⁻¹ for the water-discharge, Fournier Index and antecedent-rainfall models, respectively. The better performance of the RSDA model compared to the water-discharge-based model of Equation (6) is important because the latter is frequently used for basin-wide estimates of sediment yield (e.g., [81]), but its use in scenario studies is often hindered by the unavailability of long-term water discharge data [82,83].

Discussion on the RSDA Model
An increase in sediment discharge in the Amazon basin may be attributed to stronger erosion processes caused by either a regional change (rainfall), or changes in land cover (e.g., resulting from deforestation), or both. Borrelli et al. [10] estimated a notable increase of soil erosion in the study area due to deforestation and increased conversion to agricultural land.
Callède et al. [49] observed a rather stable river discharge at Óbidos in the period 1997–2007. This indicates that hydrological discharge cannot be a suitable proxy for estimating the RSD, since river sediment seems to increase in the same period. Stronger rainfall variability upstream may support a more efficient production and transport of sediments downstream. Thus, a change in rainfall pattern may account for sediment discharge variations [51]. In this respect, the interacting factors σ, µ, CGE, BGE and ∆ in Equation (1), involved in the temporal response of RSD, reflect the magnitude and frequency of events nested within longer-term patterns of climate change at different timescales [84]. In particular, given the occurrence of multiple processes, the long-term constant ∆ cannot be easily calculated. The ∆ value estimated for the Amazon River basin (ARB) with Equation (1) is in line with values reported elsewhere [85–87], which are typical for moderate to high elevation ranges and slope gradients. The ARB has widely varying climatic and topographic features, with precipitation patterns ranging from about 1500 (in the lower basin near the outlet) to 6000 mm year⁻¹ (in the south-western part of the basin near the Andes), and elevations ranging from sea level (the river's mouth) to ~6500 m a.s.l. in the Andes [88]. However, only the extreme western part has steep gradients. Crossing the low interior basin of Brazil, the Amazon River flows along gentle gradients of about 5–20 cm per km [89]. The estimated average gross erosion for the simulated period is about 245 Mg km⁻² year⁻¹, which gives a sediment delivery ratio (Equation (4)) equal to 0.46, indicating that high amounts of soil are mobilized by erosion (gross erosion) but a relatively high fraction is retained in the basin area. This result matches the lookup values given by Pelletier [46] for the Amazonian region.

River Sediment Discharge Historical Reconstruction
We applied the RSDA model to reconstruct the sediment discharge from 1859 to 2014. In this time span, the mean value of sediment discharge is 113 ± 11 Mg km⁻² year⁻¹. The overall trend of the long-term reconstructed hydro-climatic forcing for the sediment discharge series is increasing, within sudden shifts, featuring decadal-to-multidecadal patterns of variability (Figure 5a).
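The break point reported next is located with the cumulative-deviation (Buishand) test named in the Figure 5 caption; a minimal sketch of that statistic (our own implementation, with significance left to standard look-up tables) is:

```python
import numpy as np

def buishand_change_point(series):
    """Locate the most likely single change point in the mean of a series.

    Returns (k, q_norm): the index after which the mean shifts, and the
    rescaled cumulative-deviation statistic Q / sqrt(n) to compare against
    tabulated critical values."""
    x = np.asarray(series, float)
    n = x.size
    s = np.cumsum(x - x.mean())       # partial sums of deviations from the mean
    s_rescaled = s / x.std(ddof=1)    # rescaled adjusted partial sums
    k = int(np.argmax(np.abs(s_rescaled)))
    q_norm = float(np.abs(s_rescaled).max() / np.sqrt(n))
    return k, q_norm

# Applied to the 1859-2014 reconstruction, the maximum of |S_k| would be
# expected near the 1931 break discussed below.
```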
In particular, the break point that occurred around 1931 (bold vertical line in Figure 5a) implies a discharge increase after that date. Until 1931, the average sediment discharge was about 105 ± 5 Mg km−2 year−1, followed by an increase between 1932 and 2014, with a mean value around 120 ± 10 Mg km−2 year−1. The last decade also reveals a stronger inter-annual variability in the sediment rate as compared to previous decades, with outliers in 2010 (144 Mg km−2 year−1), 2012 (146 Mg km−2 year−1) and 2013 (143 Mg km−2 year−1).

Figure 5. (a) Reconstructed sediment discharge series (Table S1) with superimposed mean values (orange dotted lines) before and after the first change point of the series in 1931 (bold red vertical line), as found by the cumulative deviation (Buishand) test, and the annual evolution of the Niño-4 index (grey curve, from [90]) with its smoothed long-term trend (black curve). (b) The rainfall rate change during the wet season across South America, as derived from the CRU dataset (arranged from [91]).

The change point in 1931 corresponds to the beginning of the increasing phase of the Niño-4 index (black curve), which is related to the variability of sea surface temperatures occurring in the central region of the Southern Pacific Ocean [92]. A change in circulation as part of a tropical-wide climate reorganization was observed during recent decades, most likely triggered by rapid tropical Atlantic warming [93]. A physical link may exist between the overturning Walker circulation [94], which connects the tropical ocean basins and has an ascending branch right over Amazonia, and intensified deep convection and flooding in the region [37]. During the positive phase of ENSO (El Niño years) in the central equatorial Pacific, in particular, above-normal precipitation can be observed over Amazonia in autumn, winter and spring [95], resulting in significantly higher water erosion rates and an undesirable lengthening of the sedimentation period [96].
This is in agreement with the results reported by Aalto et al. [97], who found transient processes driven by the ENSO cycle to control and modulate the downstream delivery of sediments to the Amazonian floodplains. Alongside ENSO, the Atlantic Multidecadal Oscillation (AMO) also affects the decadal and multidecadal fluctuations of precipitation in the Amazon basin [98]. This in turn affects the sediment mobilization associated with erosive rainfall events. In particular, Mello et al. [99] found a correlation between erosive events in Brazil and the sea surface temperature of the Equatorial Pacific region (El Niño-3.4). This means that in years with significant ENSO events, the rainfall regime tends to intensify in this region, which implies increased rainfall erosivity and, therefore, higher sediment rates. Accordingly, Figure 5b shows the positive change in precipitation rate during the six-month wet season (DJFMAM) over 1979-2015 across most of the Amazon basin, with a spatial range from 30 to 100 mm per season [91]. Marengo and Espinoza Villar [36] also found that hydrological data show trends towards more extreme events across the Amazonia region during the 20th century. However, it is noteworthy that no significant positive trend in river discharge was observed during these recent decades (e.g., [100]). This agrees with the mechanism proposed by Cohen et al. [101], who found that regions with high relief and soft lithology amplify the effect of higher-than-average precipitation by producing an increase in sediment yield that greatly exceeds the increase in water discharge.
These results are also in agreement with the increased sediment loads found by Martinez et al. [51]. In addition, the changes in precipitation and discharge associated with Amazonian deforestation demonstrate a potential for significant vegetation shifts and further feedbacks to climate and discharge [102].

Conclusions

This paper documents recent progress in the study and understanding of extreme seasonal events in the Amazon region, focusing on the effects of pulsed floods on soil erosion. Fluvial responses may be dominated by the climatic shift in such circumstances, and climate change may also induce land-use changes, for instance by making agriculture either possible or impossible. This is why an explicit separation of the effects of climate and land-use changes on river sediments is complicated. Change in land use likely plays a more important role at the centennial timescale, while climate change may have had a strong impact and exerted important feedbacks on erosional processes and sediment transport in recent decades [103]. When only monthly rainfall data are available, the use of models based on percentiles of the monthly precipitation distribution is desirable for long-term reconstructions [104]. The newly developed parsimonious RSDA model provides satisfactory estimates of sediment discharge as a function solely of hierarchic antecedent rainfall data for the Amazon basin. It invokes a combination of monthly-based precipitation factors associated with rainfall amount and variability to explain the sustained sediment rates of the Amazon basin in recent decades. Our model estimates sediment discharge, as generated by hydrological and climatic forcing, better than other competing parsimonious models in widespread use worldwide (e.g., [105,106]). Without claiming to provide information about the hydrology of the region, this study demonstrates the importance of the antecedent rainfall distribution and of the memory in the precipitation-runoff interaction for predicting the hydrological forcing of basin-wide sediment discharge. This suggests several hydrological implications, which must be taken into account when, for instance, the consequences of hydropower dams, mining, plantation expansion, and deforestation are monitored. Though the lack of long-term records of sediment yields has hampered a complete evaluation of the RSDA model, the rationale of the model is that erosive events are reflected by a year-long memory of precipitation events and their antecedent monthly variability.
We add that the seasonal windows and percentile values over which erosion processes dominate and are relevant for sediment export rates remain critical and may require review in the future (as the sample size increases) to ensure the reliability of RSDA estimates at sites where detailed pluviometric series are missing. Moreover, our approach does not distinguish between channel and floodplain erosion, which would require a mechanistic approach. Even with these limitations, our results provide useful insights. First, they demonstrate the appropriateness of a semi-empirical (parsimonious) hydro-climatic model as a way to represent long-term erosive dynamics in an environmentally sensitive target area such as the Amazon basin. Second, they suggest that extreme hydrological events have become more frequent in the last decades in the study area. Third, the methodology used (which links variations in sediment discharge to changes in ocean circulation) generates climatologically interpretable sediment series. In fact, our results are consistent with changes in the variability of the hydrometeorology of the basin and add to a complementary body of literature that elucidates the mechanisms by which large-scale (ocean) phenomena drive soil erosion in Amazonia. In particular, some recent intense rainfall events and subsequent floods were associated (though not exclusively) with El Niño events occurring in the central equatorial Pacific Ocean. The latter point is promising for novel studies aimed at enhancing research capabilities in hydrologic modelling and forecasting in the Amazon basin.
Towards a more objective evaluation of modelled land-carbon trends using atmospheric CO2 and satellite-based vegetation activity observations

Terrestrial ecosystem models used for Earth system modelling show a significant divergence in future patterns of ecosystem processes, in particular the net land–atmosphere carbon exchanges, despite a seemingly common behaviour for the contemporary period. An in-depth evaluation of these models is hence of high importance to better understand the reasons for this disagreement. Here, we develop an extension for existing benchmarking systems by making use of the complementary information contained in the observational records of atmospheric CO2 and remotely sensed vegetation activity to provide a novel set of diagnostics of ecosystem responses to climate variability in the last 30 yr at different temporal and spatial scales. The selection of observational characteristics (traits) specifically considers the robustness of information, given that the uncertainty of both data and evaluation methodology is largely unknown or difficult to quantify. Based on these considerations, we introduce a baseline benchmark – a minimum test that any model has to pass – to provide a more objective, quantitative evaluation framework. The benchmarking strategy can be used for any land surface model, either driven by observed meteorology or coupled to a climate model. We apply this framework to evaluate the offline version of the MPI Earth System Model's land surface scheme JSBACH. We demonstrate that the complementary use of atmospheric CO2 and satellite-based vegetation activity data allows pinpointing of specific model deficiencies that would not be possible by the sole use of atmospheric CO2 observations.

Introduction

The terrestrial and oceanic biospheres currently absorb almost half of the fossil-fuel emissions, and thereby buffer the atmospheric CO2 increase and reduce the rate of climate change (Cox et al., 2000; Raupach et al., 2008; Le Quéré et al., 2009). Because of the strong interactions between the biosphere net carbon (C) uptake and climate, in particular on land (Cox et al., 2000; Friedlingstein et al., 2006; Arora et al., 2013), projections of future climate changes from Earth system models (ESMs) need to accurately simulate the processes that control the evolution of the terrestrial net C balance. However, despite a seemingly common behaviour of C cycle models for the contemporary period, estimates of the future land C balance by different terrestrial biosphere models (TBMs) diverge significantly. This divergence contributes strongly to the overall uncertainty in the future evolution of the global carbon cycle (Friedlingstein et al., 2006; Sitch et al., 2008; Arora et al., 2013). The apparently contradictory behaviour underlines the difficulty of constraining future projections of terrestrial models with current observations. This calls for an in-depth model evaluation that focusses on the model's capacity to simulate key features of C-cycle-related processes rather than simply ensuring that the easily diagnosed simulated net land-atmosphere C exchange agrees with estimates inferred from observations. Several global model evaluation analyses have been published in the last decades with respect to land model performances of the carbon cycle (Anav et al., 2013; Cadule et al., 2009; Blyth et al., 2009; Randerson et al., 2009; Heimann et al., 1998). However, they differ with respect to the reference datasets used and the selection of the observational traits, as well as
their computation, and the mathematical formulations used to quantify the data-model mismatch. These differences cause uncertainty when it comes to ranking several land surface models or to analysing the outcomes of different evaluation exercises. Recent model benchmarking initiatives (Randerson et al., 2009; Luo et al., 2012) have therefore underlined the need for the development of a standard set of tests and metrics applicable to any land surface model at different spatial and temporal scales.

In addition to a lack of standards, a key challenge in evaluating global biosphere models comes from the uncertainties in observations. From the perspective of quantifying data-model mismatch, operators linking model and data exist, given the uncertainties in data and observations. However, data errors and structural errors are often not known or not provided quantitatively (e.g. Raupach et al., 2005).

This study is an attempt to move toward a more robust and more objective evaluation framework by defining novel tests/diagnostics and quantitative model performance measures that are robust against these unquantifiable uncertainties. We first selected a parsimonious number of reference datasets that are, as much as possible, direct observations. In the first instance, upscaled products such as that of Beer et al. (2009) were not used, as the fraction of gap-filled information is not quantified. Atmospheric CO2 and remote sensing data of vegetation activity were selected to take advantage of their spatial and temporal coverage and the complementarity of their information content.

Atmospheric CO2 measurements, together with the transport modelling that links surface fluxes to these measurements, are a valuable approach to evaluate TBMs, since atmospheric CO2 retains the signature of the terrestrial ecosystem response to climate variability (Heimann et al., 1998; Randerson et al., 2009; Cadule et al., 2010). However, atmospheric CO2 observations alone do not allow inference of the contribution of vegetation and soil components to the observed signal, such that a good fit might hide compensating model errors. Remote-sensing observations of vegetation activity may provide complementary information as they reflect the climate- and disturbance-related seasonal and interannual trends of vegetation greenness (Peñuelas et al., 2009; Richardson et al., 2009).

Rather than comparing average quantities, the analyses presented here examine how much relevant and robust information, which helps constrain model projections, can be extracted from observations. Hence, we select traits, in particular with respect to vegetation activity, that are based on the information of changes with time, correlations with covariates, and the sign of the changes, as well as on metrics that are sensitive to differences in sign and phase. The phasing and extent of the climate variability simulated by Earth system models (ESMs) often differs from the observed climate because of unforced variability (Deser et al., 2010). To circumvent the resulting mismatch from a direct comparison of ESM simulations and modern observations, and to make key characteristics of the observations useful for the evaluation of ESMs, priority was given to traits and metrics that describe the relationship between climate variables and carbon cycle processes rather than the direct comparison of observed and modelled time series.
The second innovation of our study is that we impose a lower acceptable model performance measure (baseline benchmark) based on the assumption of a null model, i.e. a model that does not show any trend in the quantity under investigation. This lower boundary for each metric helps to avoid misleading interpretation of the number returned by the scores, and provides a more informative and intuitively interpretable analysis of the model performance. With respect to the atmospheric CO2 traits, the aim is to quantify how much information the land surface model adds to the signal of ocean and anthropogenic fossil-fuel emissions, and thus to quantify how good the model is relative to the null hypothesis (null model). The working line is thus as follows: the analyses were performed on a seasonal and a de-seasonalized signal to better identify C-cycle patterns and the relationship between C-cycle-related processes and climate variability. As detailed in Sect. 2, we selected several characteristics (traits) of the observational data that are relevant to the biosphere's response to climate variability in terms of terrestrial C cycling patterns. We focussed on the last three decades since this is the period with the best data availability (Tables 2 and 3). For the selected tests, a list of comprehensive metrics was selected to quantify model performances according to the information content of the identified traits. We then compared each metric to the reference value of the metric obtained from the baseline benchmark to arrive at a final score for the model. In Sect. 3 we discuss the potential strengths and limitations of the evaluation framework using the example of the JSBACH land surface model of the MPI-ESM (Raddatz et al., 2007; Giorgetta et al., 2013) driven by reconstructed meteorology.

Atmospheric CO2

Atmospheric CO2 concentrations recorded at remote measuring stations were obtained from the flask data/continuous measurements provided by different institutions (e.g. flask data of NOAA/CMDL's sampling network, update of Conway et al., 1994; Japan Meteorological Agency (JMA); Meteorological Service of Canada (MSC); and many others; see Rödenbeck, 2005). Simulated net land-atmosphere CO2 fluxes for the period 1980 to 2009 were transported together with estimated net ocean CO2 fluxes (Jacobson et al., 2007; Mikaloff Fletcher et al., 2006, 2007 – one of the best available products, based on the Takahashi ocean dataset and involving several biogeophysical ocean models) and fossil-fuel fluxes (EDGAR v.4.0, Olivier et al., 2001, http://edgar.jrc.ec.europa.eu/faq.php) by means of an atmospheric transport model (TM) to estimate the atmospheric CO2 record at the measuring stations. For our analysis, we used the TM3 model, version 3.7.22 (Rödenbeck et al., 2003), with a spatial resolution of 4° × 5° and driven by interannually varying wind fields of the NCEP reanalysis (Kalnay et al., 1996).

The model-based time series of CO2 at the measuring stations were based on sampling the simulated CO2 abundance at the same times at which measurements were available, in order to reduce the representation bias. The temporal resolution of the CO2 data is the original resolution as recorded at the monitoring stations (hourly to daily/weekly) and depends on the specific station.
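A minimal sketch of this co-location step, assuming pandas Series of modelled CO2 (continuous, sorted index) and observations (irregular timestamps); nearest-neighbour matching within a tolerance is one simple way to sample the model only where observations exist:

```python
import pandas as pd

def sample_model_at_obs(model: pd.Series, obs: pd.Series,
                        tolerance: str = "1D") -> pd.DataFrame:
    """Sample the simulated CO2 series at the observation times.

    model: simulated CO2 indexed by timestamp (e.g. hourly output,
           index must be sorted).
    obs:   observed CO2 indexed by (irregular) measurement timestamps.
    Returns a DataFrame of co-located model and observed values,
    dropping observation times with no model value within `tolerance`.
    """
    model_at_obs = model.reindex(obs.index, method="nearest",
                                 tolerance=pd.Timedelta(tolerance))
    return pd.DataFrame({"obs": obs, "model": model_at_obs}).dropna()
```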
Stations were selected in order to representatively cover a latitudinal gradient (Table 1). Latitudinal and vertical transport of CO2 differs among TMs (Yang et al., 2007), but these differences are difficult to quantify and to attribute to particular model features (Gurney et al., 2003; Peylin et al., 2005). At remote stations with simple topography, different TMs tend to agree better and are expected to have smaller errors. The selection of monitoring stations takes account of this by including mainly oceanic/island stations, as these remote stations have a lower uncertainty and are only marginally influenced by local C sources or sinks (MPI Biogeochemistry, technical reports 5-6: http://www.bgc-jena.mpg.de/bgc-systems/pmwiki2/pmwiki.php/Publications/TechnicalReports).

Two estimates of the net land-atmosphere CO2 flux obtained by inverting the observed atmospheric concentrations using atmospheric transport modelling (hereafter referred to as standard fluxes) were also transported using the same protocol as for the simulated TBM fluxes. These fluxes were taken from the Jena inversion system, which relies on the same TM3 transport model (Jena inversion version 3.7.22, available at http://www.bgc-jena.mpg.de/~christian.roedenbeck/download-CO2/, update of Rödenbeck et al., 2003; Rödenbeck, 2005; covering the periods 1996-2008 and 1981-2008, respectively). The standard fluxes were not used to derive an absolute benchmark sensu stricto but as a reference to compute additional traits, as reported in Sects. 2.4.1 and 2.4.5.

Vegetation activity datasets

To characterize seasonal and interannual changes in vegetation activity, we rely on two satellite-based products: the SeaWiFS-FAPAR (Gobron et al., 2006a, b), the fraction of photosynthetically active radiation absorbed by vegetation, and the longer GIMMS-NDVI collection g (http://glcf.umd.edu/library/guide/GIMMSdocumentationNDVIg_GLCF.pdf), the normalized difference vegetation index retrieved from the AVHRR sensor records (Tucker et al., 2005; Beck et al., 2011). Both FAPAR and NDVI provide a measure of greenness integrating canopy functioning. It has previously been shown that these quantities are nearly linearly related (Myneni and Williams, 1994). The selected FAPAR data were provided as 10-day-aggregated time series from September 1997 until June 2006 at a nominal spatial resolution of 2 km and were used to analyse the seasonal cycle of vegetation activity (Table 2). The GIMMS dataset contains biweekly data at a spatial resolution of 8 km from 1981 until 2006 and was used to estimate long-term changes in vegetation activity (Table 3).
Satellite data were aggregated to the spatial resolution of the TBM, including grid cells that are partially covered by bare soil. With this approach, the aggregated signal indirectly accounts for changes in vegetation activity and density. A simple gap-filling procedure based on 2nd-degree polynomial interpolation in time was applied to replace bad-quality flagged data. All data were aggregated to monthly temporal resolution. In the case of GIMMS-NDVI, the maximum value composite (MVC) method was used (Holben, 1986). It is assumed that the process of temporal and spatial aggregation of satellite-based vegetation activity smoothes out noise in the data, and the uncertainty induced by the aggregation can be considered negligible for our purpose. Tropical areas were excluded from the analysis due to the high uncertainty in the interpretation of the satellite signal (Asner and Alencar, 2010) and the high uncertainties in NDVI datasets in these regions (Huete et al., 2002; Brown et al., 2006).

The JSBACH model

JSBACH is the land surface model of the Max Planck Institute's Earth System Model (MPI-ESM) (Raddatz et al., 2006; Giorgetta et al., 2013). In this study we use the version that was used for the CMIP5 activity (JSBACH version 2.0). JSBACH considers 11 plant functional types, which occupy annually varying fractions (tiles) of a model grid cell, prescribed from land-use data (see Sect. 2.2). Phenology and C cycling are simulated explicitly for each tile, while the half-hourly fluxes of energy and water are calculated for each grid cell, based on the relevant average properties of vegetation and soils across the tiles. The land-use emissions are computed according to the method reported in Reick et al. The meteorological forcing data were aggregated via conservative regridding to the T63 resolution of the MPI-ESM grid at daily resolution; these data were used as model forcing as well as for the climate correspondence analysis. The standardized precipitation index (SPI) was computed from the precipitation record of the CRU observational dataset (Mckee et al., 1993; Lloyd-Hughes and Saunders, 2002). The SPI is suitable as an indicator of both dry and wet soil conditions. Irrespective of biome or region, the 6-month cumulated precipitation was used to compute the SPI for each grid cell (see Appendix A for more details). Land-cover and land-use change transition maps were derived from Hurtt et al. (2006).

Evaluation methodology

The analyses in this study focus on seasonal and interannual/decadal timescales. To identify these components from the observed and simulated atmospheric CO2, as well as from vegetation activity and climatic drivers, a seasonal component (up to the annual timescale) and an interannual component were isolated using a filter implemented in Fourier space. We followed the method and the cut-off values presented in Thoning et al. (1989), using Gaussian spectral weights (Rödenbeck et al., 2003). The outcome of the filtering is (i) a seasonal component with a mean of zero, which retains information up to the annual frequency with the very high frequencies (daily to biweekly) removed, and (ii) a de-seasonalized signal, which includes all frequencies lower than the annual cycle, i.e. the interannual to decadal timescales.
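A minimal sketch of such a decomposition for a monthly series, assuming Gaussian spectral weights with illustrative half-power cutoff periods of 80 and 667 days (the cut-off values given by Thoning et al., 1989); the exact filter used in the study may differ in detail:

```python
import numpy as np

def decompose_series(x, dt_days=30.4375, cutoff_short=80.0, cutoff_long=667.0):
    """Split a monthly series into a zero-mean seasonal component
    (periods between the short and long cutoffs, i.e. up to the annual
    cycle) and a de-seasonalized component (periods beyond the long
    cutoff), using Gaussian weights in Fourier space."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mean = x.mean()
    spectrum = np.fft.rfft(x - mean)
    freq = np.fft.rfftfreq(n, d=dt_days)               # cycles per day
    # Gaussian low-pass weights with half power at the cutoff periods
    w_short = np.exp(-np.log(2.0) * (freq * cutoff_short) ** 2)
    w_long = np.exp(-np.log(2.0) * (freq * cutoff_long) ** 2)
    seasonal = np.fft.irfft((w_short - w_long) * spectrum, n)
    deseasonalized = np.fft.irfft(w_long * spectrum, n) + mean
    return seasonal, deseasonalized
```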
In terms of interannual variability, this filtering approach is more advantageous than the consideration of monthly anomalies, since a de-seasonalized signal provides a better measure of the strength and persistence of interannual variability related to climatic and natural events such as El Niño events and volcanic eruptions. The analysis of seasonal patterns aims not only at the relative phasing of vegetation growth, ecosystem respiration and modelled phenology, which affects the seasonal phasing of the net land-atmosphere C exchange (Prentice et al., 2000), but also at biogeophysical effects such as the water and energy exchanges (Notaro et al., 2007; Peñuelas et al., 2009). Interannual variability and long-term trends of net land C exchanges and vegetation activity are an important and crucial aspect of the terrestrial ecosystem in a climate change context. Changes in vegetation activity might have implications for the long-term potential to retain more C in the system, contributing hence to the biosphere-atmosphere feedbacks and internal plant-soil feedbacks (Bonan, 2008).

In the following sections, we describe key features of the atmospheric CO2 and vegetation activity obtained from the decomposed signals (Table 2: seasonal timescales; Table 3: interannual timescale). These traits are used to assess the capacity of the model to reproduce climate-variability-induced effects on terrestrial ecosystems. In addition, traits characterizing the co-variability of vegetation features/atmospheric CO2 and land climatic patterns are defined. Some of the selected traits were analysed separately in three time intervals (1982-1991, 1992-1997, and 1998-2006) according to two breakpoint events: the Mount Pinatubo eruption in 1991 and the El Niño event in 1997 – two of the most relevant natural events that occurred in the last three decades.

The systematic quantitative assessment of the correspondence of anomalies and trends in simulated vegetation activity and net C exchange is performed using normalized metrics (see Appendix B for the mathematical description). The proposed traits and metrics are suitable for application to land surface models run in either offline or fully coupled mode, because they are based on reproducing variability and/or statistical relationships with the driving climate rather than focusing on the absolute correspondence of the variables. This strategy reduces potential biases in the assessment due to uncertainty in the predicted climatic variability (Deser et al., 2010).

Geographical regions at the continental scale, consistent with the regions used for the Transcom3 project (Gurney et al., 2002; Fig. 1), were used to determine the influence of net land-atmosphere CO2 fluxes from a particular region on the signal at the monitoring stations, following the procedure reported in Cadule et al. (2010). The characterization of vegetation activity was performed at the grid-cell level and at the regional level according to the same Transcom3 regions. The Transcom3 region maps were further intersected with the dominant vegetation map obtained from the Synmap vegetation classification of Jung et al. (2006) (see Appendix D). Grid cells with a dominance of bare soil or ice, as well as grid cells with no valid observations, were excluded from the analyses.
Seasonality of Atmospheric CO2

The model's capacity to simulate the phase and amplitude of the mean seasonal cycle of atmospheric CO2 (MSC) was evaluated using the Taylor score (Taylor, 2001). The selected metric gives more weight to the correspondence in phase than in amplitude (Taylor, 2001), phase being the more reliable feature of transport models (Stephens et al., 2007). Additional information on the net land C exchange is contained in the latitudinal gradient of the amplitude of the mean seasonal cycle (MSClg), which increases from the South Pole northwards because of the relatively higher land mass fraction in the Northern Hemisphere (NH). A metric based on the variance of the amplitude data was used to assess the model performance (Table 2 and Appendix B).

The relative contribution of the C fluxes from land (and ocean) Transcom3 regions to the seasonal cycle amplitude (MSCc) was computed using, as reference, the atmospheric CO2 record obtained by transporting the standard fluxes constrained over the period 1996-2008. This choice was made so as to overlap with the time period for which the SeaWiFS-FAPAR data are available (see Sect. 2.4.2). The relative contribution of each region to each single monitoring station in both the standard fluxes and the modelled fluxes was compared using the Pearson correlation coefficient. This trait thus also checks for the existence of potential inconsistencies between the regional and seasonal distribution of net land C fluxes from the model and those estimated by the inversion of atmospheric observations.

Changes in the seasonal cycle over time, referred to as the monthly CO2 trend (MT), are quantified as the year-to-year change in CO2 concentration for each month. Previous works analysed solely the change in amplitude of the seasonal cycle at Mauna Loa as a response to land surface warming (Myneni et al., 1997; Angert et al., 2005; Buermann et al., 2007), while we focus on decadal trends at long-term northern stations, which exhibit a clearer signal. This trait summarizes the seasonal change in the trend of land C sinks/sources in response to climatic drivers and natural disturbances in the extratropical latitudinal band. The model-data correspondence is analysed using the Pearson correlation coefficient.

The trend in the seasonal onset of net land C uptake (C-dd) was computed as follows: for each year, the algorithm looks for the downward zero-crossing point of the seasonal time series of atmospheric CO2. The trend is thereafter computed on the extracted dates. This feature characterizes in particular the observed high-latitude ecosystem responses to recent land surface warming, and it is indirectly linked to the beginning of the growing season (Keeling et al., 1996; Myneni et al., 1997). Because the years 1991-1993 – i.e. the years following the Mount Pinatubo eruption – are an anomaly in this trend (Lucht et al., 2002), these three years were excluded from the analysis. The analyses for the MT and C-dd traits focus on the stations in the extratropical latitudinal band with a clear signal from land and low contamination of the trends by uncertainties in the fossil-fuel emissions (Table 2).
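A minimal sketch of the MT computation, assuming a 2-D array of monthly CO2 values (years × 12) at one station; the trend for each calendar month is taken here as the slope of an ordinary least-squares fit across years:

```python
import numpy as np

def monthly_co2_trend(co2_by_month: np.ndarray) -> np.ndarray:
    """Year-to-year trend of CO2 for each calendar month (MT trait).

    co2_by_month: array of shape (n_years, 12) with monthly values
    (NaN for missing months). Returns 12 slopes in ppm per year.
    """
    n_years, _ = co2_by_month.shape
    years = np.arange(n_years, dtype=float)
    slopes = np.full(12, np.nan)
    for m in range(12):
        y = co2_by_month[:, m]
        ok = ~np.isnan(y)
        if ok.sum() >= 3:                 # need a few years for a fit
            slopes[m] = np.polyfit(years[ok], y[ok], 1)[0]
    return slopes
```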
Seasonality of vegetation activity

A direct comparison of the absolute values of remote sensing data such as NDVI or FAPAR and their corresponding modelled variables may not be a viable strategy, first and foremost because of the different retrieval and post-processing algorithms used to compute the final FAPAR/NDVI estimates in different satellite products and to remove, for example, cloud contamination and atmospheric corruption (e.g. the intercomparison study of Dahlke et al., 2013). This implies that the outcome of a direct model-data comparison depends on the reference dataset used. In addition, the radiances recorded by satellites differ from the way radiation extinction is computed at the land surface in land surface models. This difference does not allow a priori for a perfect match between data and model.

Fig. 1. Map of the land regions used for the regional benchmark of phenology and the analysis of the biosphere fluxes, as defined in the TransCom intercomparison studies (Gurney et al., 2002). The map shows the regions at TM3 resolution. Code: North American Boreal (NAB), North American Temperate (NATe), South American Tropical (SATr), South American Temperate (SATe), Northern Africa (NA), Southern Africa (SA), Eurasian Boreal (EAB), Eurasian Temperate (EATe), Tropical Asia (TrA), Australia (AUS), Europe (EUR). The ocean was considered as a single region.

However, as shown in Dahlke et al. (2013) for the seasonal information, the temporal evolution of the recorded signal is likely to be a robust feature among datasets, and the temporal evolution of the modelled signal should resemble the reference dataset such that they can be evaluated by a metric that is independent of the absolute values of the time series. For the aforementioned reasons, we focused on metrics based on information about the time and sign of changes as indicated by the satellite data.

With respect to the seasonal signal, as a first step, grid cells with only one detected growing season per year were selected by analysing the autocorrelation of the seasonal record and its significance. The shape of the seasonality of vegetation activity was then characterized by two robustly identifiable and meaningful phases of the phenological cycle: the time of the beginning of the vegetative growing season, hereafter referred to as the time of onset (t-onset), and the time of the maximum FAPAR signal (t-max) (Randerson et al., 2009). Data and model signals characterized by a mean amplitude of the seasonal record within 1% of the total FAPAR range were excluded from the analyses. The definition of the beginning of the growing season is a subjective matter, and a direct and precise link to ground-level observations is difficult to identify (Lucht et al., 2002; Maignan et al., 2008; Verstraete et al., 2008). Analogously to the method for estimating the beginning of the net CO2 uptake reported in Sect. 2.4.1, the proxy for the time of onset of vegetation activity is calculated on the seasonal signal and corresponds to the upward zero-crossing point of the seasonal curve (see Fig. A1).
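A minimal sketch of the zero-crossing proxy used both for the CO2 drawdown date (downward crossing, Sect. 2.4.1) and for the vegetation onset (upward crossing), assuming a zero-mean monthly seasonal component for one year; the crossing month is refined by linear interpolation between the bracketing samples:

```python
import numpy as np

def zero_crossing(seasonal: np.ndarray, upward: bool = True) -> float:
    """Return the (fractional) month index of the first zero crossing.

    seasonal: zero-mean seasonal component for one year (12 values).
    upward=True  -> onset of vegetation activity (t-onset proxy);
    upward=False -> onset of net CO2 uptake (C-dd proxy).
    """
    s = np.asarray(seasonal, dtype=float)
    for i in range(len(s) - 1):
        crossing = (s[i] < 0 <= s[i + 1]) if upward else (s[i] > 0 >= s[i + 1])
        if crossing:
            # Linear interpolation between months i and i+1
            return i + s[i] / (s[i] - s[i + 1])
    return float("nan")                    # no crossing detected
```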
Linear differences in the most frequent month of the time of onset or maximum of FAPAR were computed between model and data. Consequently, this metric ranges between one (no difference) and zero (6-month difference). The length of the growing season was not used as an additional trait because it is poorly defined from satellite data, as autumnal leaf colouring and the simultaneous presence of living and dead leaves confound the satellite signal, in particular in temperate regions (Estrella and Menzel, 2006; Menzel et al., 2006).

Interhemispheric gradient and trend of atmospheric CO2

The long-term trend in atmospheric CO2 (C-LTT), given known fossil-fuel and land-use change emissions and net ocean carbon fluxes, is an indication of the long-term net C balance of the terrestrial biosphere (Prentice et al., 2000; Le Quéré et al., 2009). The trend was computed from the mean annual values of the de-seasonalized signals and compared directly to the observations for stations covering the period 1982-2008 (Table 1). The interhemispheric gradient in atmospheric CO2 abundance (IHG) measures the north-south differences in atmospheric CO2 caused by the changing balance of the increasing fossil-fuel emissions in industrialized regions and the net ocean and land C uptake. For each year, this trait was computed by subtracting the observed and modelled annual CO2 concentration at the South Pole station (SPO) from the respective station concentrations, as in Cadule et al. (2010). The metric was based on the comparison of the standard deviation of modelled and observed data.

Trend of vegetation activity

Similarly to atmospheric CO2, vegetation activity trends were computed from modelled and reference data. Beck et al. (2011) indicated that the GIMMS-NDVI dataset is suitable for assessing temporal changes in vegetation activity. However, due to the unknown uncertainty of the absolute NDVI values, the selected trait does not compare numerical trends. Instead, the selected metric focusses on the robust trends in the data: it determines the spatial patterns of positive, negative, or no significant trend in the vegetation signal from the GIMMS-NDVI dataset and compares them to the patterns in modelled FAPAR (Table 3). For each grid cell, the metric calculation was performed on annual values of the de-seasonalized vegetation time series. The non-parametric Mann-Kendall test was used to determine whether a positive (greening) trend, a negative (browning) trend, or no significant trend was detected (two-tailed statistic). The advantage of this approach is that it is robust against satellite drift and against high model-internal variability as induced, for instance, by high variability in the climate simulated by an Earth system model. At the grid-cell level, the metric is a binary score which measures whether the model and data show a significant trend of the same sign. The global-scale metric is then a ranking of the percentage agreement for cells of a particular trend class.
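A minimal sketch of the per-cell trend classification and agreement score, assuming annual de-seasonalized series per grid cell and a normal approximation for the Mann-Kendall statistic (no tie correction):

```python
import numpy as np
from scipy.stats import norm

def mk_trend_class(x: np.ndarray, alpha: float = 0.05) -> int:
    """Mann-Kendall trend class: +1 greening, -1 browning, 0 no trend."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # no tie correction
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                # two-tailed test
    return int(np.sign(s)) if p < alpha else 0

def trend_agreement(data_cells, model_cells):
    """Fraction of grid cells where model and data share a trend class."""
    classes_d = np.array([mk_trend_class(c) for c in data_cells])
    classes_m = np.array([mk_trend_class(c) for c in model_cells])
    return np.mean(classes_d == classes_m)
```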
Quantification of interannual variability: atmospheric CO2 and vegetation activity relationships with land climate patterns

The relationship between the seasonality of phenology and local climatic drivers at the grid-cell level was explored using the annual variations of the time of the beginning of the growing season (t-onset; Table 2). The time series of the SeaWiFS-FAPAR data is too short to allow for a trend analysis. Therefore, the correlation of the t-onset with the annual temperature, given the annual SPI as conditional variable, was taken as a proxy. A ranking metric, analogous to the vegetation activity trend metric, was computed according to the cell-by-cell agreement in terms of the sign of the statistic, i.e. according to a significantly positive, negative, or non-existent correlation.

Interannual variability in vegetation activity was assessed using de-seasonalized signals obtained from the GIMMS-NDVI/modelled FAPAR aggregated to the Transcom3 land regions. Cross correlations between monthly records of vegetation activity and regional climatic variables, temperature and SPI, were computed with lags of up to 24 months (Table 3). The South American Tropical region, the Tropical Asia region, and grid cells with a dominance of tropical forests in Africa are excluded from the analysis (see Sect. 2.1.2).

The same approach was used to measure the relationship between the atmospheric CO2 growth rate and land surface climate (Table 3). The atmospheric CO2 growth rate is well known to provide information on the interannual variability of the biospheric response to climate variability, and in particular the land response at the ENSO timescale (Keeling et al., 1995; Le Quéré et al., 2003; Peylin et al., 2005). However, most of the land surface climate shows some coherence with this large-scale climatic feature (Buermann et al., 2003), such that the CO2 signal in the atmosphere could be perfectly correlated, instantaneously or lagged, with climate over most of the land regions. To reduce this problem, an empirical orthogonal function (EOF) decomposition of the atmospheric CO2 records, obtained by transporting the "inverted fluxes" from each land region, was computed. The three most strongly contributing land regions (accounting for at least 80% of the variability in the observed total signal) at selected monitoring stations were determined, and only these were used in the analysis (see Appendix C).

The statistically significant cross correlations obtained from data and model (vegetation and atmospheric CO2 growth versus regional climate) were compared with a correlation metric in order to test whether the model is able to return the coupled patterns with time lags (see Appendix C). The use of inverted fluxes to determine the most contributing regions at the interannual timescale and for the EOF decomposition does not significantly affect the results in terms of model behaviour evaluation. However, it changes the degree to which the observations can effectively constrain the model if, in the model domain, a region contributes less than inferred from the inverted fluxes.

The last selected feature of the carbon cycle uses the CO2 growth rate to compute an apparent land C-cycle sensitivity to global temperature anomalies, defined as the slope of the annual CO2 growth rate versus the aggregated annual land surface temperature. The record at the station of Mauna Loa (MLO) was used as a proxy for the evolution of the globally averaged atmospheric CO2 concentration (Zeng et al., 2005).
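A minimal sketch of the lagged cross-correlation analysis described above, assuming de-seasonalized monthly anomalies of vegetation activity (or the CO2 growth rate) and a regional climate variable as NumPy arrays:

```python
import numpy as np
from scipy.stats import pearsonr

def lagged_cross_correlation(veg: np.ndarray, clim: np.ndarray,
                             max_lag: int = 24):
    """Pearson correlation of veg(t) with clim(t - lag) for lags 0..max_lag.

    Positive lags mean the climate variable leads vegetation activity.
    Returns arrays of correlation coefficients and two-tailed p-values.
    """
    r = np.full(max_lag + 1, np.nan)
    p = np.full(max_lag + 1, np.nan)
    for lag in range(max_lag + 1):
        v = veg[lag:] if lag else veg
        c = clim[:-lag] if lag else clim
        if len(v) > 3:
            r[lag], p[lag] = pearsonr(v, c)
    return r, p
```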
The baseline benchmark and the final scores

The reference minimum (baseline benchmark) concept applied in this study compares the skill of the model under investigation with the score of the metric obtained assuming a land biosphere that does not systematically contribute to any signal. For the C-cycle analyses, the baseline benchmark is set to be a biosphere without a terrestrial C-cycle ecosystem, implying that the signal or trend in the observations is driven by fossil-fuel and net ocean fluxes only (no-land case). Since this lower benchmark is applied based on the same TM for all the simulations, it further reduces the potential errors introduced by transport modelling uncertainties. Scaling the metric to the lower benchmark highlights the contribution of modelled land fluxes to matching the observed trait of the data under consideration. In other words, the final score is the metric for an individual trait, cleaned of the contribution of CO2 sources/sinks other than the modelled land fluxes. Only for the CO2 drawdown test (C-dd; Table 2) is the baseline benchmark set as a zero trend (i.e. there is no trend on land). A similar concept is applied for the vegetation activity traits: the lower benchmark is provided by the case with constant vegetation (no-change case). Only in the case of the timing of the vegetation onset and maximum (t-onset and t-max traits; Table 2) is the baseline benchmark set as the maximal possible difference (6 months).

The final global model metrics M for each trait are computed as follows: first, the metrics are computed for the CO2 signal at each monitoring station and, for the vegetation-related traits, for each Transcom3 region (M_or in Eq. 1). The same statistic is also applied to the null-model case to return the numerical metric value of the trait for the baseline benchmark case (M_base in Eq. 1). The original metric is then scaled to a new, normalized metric (score) between 0 and 1 according to Eq. (1), where 1 indicates a perfect data-model match and 0 indicates that the model is not able to perform better than a system without a representation of the land biosphere.

Secondly, the model performances are summarized in a polar plot that goes radially from 0 (less skilful model), in the centre, to 1 (skilful). The global scores are derived as follows: for the satellite-based scores, the global score is the average of the scores computed for each Transcom3 region, with the exception of the ranking-based scores, which are already computed at the global scale. For the CO2-station-based scores, the scores for each station were first averaged by latitudinal band, and the global score was then derived as the average of the scores computed by latitudinal band.
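Equation (1) is referenced but not shown in this text; a plausible skill-score form consistent with the description (score 0 at the baseline benchmark, 1 for a perfect data-model match, values below the baseline truncated to zero) would be:

```latex
% Assumed form of the normalized score, Eq. (1):
S = \max\!\left(0,\;
    \frac{M_{\mathrm{or}} - M_{\mathrm{base}}}
         {M_{\mathrm{perfect}} - M_{\mathrm{base}}}\right),
\qquad M_{\mathrm{perfect}} = 1 .
```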
Results and discussion

In the following, we discuss the results of the above framework using the example of the JSBACH model. The results for the individual traits are summarized in Fig. 2. Table A1 reports the results of the baseline benchmarking for comparison. Table 4 reports results per latitudinal band with regard to the CO2 traits, and global scores for the vegetation traits. In this section, an in-depth analysis of the mechanisms behind data-model mismatches is not performed; rather, we show what we can learn from observations and how we can use them to quantify data-model differences, as well as what the benchmarking framework can reveal about potential areas of model deficiency.

Seasonality of atmospheric CO2

The Taylor diagram (Fig. 3a) reports the data-model correspondence in terms of the phase and amplitude of the mean seasonal cycle (MSC). JSBACH is in general capable of simulating the phase of the seasonal cycle of CO2, with the exception of the stations south of the equator, which tend to be out of phase. At those stations, ocean fluxes dominate the signal, which can be seen in the large difference between the low original and the higher scaled metric (Table 4). The anticorrelation of the model's seasonality might further indicate either (or both) a high contribution of the signal from the Northern Hemisphere or effectively out-of-phase seasonal land C fluxes. The in-depth analysis of the regional contributions to the mean seasonal cycle (the MSCc trait) indicates that the Eurasian Boreal and Eurasian Temperate regions contribute slightly but systematically more to the signal at the stations above 50° N than inferred from observations (Fig. A2). At the southern stations, the model signal from the South American Temperate region clearly dominates the ocean signal (Fig. A2), suggesting that this region has a seasonal cycle of net land-atmosphere C fluxes inconsistent with the atmospheric record. This inconsistency leads to the low scores in the southern latitudinal band (Fig. 2, Table 4). The model clearly overestimates the amplitude of the MSC across the global network of stations, as can be seen in the latitudinal gradient of the amplitude of the mean seasonal cycle (Fig. 3b). Although uncertainties in the transport model could partially contribute to this, the steep drop of the CO2 concentration during the summer months (data not shown) is an indication that an overestimation of the spring C uptake (i.e. too large a global gross primary productivity) is responsible for the overestimation of the amplitude.

Fig. 2. Final scores of JSBACH for the traits of Tables 2 and 3. The polar plot goes radially from 0 (less skilful model), in the centre, to 1 (skilful). Since we only consider one model here, we refer to the threshold value of 0.5 to indicate good/high and less good/low model performances.

Seasonality of vegetation activity

Figure 4 shows that JSBACH simulates the time of onset with a systematic lag of 1 to 2 months over large areas of the Northern Hemisphere (NH). A major exception is the east and south of the North American Temperate region, where the model tends to lead the observed growing season. Given the monthly temporal resolution of the analyses, these results in the NH are still in line with the good performance in terms of the phase correspondence of the MSC of CO2 at the northern stations (Fig. 3a). However, the results indicate that there is room to improve the modelled phenology.

In large parts of the tropical latitudinal band, most of the modelled signal is flat, in contrast to the seasonal cycle recorded in the SeaWiFS data in seasonally dry tropical areas (Fig. 4). In these areas, which are dominated by rain-deciduous vegetation, the occurrence of one growing season is driven by the seasonality in rainfall. Similarly, the model signal in the Australian scrubland does not show any clear seasonality, in contrast to the observations. The flat tropical signal and the detected differences of up to 3-4 months in some southern regions are responsible for the low aggregated global model performance (Fig. 2, Table 4).
The vegetation classes contributing most to the lower performance are deciduous broadleaved forests and grasslands, probably mostly because of their geographical distribution and presence in drought-prone areas (see Fig. A3).

The timing-of-maximum analysis (t-max; data not shown) returns geographical patterns similar to the t-onset trait, but the differences are generally slightly larger. This partly relates to the less well defined nature of the timing of the maximum in regions with several months of full foliar coverage. At the global scale, there are no discernible differences between the two scores (Fig. 2, Table 4). These results show that the seasonality in the model is slightly lagged in time, but without strong distortions of the signal in the first part of the growing season in the Northern Hemisphere. An improvement of the phenology parameterization in areas dominated by raingreen vegetation in the seasonally dry tropical latitudinal band and in drought-prone shrublands is necessary. It is unclear from the analysis, however, whether the cause of this phenomenon is a too low sensitivity of raingreen vegetation to soil moisture stress or a too weak seasonal cycle of simulated soil moisture resulting from problems with the modelled soil hydrology.

Monthly CO2 trend

As an example of the trend in the monthly CO2 signal (MT), Fig. 5a displays the trend computed for the observed signal at the Alert station (ALT), together with the contributions of the net land and ocean C fluxes and the fossil-fuel emissions. For the selected northern stations, the observational analysis shows that, in particular in the summer months (June-July), the land is the dominant contributor to the tendency towards a more pronounced seasonal cycle. That is to say, increased monthly land C uptake, rather than changes in ocean fluxes and fossil-fuel emissions, is responsible for this trend. This feature is particularly strong in the period 1982-1991 and consistent across the selected stations, although the trend is not always statistically significant for all months (Fig. 5a). The monthly CO2 trend in the period 1992-1997 is less clear (data not shown), while a negative trend of the summer uptake occurs in 1998-2006, albeit weaker than the trend of 1982-1991. The latter pattern likely reflects the weakening of the positive land-warming effect on phenology during the growing season, which was particularly apparent in the 1980s (Myneni et al., 1997).

Using an additional TM simulation, we verified that the observed weakening of the negative trend in summer is indeed mainly land-induced and not induced by the interannually varying wind fields used in the transport model. The experimental results with constant wind (data not shown) confirmed that interannually varying transport can contribute to, but does not overwhelm, the land-based trends in monthly CO2 concentrations. Potential trends in the seasonality of fossil-fuel emissions (Blasing et al., 2005) are unlikely to strongly affect this trend (data not shown).

Figure 5b shows, as an example, that JSBACH is able to qualitatively reproduce the seasonal shape of the monthly CO2 trend and the detected weakening of the land C uptake, but it is not able to fully explain the observed signal (Fig. 2 and Table 4). Since the selected metric analyses the phase correspondence of the monthly trend, the imperfect match could be attributable to a divergence between the observed and modelled climate sensitivities of photosynthesis and respiration.
Interhemispheric gradient and long-term trend of atmospheric CO2 (Table 3)

The interhemispheric gradient trait (IHG), which evaluates the interannual variability of the net land-atmosphere C exchange, agrees well between JSBACH and the observations (results not shown, but see Fig. 2). However, the analysis of the long-term C balance trend (C-LTT) shows that JSBACH substantially overestimates the long-term trend compared to the observations (Fig. 6a), such that its score is actually lower than the baseline benchmark at all stations (Fig. 2, Table 4, Table A1). Since this data-model difference is unlikely to be due to uncertainties in fossil-fuel emissions or net ocean carbon fluxes (Le Quéré et al., 2009), this result points to a substantial underestimation of the net land C uptake.

Vegetation activity trend (Table 3)

Figure 6b displays the decadal patterns of the normalized annual vegetation activity time series (GIMMS-NDVI and JSBACH-FAPAR), excluding evergreen tropical forests, glaciers, and desert areas. There appears to be a good qualitative global agreement, suggesting that phenological limitations are not likely the cause of the aforementioned too low increase in land C. However, the good agreement of the global vegetation pattern is partly due to a compensation of errors (Fig. 7). The observed, spatially extensive positive trend in vegetation greenness in the period 1982-1991 is not fully captured by the model, because several areas show either no trend or even a negative trend (in parts of South America, Australia, and South East Asia). During the years 1992-1997, no clear geographical pattern is detected (data not shown). Several areas with an observed positive trend in the period 1982-1991 appear to have no or even negative trends afterwards. This phenomenon is only partly reproduced by JSBACH: in the northern boreal regions and in the Southern Hemisphere, particularly in the South American Temperate region, the negative trends are simulated.

The observed large-scale positive trends in vegetation activity during the period 1982-1991 are consistent with previous results (Myneni et al., 1997; Zhou et al., 2003). However, our analysis underlines that the observed positive warming effect on greening has not been persistent in time, but switched toward a neutral effect in the years 1992-1997. The observed negative pattern in the SH is generally consistent with the trends in evapotranspiration and in particular soil moisture reported in Jung et al. (2010), even though our analysis ends in 2006, while theirs ends in 2008. Several factors might contribute to the observed overall behaviour following the El Niño event in 1997. These include recurrent drought events, pest outbreaks, and severe fire events over several regions, responsible for the detected negative trends in boreal areas and the weakening of the summer C uptake that we reported in Sect. 3.2 (van der Werf et al., 2004; Angert et al., 2005; Goetz et al., 2005).

The low final score of JSBACH for this metric (Fig. 2, Table 4) is in particular the result of the recurrent large-scale negative trends in several areas of the SH and in South East Asia during the years 1982-1991 and 1998-2006 (Fig. 7).
The non-quantitative nature of this comparison precludes a too strict interpretation of the mechanisms behind the model-data differences. It is unclear whether these differences are caused by the phenological scheme of the model, the land-use change protocol, or other factors such as the drought response or fire processes. However, the disagreement in the sign of the trend can be attributed to model deficiencies, and the ranking metric provides a quantitative measurement of the detected disagreement.

As mentioned above, despite the spatial model-data disagreement, at the global scale the errors in the model compensate to return a positive vegetation activity trend. Assuming that vegetation activity is linked to plant productivity, the underestimation of the net land-C uptake in JSBACH (Sect. 3.3) is likely the consequence of a too high soil-C turnover rate.

Growing season response to local climate (Table 2)

The timing of the CO2 drawdown point (C-dd) and the onset of vegetation greening (t-onset) represent two independent proxies to measure the effects of land warming on spring phenology (Badeck et al., 2004; Menzel et al., 2006). There is a tendency towards earlier CO2 drawdown at the stations STM, BRW, and ALT (Fig. 8a), although this trend is statistically significant only for the latter two stations (P < 0.10). Such a negative trend in time is consistent with the advance of spring phenology induced by land surface warming (Fig. 8b): the correlation between climate variability and the timing of vegetation onset is significantly negative with annual temperature. Although this trait constitutes an emerging empirical relationship, the negative correlation, mainly in the boreal areas as clearly shown in Fig. 8b, is consistent with an earlier green-up in warmer years.

JSBACH does not show any discernible trend at any of the three stations (Fig. 8a, example for BRW), despite the fact that it returns a similar correlation pattern between the start of the growing season and local temperature (Fig. 8c), in particular in the extratropical northern areas. The final, global score for this trait is very low, despite the good visual match, because of the low cell-by-cell correspondence (Fig. 2, Table 4). These two analyses underline that the model, although it realistically simulates the beginning of the growing season (Sect. 3.1), is likely to respond too weakly to land surface temperature anomalies.
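A minimal sketch of the partial-correlation trait (cf. Fig. 8b-c) is given below: a second driver is linearly regressed out of both variables before correlating onset timing with temperature. The choice of precipitation as the control variable and the synthetic data are illustrative assumptions.

```python
# Sketch: partial correlation of onset timing with annual temperature,
# controlling for a second driver via the residual method.
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after linearly removing z from both."""
    z1 = np.column_stack([np.ones_like(z), z])
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
temp = rng.normal(size=30)
precip = rng.normal(size=30)
onset = -2.0 * temp + 0.5 * precip + rng.normal(size=30)  # earlier onset when warm
print(f"r(onset, T | P) = {partial_corr(onset, temp, precip):+.2f}")
```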
Interannual variability of vegetation activity and regional climate (Table 3)

The vegetation activity is analysed separately for each climatic driver. It is not possible to clearly disentangle temperature and precipitation effects. Nonetheless, the analysis suggests that the NDVI at high latitudes is mainly correlated with surface air temperature, where plant growth is mainly limited by temperature. An exception to this pattern is Eurasia Boreal (EAB), which shows a higher co-variation of vegetation activity with the precipitation pattern. The NDVI in regions dominated by shrubs/grassland is mainly driven by precipitation anomalies, in agreement with previous studies (Groeneveld and Baugh, 2007). Figure 9a-b presents, as examples, the computed cross-correlograms for Eurasia Temperate (EATe) and North American Boreal (NAB). The pattern returned in NAB, which is common to NATe and EUR, reveals a strong co-variation of vegetation activity and temperature in both data and model. However, the model behaviour suggests a strong correlation with temperature even in areas where the observations suggest a stronger co-variation with precipitation (measured as SPI), as for instance in EATe. One notable feature in these regions is that JSBACH shows a larger delay in the response of vegetation activity to SPI than observed, with differences of the order of 2-3 months (EAB, NA, SA). The final JSBACH score is good for this trait (Fig. 2, Table 4) when considering an average performance over all the regions. Low scores are obtained in precipitation-driven areas, mainly due to the different time lag of the response, which corresponds well with the aforementioned too low sensitivity of raingreen vegetation to seasonal drought.

The selected trait underlines the tendency of the system to respond in a specific way to external forcing/climate, either instantaneously or with some lag, and the metric is selected to be sensitive to model-data differences of phase rather than to absolute differences of climate and vegetation activity. An important aspect emerging from this simple trait is that the detected delay could hide an incorrect representation of the effects of soil drought on vegetation growth or soil hydrology. The same regions in which the model shows a delayed response to precipitation also show a persistent negative trend in vegetation activity (Sect. 3.4, Fig. 7). This pattern is evident in particular in South East Asia, South America Temperate, and Australia, which are mainly dominated by grasslands, shrublands, or crops. Even if other non-climatic effects at smaller spatial scales (i.e., land degradation and management practices, and fire recurrence) might affect vegetation cover and activity (Foley et al., 2005), the longer lag in the co-variation of vegetation and precipitation might be caused by the same model fault responsible for the mismatch in the vegetation trends. From a biogeophysical point of view, this model feature could also indicate a less reliable capability of the land surface model to return memory effects of the vegetation-precipitation-drivers relationships emerging in the real Earth system (Alessandri and Navarra, 2008; Hirschi et al., 2010) in a coupled Earth system model setting.

Table 3. List of atmospheric CO2 and vegetation activity traits used for the analyses at the interannual time scale (higher than annual frequency). A detailed explanation of metrics can be found in Sect. 2 and Appendices B and C.
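As an illustration of the cross-correlogram trait described above, the sketch below computes lagged correlations between a driver and NDVI anomalies, together with the two-tailed P < 0.05 band. The synthetic three-month lag mimics the delayed SPI response discussed in the text; it is not observational data.

```python
# Sketch: lagged correlation between a climate driver (e.g. SPI) and
# vegetation-activity anomalies, with a +/-1.96/sqrt(N) confidence band.
import numpy as np

def cross_correlogram(driver, ndvi, max_lag=6):
    """r(lag) for the driver leading ndvi by `lag` months (lag >= 0)."""
    out = {}
    for lag in range(max_lag + 1):
        a = driver[: driver.size - lag] if lag else driver
        b = ndvi[lag:]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(3)
spi = rng.normal(size=300)
ndvi = 0.7 * np.roll(spi, 3) + rng.normal(0, 0.5, 300)  # ~3-month delayed response
corr = cross_correlogram(spi, ndvi)
ci = 1.96 / np.sqrt(300)
for lag, r in corr.items():
    flag = "*" if abs(r) > ci else " "
    print(f"lag {lag}: r = {r:+.2f}{flag}")
```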
Interannual variability of CO2 growth rate and regional climate (Table 3)

The analysis of the CO2 growth rate revealed distinctly different behaviour in two latitudinal bands: in tropical latitudes, the correlation structure is similar between observations and model. However, JSBACH performs less well in particular where the CO2 growth rate is mainly correlated to temperature anomalies, as for instance in the North American Boreal and North American Temperate regions (Fig. 9d). It is noteworthy that this model deficiency occurs despite the good correspondence in terms of the vegetation-temperature relation (Fig. 9b). One potential reason for this phenomenon might be modelled temperature sensitivities of the ecosystem respiration parameterization, particularly of soil-C decomposition, that are inconsistent with the observations. However, it is also possible that the CO2 signal at the monitoring station is influenced by net land-atmosphere C fluxes in other extratropical regions, obscuring the local relationship. In general, the weak correspondence observed for the station of BRW is also observed for the station of ALT, while for the stations between 60° N and 25° N, no statistically significant co-variations were found in the observations (data not shown). In all stations where most of the contribution to the observed concentrations is from tropical regions (e.g., South American Tropical, Northern and Southern Africa), the results reveal a good correspondence of the pattern of the covariance. However, in contrast to the observations, the modelled correlation is weaker and sometimes not significant (Fig. 9c). A comparison of the time series of atmospheric CO2 and land surface climate (data not shown) reveals that the modelled time series exhibits more variability than observed and than can be explained by, for instance, ENSO-related events. The apparent global land-C sensitivity to land surface temperature anomalies (C-Clsens) computed for the model is not significant and very shallow (Fig. 10), in contrast to the observed sensitivity of 4.2 Pg C yr⁻¹ K⁻¹ (P < 0.01).

Fig. 7. Vegetation activity trend according to the Mann-Kendall statistics for the period of reference, reported for GIMMS-NDVI and modelled FAPAR. Red: positive monotonic trend (P < 0.10); blue: negative monotonic trend (P < 0.1); white: no significant trend; grey: areas masked out from the analysis (grid cells with dominance of tropical forests, desert, and ice).

Table 4. Final scores of atmospheric CO2 and vegetation activity. Atmospheric CO2 scores are reported per latitudinal band. The numerical values prior to the scaling to the baseline benchmark are reported in brackets where they differ from the final scores. For the acronyms refer to Tables 2 and 3.
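A minimal sketch of the C-Clsens trait is given below: the regression slope of the CO2 growth rate on land temperature anomalies, converted to Pg C yr⁻¹ K⁻¹. The 2.12 Pg C per ppm airborne-mass conversion factor and the synthetic data are assumptions of this illustration, not values taken from the paper.

```python
# Sketch: apparent land-C sensitivity as the slope of the CO2 growth
# rate regressed on global land temperature anomalies.
import numpy as np
from scipy import stats

PGC_PER_PPM = 2.12  # commonly used atmospheric-mass conversion (assumption)

rng = np.random.default_rng(4)
t_anom = rng.normal(0.0, 0.3, 25)                               # K
growth_ppm = 1.5 + (4.2 / PGC_PER_PPM) * t_anom + rng.normal(0, 0.3, 25)

res = stats.linregress(t_anom, growth_ppm)
sens = res.slope * PGC_PER_PPM                                  # Pg C yr^-1 K^-1
print(f"apparent sensitivity: {sens:.1f} Pg C yr^-1 K^-1 (p = {res.pvalue:.3f})")
```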
It is not possible to determine to what extent the missing fire module in the current version of the model or the use of a specific transport model contributes to the observed-modelled trait mismatch involving the CO2 growth rates. However, the very low sensitivity returned by the model is comparable to the baseline benchmark (assuming a neutral biosphere; see Table 4), suggesting a deficiency in the model rather than a conceptual error in the methodology.

The CO2 growth rate is the result of several concurrent biospheric and anthropogenic signals, with a dominance of land contributions coming from several areas on the global land. Because of this large contribution from land, and the detailed regional analysis we have performed, the CO2-growth-rate-based traits are a useful diagnostic to indicate potential conflicts between model and observations that deserve further investigation, even though a process attribution is not possible without the use of further data streams.

As suggested by Rafelski et al. (2009), a similarity in the climate sensitivity of the underlying C processes at interannual and decadal time scales is likely to exist and to be mostly attributable to the land biosphere. This would imply that the poor results obtained from the JSBACH model in the land-C sensitivity trait could also indicate a potential model deficiency at longer temporal scales with respect to the net land-C exchange.

Common to most evaluation schemes, data and model errors are not considered explicitly in the mathematical formulation of the metrics. This constitutes a major limitation of this and other evaluation frameworks. As stated in the introduction, uncertainties in observations or reference datasets are not always provided or quantified, and this poses challenges for the computation of model-data/model-model differences and the significance of these distances. Structural model errors can only be assessed with a dedicated study investigating the effect of alternative model structures on surface fluxes, which is beyond the scope of a benchmarking scheme. Conversely, the scheme proposed here can help to quantify the model structural error if different model variants are available. Where possible, we have minimized the conceptual difference between model and observation by only considering those features (traits) that can be robustly compared, and have isolated the contribution of the land versus oceanic and anthropogenic influences. The proposed evaluation framework defines bounded metrics that allow stating whether and how much the model adds information to the simulation of carbon cycle trends, and thus whether the model lies in the range of acceptable performances. This provides an additional constraint on the model performance.
Concluding remarks

Pertinent information on current C-cycle-related processes contained in the atmospheric CO2 record and the satellite-based records of vegetation activity was compiled and synthesized into easily identifiable traits and a framework of intuitively comparable metrics. The results of the exploratory analysis of C-related processes and climate variability were presented with emphasis on the robustness of the information content of the observations, making use of both atmospheric CO2 concentration and vegetation activity at the appropriate temporal and spatial scale of a global land surface model. The results show that the simultaneous use of the atmospheric CO2 record and satellite-based vegetation activity as two independent datasets helps to identify the sources of data-model mismatch in terms of regional sources of error, and to detect potential error compensation. In particular, the separate analysis of atmospheric CO2 and vegetation activity circumvents the problem that the atmospheric CO2 retains the net effect of both vegetation activity (i.e., photosynthesis) and the ecosystem C release response.

The use of a baseline benchmark with a clear ecological meaning was shown to be a valuable approach to provide a more robust and objective quantification of data-model disagreement. In addition, scaling the metric against a reference case allows more independence from the selection of a specific metric and avoids misleading interpretations of the numerical score.

A key component of the evaluation framework developed here is that it is designed to be suitable for, and sensitive enough to evaluate, global land surface models both in offline mode, i.e., when driven by observed climate variability, and fully coupled to Earth system models with a different climate and climate variability. Therefore, in addition to providing metrics for key traits that describe climatological mean variables, we use a range of correlational metrics to analyse the climate sensitivity of key carbon cycle traits. We demonstrate that these metrics provide insight into the realism of the carbon cycle simulation that goes beyond an evaluation of mean states and trends. In this paper, we described the framework and applied it to an example model. The next step will be the use of this framework to evaluate online and offline versions of JSBACH. Nonetheless, even the application of the benchmarking framework to the evaluation of the JSBACH model in offline mode already allows certain conclusions particular to the model:

- The traits at seasonal time scales showed that high-latitude terrestrial ecosystem patterns are a major strength of JSBACH, with good performance both in terms of mean vegetation activity and the mean seasonal CO2 cycle at the high-latitude stations. Lower performance of the mean pattern of phenology occurs in the Southern Hemisphere, in particular in shrub-dominated areas and in deciduous broadleaved forests in Southern Africa. A systematic overestimation of the seasonal cycle of CO2 points to a too high magnitude of the seasonal land gross C fluxes.
- The observed weakening of the positive warming effect on vegetation in the NH and the trend toward a neutral/negative effect in the SH, pronounced in the last decade, are not fully captured by the model, in either the CO2 or the vegetation activity traits. The analysis of the vegetation-climate covariance revealed that the modelled ecosystem response is primarily driven by temperature anomalies, suggesting that this discrepancy might be associated with an incorrect sensitivity of vegetation to precipitation anomalies at interannual time scales.

- While the analysis of the CO2 growth rate and climate drivers returned a weak covariation of the atmospheric signal with climate in selected regions on land, the model deviates strongly from the observations both in terms of the long-term trend of atmospheric CO2, and therefore the implied net land-C uptake, and the apparent interannual land-C sensitivity to temperature anomalies. The combined analysis of CO2 with the vegetation trend analysis suggests that a too high soil-C turnover rate might be responsible for the underestimation of net land-C uptake.

Appendix A

Computation of the SPI index

The SPI is the transformation of the precipitation time series into a standardized normal distribution (z distribution). First, a gamma distribution is fitted to the cumulative precipitation frequency distribution, i.e., the gamma distribution is used to fit the empirical frequency of the data. Since the gamma distribution is undefined for null values of the variable, the cumulative probability has been corrected according to Lloyd-Hughes and Sanders (2002). Using an equiprobable transformation, the cumulative probability function of the gamma distribution is then transformed into the normal distribution function.

Appendix B

In addition to classical statistics such as the Pearson correlation coefficient (r), the squared correlation coefficient (r²), the cross correlation, and the standard deviation (σ), metrics were selected as combinations of some of the previous statistics and built ad hoc for the specific trait analysed.

Taylor skill metric, as reported in Taylor (2000):

S = 4(1 + R)⁴ / [(σf + 1/σf)² (1 + R0)⁴]. (B1)

In this metric, more weight is given to the capability of the model to return the right phase of the trait rather than the amplitude. σf = σm/σ0 is the ratio between the modelled standard deviation and the observed standard deviation of the trait of interest, R is the model-data correlation, and R0 is the maximum correlation achievable, assumed to be 1.

Comparison of the variability of the signal via the standard deviation (B2).

Linear differences metric:

S = 1 − |O − M|/6, (B3)

where O is the observed value and M is the modelled value. It is applied to the most frequent month of the variable observed (0 when the maximum difference of the variables is six months, 1 when no differences occur).

Single value comparison metric, where O is the observed value and M is the modelled value. With the exception of the Taylor statistic, all the other metrics are symmetric.

Map cell-by-cell comparison metric: the ranking metric specifies the number of agreement cells against the total number of observed cells belonging to a specific class. The final score is the average over the selected classes. Three classes were used in our framework: no statistically significant relationship (i.e., no correlation, no trend detected), a positive relationship (i.e., positive correlation/trend), and a negative relationship (i.e., negative correlation/trend).
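As an illustration of the map cell-by-cell ranking metric just described, the following is a minimal Python sketch under the stated three-class coding. The synthetic maps and the 60% agreement rate are illustrative, not results from the paper.

```python
# Sketch: map cell-by-cell ranking metric. Cells are coded as -1 / 0 / +1
# (negative / no / positive trend or correlation); the score is the
# per-class fraction of model cells agreeing with the observations,
# averaged over the three classes.
import numpy as np

def ranking_score(obs, mod, classes=(-1, 0, 1)):
    scores = []
    for c in classes:
        in_class = obs == c
        if in_class.any():
            scores.append(np.mean(mod[in_class] == c))
    return float(np.mean(scores))

rng = np.random.default_rng(5)
obs = rng.choice([-1, 0, 1], size=(45, 72))
mod = np.where(rng.random((45, 72)) < 0.6, obs, rng.choice([-1, 0, 1], (45, 72)))
print(f"ranking score: {ranking_score(obs, mod):.2f}")
```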
In terms of the lower benchmark, the case of constant vegetation has been used. This is equivalent to analysing the returned trend against a null hypothesis of non-changing vegetation. The average score obtained under this setting is equal to 0.3, considering the cell-by-cell agreement for each single class. The score of the model is hereafter scaled to this lower benchmark.

Fig. 2. Global atmospheric CO2 and vegetation activity scores for the JSBACH model according to the list of traits in Tables 2 and 3. The polar plot goes radially from 0 (less skillful model), in the centre, to 1 (skillful). Since we only consider one model here, we refer to the threshold value of 0.5 to indicate good/high and less good/low model performances.

Fig. 5. Monthly CO2 trend at the station of Alert (Canada, ALT) for the periods 1982-1991 and 1998-2008. (a) Observations, as well as the simulated contributions from fossil-fuel emissions and net ocean fluxes (**P < 0.01). (b) Monthly record for observations and modelled data. Negative values for a specific month indicate a decrease of the seasonal atmospheric CO2, indirectly linked to an increase of biospheric C uptake, and vice versa.

Fig. 6. (a) Long-term pattern of atmospheric CO2 at the station of Mauna Loa (MLO); (b) normalized annual values of vegetation activity (excluding tropical, desert, and ice areas) for GIMMS-NDVI and modelled FAPAR. Period of reference 1982-2006. Dotted lines represent the linear trend computed on the normalized data (qualitative analysis).

Fig. 8. (a) Atmospheric CO2 drawdown points (C-dd) as computed at the station of Barrow (BRW) for observations and model. (b) and (c) Partial correlation between the time of onset and the mean annual temperature computed for observations and JSBACH, for the period 1998-2005. Red: positive correlations (P < 0.1); blue: negative correlations (P < 0.1); white: no significant correlations; grey: areas masked out from the analysis (see text).

Fig. 9. (a) Cross correlation between the precipitation pattern (SPI) and vegetation activity in Eurasia Temperate (EATe). (b) Cross correlation between temperature and vegetation activity in North American Boreal (NAB). (c) Atmospheric CO2 growth rate at the station of Mauna Loa (MLO) and the temperature pattern in South American Tropical (SATr). (d) Atmospheric CO2 growth rate at Barrow (BRW) and the temperature pattern in NAB. Dotted lines are confidence intervals at a significance level of P < 0.05 (two-tailed statistics).
Fig. 10. Apparent land-C sensitivity: CO2 growth rate at Mauna Loa (MLO) versus global land surface temperature. The regression is significant at P < 0.01 for the observations. Annual data points are omitted for clarity.

Fig. A1. Example of the determination of the time of onset of the seasonal signal, zero centred, of the vegetation activity (SeaWiFS-FAPAR). Time series extracted from one grid cell located in North American Boreal.

Table 1. List of the selected atmospheric CO2 monitoring stations and satellite-based vegetation activity datasets used in the analyses, as well as the time period used for the elaborations.

Table 2. List of atmospheric CO2 and vegetation activity traits used for the analyses at the seasonal time scale. A detailed explanation of metrics can be found in Sect. 2 and Appendices B and C. * Trait applied to the stations ALT, BRW, STM.

Table A1. Final scores of atmospheric CO2 and vegetation activity for the baseline benchmark. Atmospheric CO2 scores are reported per latitudinal band.
2018-04-14T20:34:35.041Z
2013-06-25T00:00:00.000
{ "year": 2013, "sha1": "a3a34b6a23d8b9612e46ef911dd8d0acf2a8e2a7", "oa_license": "CCBY", "oa_url": "https://www.biogeosciences.net/10/4189/2013/bg-10-4189-2013.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a3a34b6a23d8b9612e46ef911dd8d0acf2a8e2a7", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
249889740
pes2o/s2orc
v3-fos-license
Understanding the $sp$ magnetism in substitutional doped graphene

Defect-induced magnetism in graphene has been predicted theoretically and observed experimentally. However, there are open questions about the origin of the magnetic behavior when substitutional impurities with $sp$ electrons are considered. The aim of this work is to contribute to the understanding of impurity-induced spin magnetism in doped graphene systems. Thus, the electronic structure and spin magnetic moments for substitutional doped graphene with impurities from groups IIIA (B, Al, and Ga) and VA (N, P, As, Sb, and Bi) of the periodic table were obtained within the framework of density functional theory. The nature of the magnetic ground state was determined from calculations of the total energy as a function of the spin magnetic moment using the fixed spin moment method. We show that the spontaneous magnetization in the studied systems arises from an electronic instability by the presence of a narrow impurity band at the Fermi level. Furthermore, we found that the emergence of spin polarization requires the impurity to introduce an extra electron to the graphene lattice and that the impurity-carbon hybridization is close to the $sp^3$ geometry. These features reveal that the charge doping sign and the hybridization degree play a fundamental role in the origin of $sp$ magnetism in substitutional doped graphene.

that only a few of those impurities induce magnetism (P, N). The first case is P-graphene, which has spontaneous polarization with a spin magnetic moment of 1.0 µB per impurity [9-11, 17, 19]. Whereas the magnetic structure shows a local curvature around the P atom, a paramagnetic planar structure is found with a higher energy than the distorted one [9]. A second case is N-graphene, which is particularly interesting because DFT calculations with the generalized gradient approximation (GGA) for the exchange-correlation functional show a paramagnetic character [6,7,9], while calculations using a meta-GGA functional reveal magnetism [8]. This raises questions about whether meta-GGA functionals should be used to describe magnetic materials. Interestingly, it was recently shown that the use of meta-GGA does not improve the magnetic description of itinerant magnets over GGA functionals [20]. On the other hand, two representative experimental reports have confirmed that doped graphene with impurities of P [21] and N [22] exhibits ferromagnetism. By thermal annealing of fluorographite in P vapor, samples of doped graphene were synthesized with a high P concentration (2.86-6.40 at.%) [21]. Magnetic measurements performed by means of a superconducting quantum interference device magnetometer evidenced a magnetic ordering driven by the presence of P groups. In the case of N impurities, Miao et al. studied samples of doped graphene prepared by a self-propagating high-temperature synthesis at different concentrations (up to 11.17 at.% of N) [22]. Their results revealed ferromagnetism with a Curie temperature greater than 673 K for samples with a high N content. Nevertheless, these experimental samples contain different functional groups, which makes it difficult to know precisely the origin of the magnetic order. In addition, as far as we know, experimental evidence of magnetism from a single substitutional impurity in graphene is still lacking. It is important to note that the nature and origin of impurity-induced sp magnetism in substitutional doped graphene have not been fully explained [8-11, 17, 18].
Well-known arguments based on Lieb's theorem [23] about a sublattice imbalance between π states are insufficient to explain why only some impurities could induce magnetism in graphene. To the best of our knowledge, there are no further reports of spontaneous magnetization in other substitutional doped graphene structures with sp impurities, as there are for P and N [6,7,12-16]. Therefore, the aim of this work is to contribute to the further understanding of impurity-induced sp magnetism in substitutional doped graphene.

In this work, we present results based on first-principles calculations for the electronic and magnetic properties of different doped graphene systems with B, Al, Ga, N, P, As, Sb, and Bi as impurity atoms. First, from the analysis of the electronic structure in the paramagnetic state, we notice the emergence of an impurity band at the Fermi level and the role of its bandwidth in the rise of spontaneous magnetization. A narrow impurity band causes an electronic instability which favors a magnetic state, but the spontaneous magnetization requires the impurity to introduce an extra electron into the graphitic lattice. Second, we analyze the role of the charge-doping sign and the impurity-carbon hybridization in the impurity bandwidth and, consequently, in the emergence of magnetism in substitutional doped graphene.

This paper is organized as follows: Sec. II gives a description of the computational details applied in our work. The results are presented and discussed in Sec. III, which is divided into two parts; the first one presents the electronic and magnetic properties of substitutional doped graphene. In the second part, we examine and discuss the role of the impurity-carbon hybridization geometry in the emergence of magnetism. Finally, our main findings are summarized in Sec. IV.

II. COMPUTATIONAL DETAILS

Total energies, electronic structure, and spin magnetic moments were determined by solving self-consistently the Kohn-Sham equations within the framework of plane waves and pseudopotentials as implemented in the QUANTUM ESPRESSO code [24,25]. Core electrons were replaced by ultrasoft pseudopotentials from the GBRV library [26], which has been optimized for use in high-throughput DFT calculations. Valence electron states were expanded in plane waves with a kinetic energy cutoff of 40 Ry and a charge density cutoff of 320 Ry. The exchange-correlation functional was treated within the Perdew-Burke-Ernzerhof [27] parametrization of the GGA. In order to simulate an isolated layer, we left at least 15 Å of vacuum space between periodic images. We simulated substitutional impurities by replacing one carbon atom of the pristine graphene lattice. A supercell of 8 × 8 unit cells was used in order to simulate an impurity concentration in the dilute limit (c < 1.0 at.%). We fixed the supercell lattice constant of each doped system at the corresponding ground-state lattice constant of pristine graphene. During all the structural calculations, the atomic positions were relaxed with a Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm until the internal forces were less than 0.01 eV/Å. For the structural optimization of the supercells, the Brillouin zone was integrated with a Monkhorst-Pack mesh [28] of 3 × 3 k points, while for the electronic structure and the spin magnetic moments we used a 9 × 9 mesh with a smearing of 0.002 Ry to describe accurately the electronic and magnetic properties.
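To make the supercell setup concrete, the following is a minimal sketch (pure NumPy) of constructing an 8 × 8 graphene supercell with one substitutional impurity and 15 Å of vacuum. The lattice constant of 2.46 Å and the choice of P as the impurity are illustrative assumptions; this is not the authors' actual input.

```python
# Sketch: 8x8 graphene supercell with one substitutional atom.
import numpy as np

a = 2.46                                   # graphene lattice constant (A), assumed
a1 = a * np.array([1.0, 0.0, 0.0])
a2 = a * np.array([0.5, np.sqrt(3) / 2, 0.0])
basis = [np.zeros(3), (a1 + a2) / 3.0]     # two carbon atoms per unit cell

n = 8
symbols, positions = [], []
for i in range(n):
    for j in range(n):
        for b in basis:
            symbols.append("C")
            positions.append(i * a1 + j * a2 + b)

symbols[0] = "P"                           # replace one C by the impurity
cell = np.array([n * a1, n * a2, [0.0, 0.0, 15.0]])  # 15 A vacuum along z

print(len(symbols), "atoms; impurity concentration:",
      f"{100.0 / len(symbols):.2f} at.%")  # ~0.78 at.%, i.e. the dilute limit
```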
The effect of spin-orbit coupling (SOC) on the electronic and magnetic properties was analyzed for graphene with substitutional N, P, As, Sb, and Bi, which showed no significant difference in the bands close to the Fermi level and in their magnetic moments with respect to the case without SOC (see the Supplemental Material [30]). Henceforth, we present only results neglecting the SOC for all the studied systems.

Figure 1 shows the paramagnetic band structures of pristine and substitutional doped graphene. At the top of Fig. 1, the folded band structure of pristine graphene shows the characteristic Dirac cone around the Fermi level formed at the K point by the well-known π and π* bands. With doping, the Fermi level is moved into the occupied π band for the hole-doped case (B, Al, Ga) or into the unoccupied π* band for the electron-doped case (N, P, As, Sb, Bi). The formation of an impurity band at the Fermi level induced by the substitutional doping is highlighted (band in red in Fig. 1). From the band structure, in the hole-doped cases the impurity band has a similar behavior regardless of the impurity, whereas for the electron-doped case the impurity band depends on the specific impurity. This impurity band can be characterized by its bandwidth dispersion W_imp, and an examination of the bandwidth for all cases reveals that the electron-doped systems, except for N-graphene, have the smallest bandwidths, with a tendency to increase with atomic number (see Table I). These results underline an electron-hole asymmetry in the electronic structure of doped graphene.

To gain a better understanding of the electron-hole asymmetry in the electronic structure of doped graphene, we calculated the paramagnetic density of states (DOS) of Al-graphene and P-graphene. Figure 2 shows the total DOS as black solid lines, whereas the projected DOSs with σ and π character are shown by red and blue solid lines, respectively. The σ states result from the hybridization of the s, p_x, and p_y orbitals, whereas the π states are delocalized states formed by the p_z orbitals. For comparison, the shaded areas in Figure 2 correspond to the DOS of pristine graphene. The total density of states shows significant differences between the hole-doped and electron-doped cases. For Al-graphene, the electronic states undergo an energy shift with respect to the electronic bands of pristine graphene, as expected for hole doping. In contrast, P-graphene shows the emergence of a prominent peak at the Fermi level. An analysis of the projected σ bands of the DOS reveals that the impurity induces new states in both the hole-doped and electron-doped cases. For Al-graphene these states are mainly localized just below the Fermi level in the range of -4.0 to -2.0 eV, whereas for P-graphene these states are found in the range of 3.0 to 6.5 eV in the unoccupied bands. Closer inspection of the projected π bands of the DOS shows that the sharp peak at the Fermi level in P-graphene comes from these states. Al-graphene also presents π states at the Fermi level. The main difference is that the sharp peak in the electron-doped case is narrower than in the hole-doped case, as observed in the band structures.

In order to investigate the magnetic ground state of each substitutional doped structure, we have carried out calculations of the total energy as a function of the spin magnetic moment M using the fixed spin moment (FSM) method [32] (see Table I).
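As a sketch of how such FSM curves can be post-processed, the snippet below fits the fourth-order energy expression E(M) = E0 + αM² + βM⁴ and evaluates the narrow-band Stoner condition discussed in the next paragraphs. The toy FSM energies, the bandwidth value, and half filling (N0 = 1) are illustrative assumptions, not results from the paper.

```python
# Sketch: fit E(M) = E0 + alpha*M^2 + beta*M^4 to FSM data and apply the
# rectangular-band Stoner condition I_S * N0 / W_imp > 1, using the
# relation I_S = 1/N(E_F) - alpha quoted in the text.
import numpy as np

M = np.linspace(0.0, 1.5, 16)                   # spin moment (mu_B)
E = -0.04 * M**2 + 0.02 * M**4                  # toy FSM energies (eV)

coeffs = np.polyfit(M, E, 4)                    # [c4, c3, c2, c1, c0]
beta, alpha = coeffs[0], coeffs[2]              # even terms carry the physics

W_imp = 0.032                                   # impurity bandwidth (eV), assumed
N0 = 1.0                                        # half filling
N_EF = N0 / W_imp                               # rectangular-band DOS at E_F
I_S = 1.0 / N_EF - alpha
verdict = "magnetic" if I_S / W_imp > 1 else "paramagnetic"
print(f"alpha = {alpha:+.3f} eV/mu_B^2, I_S/W_imp = {I_S / W_imp:.2f} -> {verdict}")
```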
It is important to underline that, aside from the observed magnetism in P-graphene, we found that heavier elements from the VA group such as As, Sb, and Bi also induce a net magnetic moment in graphene. The study of sp magnetism in substitutional doped graphene has demonstrated the correlation between the impurity bandwidth and the origin of spontaneous magnetization, where small values of W_imp favor the formation of a magnetic moment [19]. In the group of impurities under study, P-graphene has the smallest bandwidth (32 meV), thus showing magnetism, as previously reported [9-11, 17, 19]. Therefore, we can anticipate the turning on of magnetism in terms of a Stoner-type criterion [34]. A quantitative estimation of I_S can be obtained from fitting the fourth-order polynomial energy expression to the FSM curves previously described. The coefficient of the quadratic term, α, is related to the Stoner parameter as I_S = 1/N(E_F) − α [34]. For our analysis, we can approximate the impurity band as a narrow rectangular band [19]. This approach allows writing the paramagnetic DOS at the Fermi level as N(E_F) = N_0/W_imp [5,19,35], and consequently we obtain an alternative expression for the Stoner criterion, I_S N_0/W_imp > 1. This expression allows us to rationalize the origin of the magnetism in substitutional doped graphene with a Stoner-type condition in terms of the impurity bandwidth W_imp.

Figure 4(a) shows the estimated value of the Stoner parameter I_S in the rectangular-band approximation and the impurity bandwidth W_imp obtained from the paramagnetic band structure. Except for the anomalous case of N-graphene, two behaviors are clearly distinguished for the hole-doped and electron-doped systems. Interestingly, under this approach the parameter I_S seems to depend on the type of impurity, just like W_imp. Figure 4(b) shows the result of applying the reformulated Stoner criterion for all the studied cases for a half-filled band (N_0 = 1.0). Whereas the hole-doped systems and N-graphene do not fulfill the Stoner condition of magnetism, the rest of the electron-doped cases satisfy the relationship I_S/W_imp > 1. The numerical data for the estimated Stoner parameter and the Stoner-type condition are presented in Table I. From this quantitative analysis, it is clear that the spontaneous polarization in substitutional doped graphene is driven by small values of W_imp, as long as W_imp < I_S. Magnetism induced through an electronic instability by the presence of a sharp singularity at, or close to, the Fermi level is a well-known effect, which has been observed in graphene with adsorbed hydrogen [2,3] as well as in monolayers of GaSe [36], α-SnO [37], InP_3 [38], and In_2Ge_2Te_6 [39] upon hole doping.

From the values reported in Table I for the electron-doped systems, we found that systems with an impurity bandwidth smaller than the exchange splitting show an integer magnetic moment, and those with an impurity bandwidth larger than the exchange splitting have a fractional magnetic moment. Thus, the magnetic moment exhibited by each magnetic system results from the relation between W_imp and ∆_s, so that we have integer values as long as W_imp < ∆_s and fractional values for W_imp > ∆_s.

In this section, we explore the role of the structural and bonding features in the magnetic behavior of doped graphene. To gain more physical insight into the origin of the narrow impurity band that gives rise to spontaneous magnetization, we take B-graphene and N-graphene as case studies.
This choice follows mainly from the fact that N-graphene is an anomalous system compared with the other electron-doped cases, since it is nonmagnetic. Both systems are planar materials with a nonmagnetic phase. In contrast, the magnetic systems have a trigonal pyramid-type geometry with a bond angle of less than 109.5° (see Table I). This reflects an impurity atomic size larger than that of the carbon atom, which causes significant distortions in the lattice structure. Meanwhile, the atomic radii of B and N, similar to that of the carbon atom, preserve the planar structure of graphene. These structural differences can be interpreted as different types of hybridization between the impurity and the carbon atoms in graphene, where the B-C and N-C bonds present an sp² hybridization, while in the other cases the impurity-carbon bond corresponds to an sp³-type hybridization.

With the aim of analyzing the structural effect of the impurity-carbon bond, we conducted simulations of the B-graphene and N-graphene systems with different impurity-carbon hybridizations by artificially changing the position of the impurity atom with respect to the graphene plane (height h). Thus, by fixing different values of h we can analyze the transition from an sp² (h = 0, θ = 120°) to an sp³ (h > 0.5 Å, θ < 109.5°) hybridization. As discussed above, the bandwidth dispersion W_imp of the impurity band plays a fundamental role in the origin of spontaneous magnetization. Therefore, W_imp is a useful parameter for analyzing the evolution of the magnetism. Figure 6(a) presents the values of W_imp obtained for different h values from 0 to 1.0 Å. The evolution of W_imp reveals different trends for B-graphene and N-graphene. B-graphene has a slight change in its bandwidth, whereas N-graphene shows a significant decrease followed by an increase of W_imp, with a critical point at 0.75 Å where the smallest bandwidth is found. These results anticipate that in B-graphene, regardless of the impurity-carbon hybridization geometry, the system remains paramagnetic. In contrast, we found that N-graphene could exhibit magnetism as long as W_imp is small enough. To confirm these expectations, spin-polarized calculations were performed for both systems.

The behavior of the heavier VA impurities can easily be extrapolated from the results presented in Fig. 6 for N-graphene. The increase in atomic number in the VA column means an increase in the atomic radii, which leads to different impurity heights and their corresponding impurity-carbon hybridization angles and, consequently, to changes in W_imp and M. As the electron-doped systems have impurity heights and impurity-carbon angles that correspond to cases with sp³ hybridization, they are magnetic (see Table I). In B-graphene, when h is increased, the charge density around the impurity changes only slightly, but in N-graphene the electron charge density exhibits a lattice symmetry breaking over sublattice A or B (see Fig. 7). As we see in Fig. 6(b) for N-graphene, at h = 0.5 Å the system becomes magnetic. In Fig. 7, it is interesting to note that for this h value the charge density distribution shows a clear difference between the two sublattices. Thus, we found that for N-graphene at h = 0.5 Å the electron charge density is mainly concentrated in one sublattice, whereas in the adjacent sublattice it is reduced. At the critical point where the spin magnetic moment is maximum for N-graphene, at h = 0.75 Å, the charge density is concentrated only on the impurity and the carbon atoms belonging to the sublattice adjacent to the impurity.
This corresponds to a sublattice imbalance of the π states. Since this effect does not occur in B-graphene, we can see why its impurity bandwidth is almost insensitive to the change in the hybridization geometry. These features again reveal the electron-hole asymmetry, now in the impurity band charge density as a function of the different impurity-carbon bond hybridizations.

To conclude, the impurity band charge densities for P-graphene, As-graphene, Sb-graphene, and Bi-graphene are shown in Fig. 8. As previously discussed, P-graphene and As-graphene are systems with fully spin-polarized bands, whereas Sb-graphene and Bi-graphene have partially spin-polarized bands in the magnetic state. These features are related to the degree of imbalance between the π states in the impurity band charge density: in the first case there is a full imbalance over each sublattice, and in the second the imbalance of the π states is partial. From these results, we show that for the electron-doped systems the magnetism is associated with a sublattice imbalance of the π states.

In the hole-doped systems, the impurity introduces σ states located below the Fermi level. In contrast, in the electron-doped systems, the impurity mainly introduces π states, which are located at the Fermi level. With respect to the characterization of the magnetic states, the FSM curves showed that all the hole-doped cases and N-graphene have a nonmagnetic behavior, whereas the rest of the electron-doped systems have a net magnetic moment. The magnetic band structures for graphene doped with P and As show fully spin-polarized bands, whereas those of Sb and Bi exhibit a partial spin polarization. Interestingly, we found that the magnetic behavior is related to the degree of sublattice imbalance of the π states, where P-graphene and As-graphene exhibit a full sublattice imbalance, while Sb-graphene and Bi-graphene exhibit only a partial one.

Fig. 8. Isosurface plots of the contribution of the impurity band to the charge density for P-graphene, As-graphene, Sb-graphene, and Bi-graphene. The isosurface is shown at 10⁻³ e/Å³.

From the analysis of the electronic structure for N-graphene at different hybridization geometries, we showed that the electron-doped systems are magnetic as long as they present an sp³-type hybridization. The analysis of the impurity band charge density in N-graphene revealed that the different impurity-carbon bond geometries are related to a sublattice imbalance of the π states in the emergence of spin polarization. Thus, the impurity-carbon hybridization is closely related to the emergence of magnetism in the electron-doped systems. Finally, using a reformulated Stoner condition of magnetism for a narrow band, we showed that the spontaneous magnetization is driven by an electronic instability associated with a narrow impurity band at the Fermi level. Thus, we found that a narrow impurity band (W_imp < 250 meV) is required in order to obtain spin polarization in doped-graphene systems. We note that this feature is present in the studied electron-doped systems with an impurity-carbon hybridization close to the sp³ geometry. From the present analysis, considerable insight has been gained with regard to the origin of the impurity-induced sp magnetism in substitutional doped graphene.

The total energy variations as a function of the quantization direction exhibit values smaller than 0.01 meV/supercell. The results reported in Table I for the magnetic case correspond to the quantization axis along the z direction.
There is no significant difference from the magnetic moment results presented in the main text, and the general trends are preserved, considering that they were calculated with the same computational parameters but taking SOC into account and using fully relativistic pseudopotentials [1].
2022-06-22T01:16:26.531Z
2022-06-17T00:00:00.000
{ "year": 2022, "sha1": "5b133c874d7b6723585ca2114ae3ea1328b06450", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5b133c874d7b6723585ca2114ae3ea1328b06450", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
267769999
pes2o/s2orc
v3-fos-license
The large-scale environment of 3CR radio galaxies at z < 0.3

The question of whether and how the properties of radio galaxies (RGs) are connected with the large-scale environment is still an open issue. For this work we measured the large-scale galaxies' density around RGs present in the revised Third Cambridge Catalog of radio sources (3CR) with 0.02 < z < 0.3. The goal is to determine whether the accretion mode and morphology of RGs are related to the richness of the environment. We considered RGs at 0.05 < z < 0.3 for a comparison between optical spectroscopic classes, and those within 0.02 < z < 0.1 to study the differences between the radio morphological types. Photometric data from the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS) survey were used to search for "red sequences" within an area of 500 kpc radius around each RG. We find that 1) RGs span a large range of local galaxies' density, from isolated sources to those in rich environments, 2) the richness distributions of the various classes are not statistically different, and 3) the radio luminosity is not connected with the source environment. Our results suggest that the RG properties are independent of the local galaxies' density, which is in agreement with some previous analyses, but in contrast with other studies. We discuss the possible origin of this discrepancy. An analysis of a larger sample is needed to put our results on a stronger statistical basis.

Introduction

Studies of active galactic nuclei (AGN) have a major role in astrophysics. Their energetic processes are believed to be fundamental in the evolution of their host galaxy (e.g., Ferrarese & Merritt 2000, Häring & Rix 2004, Vanden Berk et al. 2006) and the general environment they inhabit (e.g., Silverman et al. 2009, Kollatschny et al. 2012). In this context, radio galaxies (RGs) are an ideal laboratory to investigate the link between the activity of the central engine and the large-scale structure in which they live. In particular, it is important to establish whether the properties of RGs, from their radio morphology to their accretion efficiency, are linked to their environment.

Extended extragalactic radio sources can be classified, following Fanaroff & Riley (1974), as edge-darkened FR I, when they are dominated by the emission of two-sided jets, or edge-brightened FR II, if they are dominated by two lobes. Although it has been proposed that this dichotomy arises from differences in the properties of the central engine (e.g., Baum et al. 1995, Zirbel & Baum 1995), it might also be connected with their environment (e.g., Gopal-Krishna & Wiita 2001).

In addition to the radio dichotomy, optical studies have identified different AGN types according to their spectroscopic appearance. Laing et al. (1994), following the original suggestion by Hine & Longair (1979), found that FR IIs can be put into two subclasses. They proposed a separation into high excitation galaxies (HEGs), defined as galaxies with [O III]/Hα > 0.2 and an equivalent width (EW) of [O III] > 3 Å, and low excitation galaxies (LEGs). Tadhunter et al. (1998) found a similar result from an optical spectroscopic study of the 2Jy sample, in which a subclass of weak-line RGs (sources with an EW of [O III] < 10 Å) stands out due to a low ratio between emission line and radio luminosities, as well as in the [O III]/[O II] line ratio. These early results have been confirmed by Buttiglione et al. (2010, 2011) from the study of 3C sources with z < 0.3 and by Capetti et al.
(2022) for sources up to z ∼ 0.8. The difference between LEGs and HEGs is ascribed to the intrinsic efficiency of the accretion onto the supermassive black hole (SMBH): HEGs are characterized by a high Eddington ratio (i.e., L_acc/L_Edd > 0.01) and LEGs by inefficient accretion (Buttiglione et al. 2010). Alongside this main classification, a further class, called broad line objects (BLOs), includes the sources in which the optical spectrum shows broad permitted lines. However, an AGN classification does not always arise from intrinsic differences. In fact, anisotropy effects (e.g., Antonucci 1993, Urry & Padovani 1995) can be fundamental in changing the spectral characteristics of an AGN. Hence, their diversities are often just due to the orientation with respect to our line of sight, rather than to a physical phenomenon. For example, the latter class mentioned (i.e., BLOs) has accretion properties and narrow spectral emission lines typical of HEGs, where the lack of broad emission lines is ascribed to the presence of a circumnuclear obscuring torus (Antonucci 1984). Hence, BLOs and HEGs can be considered as being part of the same class of objects, just seen at a different viewing angle.

The study of the environment of various radio-loud AGN is intimately linked to both the morphology and the characteristics of the SMBH activity (such as the accretion rate, radio luminosity, and jet power), and it is becoming fundamental in understanding whether or not there is a mutual impact between them (e.g., Gilmour et al. 2007, Ineson et al. 2015). Various studies have already been conducted: all previous works concur with the conclusion that RGs prefer to inhabit large-scale galaxy-rich environments (e.g., Best 2004, Tasse et al. 2008, Massaro et al. 2019a), and for this reason they are often associated with galaxy clusters (e.g., McNamara et al. 2005, Giacintucci & Venturi 2009), where they play a significant role in regulating the cooling flow of the intracluster medium (ICM) (e.g., Boehringer et al. 1993, Blanton et al. 2003, McNamara & Nulsen 2007, McNamara & Nulsen 2012). According to this scenario, RGs could be used as beacons of rich environments and ideal laboratories to test the cosmological scenario through the investigation of their formation and coevolution with the large-scale structure they inhabit (e.g., Bahcall 1972).

However, early results highlighted differences in the large-scale properties of FR Is with respect to FR IIs, which tend to be found in galaxy-rich and isolated environments, respectively (e.g., Zirbel 1997). A similar result was obtained by Croston et al. (2019) by cross-correlating optical catalogs of groups and clusters with RGs; they found a systematic correlation between the radio morphology of RGs and the richness of their environment, with a preferential pattern for FR I galaxies to inhabit richer environments than FR IIs. This would be in agreement with the scenario in which the morphological dichotomy depends on the density of the surrounding medium, where FR I jets are disrupted by the impact with a denser large-scale ambient medium (e.g., Jones & Owen 1979, Blanton 2000). Croston et al. also discovered a correlation between cluster richness and radio luminosity. However, their sample is mostly formed by relatively low-luminosity RGs. Ching et al.
(2017) found an analogous relation between galaxies' density and radio luminosity; they also showed that high-luminosity LEGs lie in a denser environment than HEGs and low-luminosity LEGs. A difference in the environment among the LEGs, based on their luminosity and morphology, was also found by Capetti et al. (2020): the compact (and low-luminosity) FR 0s are usually found in poor groups, while FR Is more often inhabit clusters of galaxies.

In contrast, Massaro et al. (2019a) found that radio sources in the local Universe, independently of both their morphological and optical classification, live in environments with a similar richness and characteristics, although their analysis included only a limited number of HEGs. Similarly, Vardoulaki et al. (2021) found, by studying sources in the COSMOS field, that different types of environments are covered independently of the radio classification.

In this work we investigate the environment of RGs listed in the Third Cambridge Catalog of radio sources (3CR) (Bennett 1962a,b) up to z = 0.3. These criteria predominantly select FR II RGs, as the low-luminosity FR Is are more common at low redshifts (i.e., z < 0.1), and include a larger number of HEGs with respect to the sample studied in Massaro et al. (2019b), since HEGs are exclusively FR II galaxies, as mentioned above. This 3CR subsample has radio luminosities spanning four orders of magnitude and an almost complete spectroscopic classification from Buttiglione et al. (2010), allowing us to perform the analysis for the various classes. In order to make an environmental comparison between different subclasses, it is important to consider both a morphological and a spectroscopic classification. The 3CR sample has all of this information available; moreover, the sky area covered by the 3CR sample is wide enough to include a large number of bright sources, contrary to other larger catalogs.

Our analysis will provide robust statistical results when the large-scale environments of both the optical spectroscopic classes and the morphological types are analyzed and compared. In particular, HEGs and LEGs can be used to investigate whether the level of activity of the SMBH and the large-scale properties are correlated, while FR Is and FR IIs can be employed to search for a correlation between the environment and the radio morphology and luminosity. Furthermore, we can also test the prediction of the unified model, which postulates that no differences should be found between the environments of HEGs and BLOs. Finally, as several of the works described above were also conducted with a subsample of 3C sources, we can compare our conclusions to the ones found in previous analyses.

We characterized the environment of RGs by studying their color-magnitude diagrams (CMDs) and counting the number of galaxies located in the so-called red sequence (RS). The RS is a tight relation between the color and magnitude of galaxies belonging to a group or cluster of galaxies. It was discovered thanks to observations of elliptical galaxies (e.g., Visvanathan & Sandage 1977). RSs are observed in galaxy clusters up to z ∼ 2 (e.g., Gobat et al. 2011) and can be exploited in the search for large-scale structures with photometric data. The RS is described by the following three parameters: a zero point, a slope, and a photometric dispersion. These parameters are different depending on the redshift considered, since the properties of the RS vary with the local galaxy density (Valentinuzzi et al.
2011). In particular, the RS dispersion is larger in lower density regions (i.e., in galaxy groups with respect to clusters). At lower redshift, galaxy groups are more abundant than clusters, leading to an increased RS dispersion with respect to more distant regions. However, as shown by Mei et al. (2009), the rest-frame zero point of the RS shows no significant evolution out to redshift z ∼ 1, indicating that the RS was already in place ∼6 billion years ago.

This paper is organized as follows: in Sect. 2 we describe the sample of selected sources and the available optical observations with the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS; e.g., Chambers et al. 2016) and the Sloan Digital Sky Survey (SDSS; e.g., Albareti et al. 2017). In Sect. 3 we discuss the method we applied to obtain the richness of each RG environment. The results emerging from the comparison of optical and morphological classes are reported and discussed in Sect. 4. A summary is found in Sect. 5, where we also draw our conclusions. We adopted the following set of cosmological parameters: H0 = 69.7 km s⁻¹ Mpc⁻¹ and Ωm = 0.286 (Bennett et al. 2014).

The 3CR sample and adopted data

We initially considered the 3CR catalog of Bennett (1962a,b). The original sample comprises 328 radio sources, and Spinrad et al. (1985) were able to identify the optical counterpart and redshift of 298 objects among them. Starting from these 298 radio sources, we examined the subsample formed by the 104 RGs at redshift 0.02 < z < 0.3 for which we have the spectroscopic classification from Buttiglione et al. (2009). Their main parameters are listed in Table 1. Nine and five objects (highlighted in red in Table 1) lack the optical or the morphological classification, respectively, and they are not included in the analysis. Furthermore, poorly represented groups of sources (the single star-forming galaxy 3CR 198 and the three extreme low excitation galaxies 3CR 314.1, 3CR 348, and 3CR 028) are not used in our statistical study. Finally, four sources (in blue in Table 1) are characterized by either a combination of high Galactic absorption (A_V > 2 magnitudes) and relatively high redshift, or just a very high absorption (A_V > 5 magnitudes). As a result, our analysis could not be performed at the requested level of absolute magnitude (see Sect. 3 for more details), and these sources have also been excluded.

The analysis was performed separately in two redshift ranges: the morphological types (i.e., FR I and FR II) were compared to each other in the 0.02 < z < 0.1 redshift bin, while the optical spectroscopic classes (i.e., HEG, LEG, and BLO) were compared in the 0.05 < z < 0.3 range. This is important not only to provide us with a complete uniform sample of sources, but also to allow the same method to be applied to the data consistently.

The final samples include 23 LEGs, 29 HEGs and 16 BLO RGs, and 14 FR Is and 24 FR IIs. In these two subsamples, a total of 14 and six sources, respectively, have been excluded for the reasons stated above.

Method

We used the optical photometric catalog of Pan-STARRS, with its median point spread function (PSF) of 0.94", which produced images of the whole sky north of −30° declination in five broadband filters (g, r, i, z, and y) over multiple epochs. This allowed us to produce stacked images with an average coverage of ∼8.9 exposures per pixel, reaching apparent magnitude limits of 23.3, 23.2, and 23.1 in the g, r, and i bands, respectively.
We searched for the photometric data of all sources inside a circular region with a radius of r = 500 kpc centered on each 3CR object. In addition, four background regions of the same size were chosen at a projected distance of 5 Mpc from each source in the north, south, east, and west directions. The final backgrounds, used throughout the analysis, have been defined for each RG environment as the average of the four background fields taken into account.

As mentioned in the Introduction, galaxies belonging to the same galaxy group or cluster align along an oblique stripe in the CMD, defining a red sequence. The position of the stripe, its thickness, and its inclination depend on the group or cluster redshift and mass density. Our work is based on defining the richness of each 3CR RG environment by measuring the excess of sources included in the RS with respect to the background. We adopted the RS relations found by O'Mill et al. (2019), which are defined in the g - r color calculated with the SDSS filters in the following three redshift bins: z ≤ 0.065, 0.065 < z < 0.1, and 0.1 < z < 0.2. To apply the results of O'Mill et al. (2019), based on the SDSS filters, to the Pan-STARRS photometric system, we calculated the photometric correction to the RS parameters using the coefficients reported in Tonry et al. (2012). To extend the study to galaxies up to z = 0.3, outside the range covered by the O'Mill et al. analysis, we extrapolated the RS parameters, calculating the rest-frame magnitudes and colors by applying the K correction from Chilingarian et al. (2010) to our photometric data.

3.1. Comparison of HEG, LEG, and BLO at 0.05 < z < 0.3

For each source in the redshift range 0.05 < z < 0.3, we produced a CMD of the optical sources within a radius of 500 kpc, adopting the aperture-magnitude color index g - r. Kron photometry was used for an appropriate estimate of the extended sources' magnitude in the r band. Moreover, we corrected the colors for Galactic reddening using the Fitzpatrick (1999) extinction law.

Fig. 1 shows two examples of the CMD. The filter-corrected RS relations extrapolated from O'Mill et al. (2019) are drawn in red in each CMD. In the left panel, we report the case of 3CR 089, where a RS is clearly seen. To further highlight the RS in the diagrams, we also considered (when available, see Table 1) the spectroscopic SDSS data. These data can be exploited to search for companions (i.e., objects at the same redshift), which are marked in the respective CMD. The position of these spectroscopic companions within the RS generally confirms the validity of the relations of O'Mill et al. (2019). Only one of the companions lies below the lower limit of the RS dispersion, at g - r ∼ 0.25; this is likely a star-forming galaxy. Star-forming galaxies have been revealed by previous studies, which highlighted the presence of a blue-cloud region in the CMDs (e.g., Eales et al. 2018). In this case, the g - r color index of this particular source in the field of 3CR 089 corresponds to the typical values of late-type galaxies found in the blue clouds of Eales et al. (2018).

The right panel instead shows the CMD for 3CR 063, where the RS is not readily visible and no SDSS data are available.
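To make the richness measurement just described concrete, the following is a minimal sketch (not the authors' actual pipeline; the array names, the dict-based field format, and the linear RS-band test are illustrative assumptions, with the real coefficients coming from O'Mill et al. 2019):

```python
import numpy as np

def in_red_sequence(r_mag, gr_color, zero_point, slope, dispersion, r_ref=0.0):
    """Boolean mask of galaxies falling inside the RS band of the CMD.
    The linear form and the reference magnitude r_ref are assumptions of
    this sketch; the actual coefficients come from O'Mill et al. (2019)."""
    rs_color = zero_point + slope * (np.asarray(r_mag) - r_ref)
    return np.abs(np.asarray(gr_color) - rs_color) <= dispersion

def richness(src, backgrounds, rs_params, m_r_limit=-17.0):
    """Background-subtracted RS count N_RS with its uncertainty.

    src         : dict with 'r', 'gr', 'M_r' arrays for the 500 kpc field
    backgrounds : list of four dicts with the same keys (5 Mpc offsets)
    rs_params   : (zero_point, slope, dispersion)
    """
    def count(field):
        mask = in_red_sequence(field['r'], field['gr'], *rs_params)
        return np.sum(mask & (np.asarray(field['M_r']) < m_r_limit))

    n_src = count(src)
    n_bkg = np.array([count(b) for b in backgrounds], dtype=float)
    n_rs = n_src - n_bkg.mean()
    # Poissonian noise on the source field added in quadrature to the
    # standard deviation of the four background fields (see Sect. 3.1).
    sigma = np.sqrt(n_src + n_bkg.std(ddof=1) ** 2)
    return n_rs, sigma
```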
Fig. 2 shows the distribution of the relative magnitude of the sources within the RS for 3CR 098 (black histogram) compared to the averaged value in the four background fields (red histogram). In the lower panel, the background was subtracted to estimate the excess of sources, for which we report the K-corrected absolute magnitude in the r band.

We estimated the excess of sources for each 3CR RG, a parameter directly related to the local galaxy density. In order to properly compare the results of sources at different redshifts, we adopted a fixed cut in absolute magnitude. Moreover, the comparison between sources at different redshifts can only be done if we consider the same RS area from which the number of sources is extracted. Hence, the same RS dispersion was adopted.

The results reported in Table 1 refer to a RS dispersion equal to 0.312 (appropriate for the lowest redshift sources) and to a threshold in K-corrected absolute magnitude of M_r = -17. The selected galaxies are ∼3.5 magnitudes fainter than the characteristic luminosity L* of the luminosity function of local early-type galaxies (see, e.g., Bell et al. 2003). The uncertainties on N_RS were calculated by adding, in quadrature, the error on the background fields to the Poissonian noise on the source field. In particular, for each source the background error has been defined as the standard deviation of the four background fields considered for that source. The Poissonian statistic alone on the background would indeed underestimate the error, because it does not take any other effects into account, such as cosmological variations or the possibility of including another group or cluster of galaxies by chance in one of our background fields.

Comparison of FR Is and FR IIs: A different approach at 0.02 < z < 0.1

For the comparison of the environment of the FR Is and FR IIs, we extended the analysis to a lower redshift threshold in order to include a sufficient number of FR I sources. We then considered the 0.02 < z < 0.1 redshift range. However, this has a strong impact on the CMDs, because the number of sources included increases dramatically, as shown in Fig. 3, where the CMD of the RG 3CR 338 (at z = 0.0303) is presented. This is due to the fact that the area covered by the 500 kpc radius becomes as large as 20' for a source at z = 0.02. As a result, the number of sources in the background fields, which effectively sets the uncertainty on the measurement of N_RS, also increases, compromising our ability to estimate the local galaxy density. We then adopted a different approach than the one used to compare sources at a higher redshift, in the attempt to reduce the number of spurious sources.

In fact, at these low redshifts, it is possible to separate extended sources (i.e., galaxies) from point-like sources by comparing the PSF (r_PSF) and the Kron (r_K) magnitudes provided by the Pan-STARRS catalog. The difference in magnitude for stars is r_PSF - r_K ∼ 0; galaxies, on the other hand, have r_PSF > r_K.
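A minimal sketch of this PSF-Kron separation, assuming Pan-STARRS-style magnitude arrays (the 0.3 mag threshold and the r_PSF ≲ 19 validity limit anticipate the values adopted in the next paragraph; the function name is hypothetical):

```python
import numpy as np

def extended_source_mask(r_psf, r_kron, threshold=0.3, r_psf_limit=19.0):
    """Select extended sources (galaxies) from PSF and Kron photometry.

    Stars have r_psf - r_kron ~ 0, while PSF photometry under-estimates
    the flux of extended galaxies (r_psf > r_kron). The separation is
    only reliable for sources brighter than r_psf ~ 19 (see Fig. 4)."""
    r_psf = np.asarray(r_psf)
    r_kron = np.asarray(r_kron)
    return ((r_psf - r_kron) > threshold) & (r_psf < r_psf_limit)
```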
Fig. 4 shows the distribution of the difference between these two magnitudes, that is, (r_PSF - r_K) versus r_PSF. A branch with r_PSF - r_K ∼ 0 is readily visible, which is formed by the point-like sources. A large number of sources instead show r_PSF > r_K, and these can be readily separated from the stars down to r_PSF ≲ 19. The best separation between the two populations was obtained by adopting r_PSF - r_K > 0.3. To estimate the excess of sources, we used the same limit in absolute magnitude adopted for the RS, that is, M_r < -17, but considering only extended sources. Fig. 5 shows the impact of selecting only the extended sources: the number of sources is drastically reduced and they form a well-defined RS. Similarly, the number of sources in the background fields, selected with the same method, decreases, thus reducing the uncertainty on N_RS. Some low redshift sources (in the range 0.02 < z < 0.05) presented a high uncertainty on N_RS associated with a significant fluctuation of the background values. We noticed that this effect was systematically attributable to sources at low Galactic latitudes, owing to the large gradient in the density of Galactic sources. For these RGs we considered four background regions all located at the same Galactic latitude.

Results

For all RGs we estimated the total number of sources in the RS and subtracted the respective average of the background fields. We estimated the uncertainty on the source count excess as described in Sect. 3.1.

Results for optical spectroscopic classes

We started by considering the results for the sources with 0.05 < z < 0.3, comparing the different optical spectroscopic classes. The results are reported in Table 3 and are visualized in the histograms of Fig. 6, where we plot their richness (i.e., how many sources belong to their RS, N_RS) distribution.

Both histograms reveal a variety of environments inhabited by these RGs, from the poorest, with a small excess of objects found in the RS area, to the richest, with more than a hundred sources (i.e., rich galaxy clusters). However, both LEGs and HEGs avoid isolated fields and prefer to be in small groups. We conclude that RGs can be found both in large-scale galaxy-rich environments (i.e., galaxy clusters) and in poorer fields, with no preferential patterns.

We then compared the distributions of N_RS of the various classes with a Kolmogorov-Smirnov test. In the case of BLOs and HEGs, the two classes are not statistically different (see Table 3), as expected from the UM, although this conclusion is limited by the relatively small number of BLOs. The same result also applies to the comparison of the HEG/BLO and LEG types. Furthermore, the values of the median of N_RS for the various classes are consistent within errors.

We also tested the suggestion of the presence of a correlation between radio luminosity and N_RS. A Spearman rank test indicates that such a connection is not present for our sample, as we obtained a probability of correlation <0.01.
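A sketch of how such a rank test can be run (the input arrays are purely hypothetical toy numbers, not the sample's measurements; scipy.stats.spearmanr returns the rank-correlation coefficient and the p-value of the no-correlation null hypothesis):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical inputs: log radio luminosities and RS richnesses.
log_l_radio = np.array([32.1, 33.4, 34.0, 32.8, 35.1, 33.9])
n_rs = np.array([12.0, 45.0, 8.0, 30.0, 15.0, 22.0])

rho, p_value = spearmanr(log_l_radio, n_rs)
print(f"Spearman rho = {rho:.2f}, p-value = {p_value:.2f}")
```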
The results presented above are based on the assumption of a specific set of values for the parameters describing the RS, that is, the zero point and dispersion of the RS, and a limit on the absolute magnitude. We therefore tested whether these values affect our conclusions. We considered two different limits in absolute magnitude (M_r = -16 and M_r = -18) and varied the dispersion, adopting in turn the different values derived by O'Mill et al. (2019) in the various redshift ranges. Thus, we repeated the same analysis using the other two values for the dispersion and slope listed in Table 2. The environment richness depends on the RS parameters, as well as on the magnitude limit, but the final results are unchanged: the statistical tests on the distributions show no significant difference among the various classes. The adoption of the wider RS dispersion is preferable and gives a better estimate of the local galaxy density, owing to a general increase in the signal-to-noise ratio, calculated as N_RS/σ, where σ is the uncertainty associated with N_RS.

Finally, although our analysis aims at limiting as much as possible the effects of the different redshifts of the sources considered, we cannot exclude that some residual effect is still present. However, the redshift distributions of the three classes, shown in Fig. 7, are not statistically different according to the Kolmogorov-Smirnov test and the median values (see Table 4 for a summary of the statistical comparison).

Results for morphological classes

Here we analyze the results obtained from the comparison between FR Is and FR IIs, as visualized in the histograms of Fig. 8, where we report their richness distribution. As for the results obtained for the spectroscopic classes, both histograms reveal a variety of environments inhabited by FR Is and FR IIs. We conclude that RGs can be found both in large-scale galaxy-rich environments and in isolated or poor fields, independently of their radio morphological classification. Both the Kolmogorov-Smirnov test and the median values on the FR I and FR II distributions (Table 3) indicate that the two classes are not statistically different. This result is robust against changes in the RS parameters adopted.

Similarly to the results obtained for the spectroscopic classes, no correlation between the radio luminosity and N_RS is found.

Summary and conclusions

The aim of this project was to explore the large-scale environment of RGs. We selected a sample of 104 3CR RGs with 0.02 < z < 0.3, considering both their optical classification into LEG, HEG, and BLO types, and their morphological classification into FR Is and FR IIs.

From the Pan-STARRS catalog, we extracted the objects located in a circular area with a radius of r = 500 kpc centered on each source, and we built g - r versus r CMDs. Early-type galaxies belonging to a galaxy group or cluster are expected to lie along a RS in this diagram. The richness of each 3CR source has been defined as the difference between the number of objects falling into the RS area of the source and that in the background, defined as the average of four different regions of the same area at a 5 Mpc distance from the source.
However, for the sources located at the lowest redshifts (i.e., z < 0.05), the stellar contamination in the CMD becomes dominant and substantially increases the uncertainties on the local galaxy density. For these sources the analysis required a further selection, that is, we considered only the extended sources. We exploited the Kron and PSF magnitude estimates to identify extended objects (i.e., galaxies) in the fields before the calculation of the richness. This method cannot be applied at z ≳ 0.1, because the angular size of galaxies is not sufficient to provide a clean selection.

We obtained the following results:
- HEGs' and BLOs' richness distributions are not statistically different, as expected from the AGN unified model; that is, HEGs and BLOs are consistent - from the point of view of their environment - with being the same class of objects seen at different inclination angles.
- The distributions of local galaxy density of LEGs and of HEGs and BLOs (now considered as a single class, based on the previous result) are not statistically distinguishable. The same result applies to the comparison of FR Is and FR IIs.
- The distributions of N_RS for HEGs and BLOs, as well as for LEGs, and likewise for FR Is and FR IIs, span a wide range of richnesses. Hence, independently of their optical and morphological classification, RGs can be found in relatively poor environments (i.e., small galaxy groups) as well as in rich galaxy clusters. With just a few exceptions among the FR IIs, the distributions also highlight the preference of RGs to avoid being isolated sources.
- The radio luminosity is not connected with the source environment.

Our main result, that is to say the lack of a connection between the environment and the RGs' properties (both morphological and spectroscopic), is in agreement with the findings of Massaro et al. (2019b) and Vardoulaki et al. (2021). However, other works on RG environments have shown different results. Massaro et al. (2019b) discuss, in detail, possible reasons for this discrepancy (and others). We followed a similar approach based on the results of our analysis.

The Σ_5 parameter, defined as the ratio between the number of sources and the projected area enclosed between the central galaxy and its fifth nearest neighbor, is widely used to estimate the galaxy density, and Ching et al. (2017) found that HEGs are associated with lower values of Σ_5 than LEGs. However, Σ_5 is a redshift-dependent quantity, because faint galaxies at larger distances are undetected. The comparison of the properties of sources at different redshifts may then be inaccurate. Our method, based on redshift-independent quantities such as the galaxies' absolute magnitude, is not affected by this problem. Furthermore, although Ching et al. (2017) started from a large sample of ∼400 sources, their conclusion is based on a rather small group of high-power HEGs. Gendre et al. (2013) concluded that HEGs are found almost exclusively in low-density environments while LEGs occupy a wider range of densities, apparently in contrast with our conclusions. However, when the comparison between LEGs and HEGs is limited to those with a FR II morphology, they found that the richness of the two classes is indistinguishable, which is in agreement with our findings.
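To make the Σ_5 definition above concrete, a minimal sketch under the usual convention that the area is the circle reaching the fifth nearest neighbor (the projected coordinates, e.g., in Mpc, are a hypothetical input):

```python
import numpy as np

def sigma_5(center_xy, neighbors_xy):
    """Projected density Sigma_5 = 5 / (pi * d5^2), where d5 is the
    projected distance (e.g., in Mpc) to the fifth nearest neighbor.
    Requires at least five neighbors."""
    offsets = np.asarray(neighbors_xy) - np.asarray(center_xy)
    d = np.hypot(offsets[:, 0], offsets[:, 1])
    d5 = np.sort(d)[4]          # fifth nearest neighbor
    return 5.0 / (np.pi * d5**2)
```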
In conclusion, we considered sources over a wide range of redshifts. Our analysis limits as much as possible the effects of the different distances of the sources; furthermore, the redshift distributions of the various subclasses are not statistically different from each other. This restricts the biases introduced by the analysis of sources at different redshifts.

However, our work considers only a relatively small group of sources, and a larger sample should be used to put our results on a stronger statistical basis. Furthermore, we referred only to the local galaxy density to characterize the RGs' environment. Other factors can be important, such as the location of a given source within its group or cluster of galaxies. This effect cannot be studied with our method due to the large contribution of interlopers in the CMDs, which prevents, for example, estimating the location of the group or cluster center and/or the presence of close companions. This would require complete spectroscopic coverage of the nearby objects.

Alternatively, a more comprehensive picture of the environment could be obtained by analyzing the gaseous component in the surroundings of RGs. In particular, X-rays probe the hot gas in the halo hosting the RGs. Properties of this large-scale gas, such as density and temperature, in correlation with the properties of the RG (i.e., position in the cluster or group, morphology, and accretion rate), can lead to a more comprehensive description of the interconnection between the AGN activity and the large-scale structure. The forthcoming all-sky survey performed by the eROSITA instrument on board the Spectrum-Roentgen-Gamma mission (Predehl et al. 2021) is ideally suited to perform such a study.

In the near future, the Rubin Legacy Survey of Space and Time (Rubin-LSST; Ivezić et al. 2019) will monitor the southern sky for ten years in six filters with unprecedented depth. This will allow us to extend the study of the environmental properties of RGs to high redshifts. According to Kotyla et al. (2016), half of the RGs at z > 1 lie in rich environments, but this is based on a small sample of 21 RGs from the 3CR. Using their results and an elliptical galaxy template (Polletta et al. 2007), we infer that Rubin-LSST will have to reach a co-added image depth of about 24 mag in the z band to make the analysis of the clustering properties possible up to z ∼ 0.5. This will be obtained in the first years of the survey. A depth of ∼26.5 mag is instead needed to go up to z ∼ 2, which is approximately the depth that Rubin-LSST will reach in ten years. Therefore, as the project progresses, we will be able to push the study of the RG environment from the local Universe to cosmological distances.

Fig. 1. Left panel: CMD of the source 3CR 089. The blue dot represents the host of 3C 089; the red dots are the spectroscopic companions identified through SDSS data; the red lines represent the boundaries of the RS from O'Mill et al. (2019); and the green dots are the sources falling inside the RS with absolute magnitude M_r < -17. Right panel: CMD of the source 3CR 063. The RS in this source is less clearly defined.

Fig. 2. Upper panel: Histogram showing the number of sources in the various magnitude bins falling into the RS (black line) and the number of sources of the corresponding background RS (red line). Lower panel: Histogram of the number of sources in the RS after background subtraction. The vertical blue dotted line marks the magnitude limit adopted.
Fig. 3. CMD of the source 3C 338. The blue dot represents the RG and the green ones are all the sources falling into the RS relation. The red dots are the spectroscopic members identified with SDSS.

Fig. 4. Comparison of the PSF and Kron magnitudes. Their difference is approximately zero for point sources (i.e., stars), while the fluxes of the extended galaxies are under-estimated by the PSF photometry. This creates a sharp separation between them. The two solid lines mark the boundaries of the regions where extended galaxies are located.

Fig. 5. CMD of the source 3C 338, but with the point-like sources removed. The remaining sources are all extended and form a well-defined RS.

Fig. 6. Histograms showing the distribution of the number of RS objects for the HEGs and BLOs (left panel) and LEGs (right panel). The insets show an enlargement of the same distributions with a smaller bin size. The magnitude cut used is M_r = -17, while the RS dispersion is 0.312.

Fig. 7. Redshift distributions of LEGs (upper panel), HEGs (middle panel), and BLOs (lower panel) in the redshift range 0.05 < z < 0.3. The statistical tests show that the three classes of RGs are not statistically different.

Fig. 8. Histograms showing the distribution of N_RS for the FR Is (left panel) and FR IIs (right panel). The insets show an enlargement of the same distributions with a smaller bin size. The magnitude cut used is M_r = -17, while the RS dispersion is 0.312.

Table 1. 3C sources considered, ordered by increasing redshift. The number of sources in the RS (N_RS) for RGs at z > 0.05 is the one found with the first method, without selecting just the extended sources. The quoted numbers of RS sources are background subtracted. Values outside the parentheses refer to the estimates of N_RS obtained by selecting only extended optical sources for the sources with 0.05 < z < 0.1.

Table 2. RS parameters from O'Mill et al. (2019), listed for increasing redshift bins. The last row, in the z = 0.2-0.3 bin, reports the zero-point value calculated by our RS extrapolation and the other parameters adopted for the main analysis.

Table 3. Results of the statistical tests on the N_RS distributions for LEGs, HEGs, and BLOs at 0.05 < z < 0.3 and for FR Is and FR IIs at 0.02 < z < 0.1.

Table 4. Results of the statistical tests on the redshift distributions for LEGs, HEGs, and BLOs.
2024-02-22T06:45:08.420Z
2024-02-21T00:00:00.000
{ "year": 2024, "sha1": "a9d705593fc3817691446780cfca61ccf663d467", "oa_license": "CCBY", "oa_url": "https://www.aanda.org/articles/aa/pdf/2024/04/aa47525-23.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "a9d705593fc3817691446780cfca61ccf663d467", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
52032795
pes2o/s2orc
v3-fos-license
Hyperbaric oxygen therapy to improve cognitive dysfunction and encephalatrophy induced by N2O for recreational use: a case report

N2O, or laughing gas, is generally used for anesthesia, especially in stomatology and pediatrics, but is also commonly used recreationally. Cognitive dysfunction induced by the recreational use of N2O is rare. Here, we present the case of an 18-year-old female with a history of having used N2O recreationally for 5 months who suffered from encephalatrophy and severe cognitive dysfunction. All of the symptoms gradually subsided with ~20 days of treatment by hyperbaric oxygenation. We hypothesize that the long-term use of N2O may have induced a chronic state of systemic hypoxia that further induced cerebral atrophy with impaired cognitive function. Hyperbaric oxygen therapy (HBOT) is reported here for the first time as an important therapeutic element for treating N2O toxicity due to recreational use.

Introduction

Nitrous oxide (N2O) is a colorless, nonflammable, inorganic volatile gas with psychedelic effects that is commonly referred to as laughing gas.1 It is widely used for anesthesia and as an analgesic; it is also commonly used recreationally. The effects of N2O were first reported in 1799 as consisting of a brief but vivid intoxication, accompanied by a powerful euphoria that may distort sensation as well as temporal and spatial perceptions.2 During the 19th century, N2O was a popular recreational drug used in theater halls to relieve pain during performances. The recreational use of N2O re-emerged during the 1960s, and it is now widely used and available via a variety of different manners of administration, including inhalation via canisters, balloons, respirators, and airtight bags.2 The side effects of N2O include transient dizziness, dissociation, disorientation, loss of balance, impaired memory and cognition, and weakness in the legs.3 It was previously reported that N2O might induce cognitive impairment when used as an anesthetic.4,5 We present a case report in which a patient presented with encephalatrophy and cognitive dysfunction caused by the recreational use of N2O. Encephalatrophy with impaired cognitive function caused by recreational N2O use has not been reported previously. This case report is the first to report encephalatrophy accompanied by altered cognitive functioning, in addition to peripheral neuropathy, following intense N2O abuse. The benefits of hyperbaric oxygen therapy (HBOT) are extensive, and chief among them is the capacity to improve cognitive functioning in delayed encephalopathy after acute carbon monoxide poisoning.6 Additionally, we tried to use HBOT to relieve the symptoms of N2O toxicity caused by recreational use.

Case report

An 18-year-old female who had studied abroad in Australia and recently returned presented at our inpatient department with numbness and weakness in all four limbs, disturbance of orientation, and memory impairment for 5 days. She also presented with abnormal sensation in the lower limbs, difficulty walking, trouble speaking, and irritation. She had been found lying on the ground in her house in Sydney and was unable to identify her brother. Thousands of steel bulbs3 (each of which contained 10 mL of pressurized N2O) were also found in the house. She admitted that she had used N2O bulbs recreationally for 5 months. She used at least 50 bulbs during the past 5 months, one bulb every other day.
During the last 4 days, she had used them more frequently than before, but she could not remember the exact number of bulbs she used. Vital signs (blood pressure 112/62 mmHg) were normal, and the physical examination was notable for a weakly positive Babinski sign, enhanced sensation in all four limbs, and ataxia. The upper limbs exhibited grade 4 muscle strength, while the lower limbs exhibited grade 3 muscle strength. The patient's past medical and psychiatric history was unremarkable, and there was no family history of psychiatric disorders.

A full blood examination showed hemoglobin (Hb) 112 g/L, platelet count 174×10^9/L, white cell count 7.95×10^9/L, and mean corpuscular volume (MCV) 93.2 fL. Vitamin B12 was 1,500 pmol/L. The results of a urine toxicology screen were all negative, including methylamphetamine, heroin, morphine, ketamine, and methylenedioxyphenethylamine. Blood gas (arterial blood) analysis showed partial pressure of oxygen in the alveoli (PAO2) 97.5 mmHg, partial pressure of oxygen in the artery (PaO2), 4.43 mmol/L, and chloride (Cl) 104.0 mmol/L. Due to concerns regarding potential spinal cord compromise and cerebral disease, enhanced spinal and cerebral magnetic resonance imaging (MRI) was performed. Figure 1A shows gyral atrophy and broadened sulci compared to the normal state on a T1-weighted image.

Table 1 (before treatment) shows the results of motor nerve conduction velocity (MCV) testing. The motor conduction amplitudes of the bilateral tibial, peroneal, median, and ulnar nerves were decreased, and the motor conduction velocities of the bilateral median and ulnar nerves were similarly slowed. Table 2 (before treatment) shows the results of sensory nerve conduction velocity (SCV) testing. The sensory conduction amplitudes of the bilateral peroneal nerves were decreased, and the sensory conduction velocities of the bilateral median, ulnar, and peroneal nerves were slowed. The bilateral anterior tibial muscles showed a large amount of spontaneous potentials (positive sharp waves and fibrillation waves), and the right abductor hallucis brevis showed multiple spontaneous potentials (positive sharp waves and fibrillation waves), small contractions, and giant motor unit action potentials (MUAPs) at rest. The patient was unable to complete the Montreal Cognitive Assessment (MoCA) and the Mini-Mental State Examination (MMSE) due to her poor medical condition.

The patient did not use N2O recreationally again, and we prescribed vitamin B12 to improve the neurological symptoms and an atypical antipsychotic drug, quetiapine, to help control the irritation. The patient also received HBOT at a treatment pressure of 2 atm in an air-pressurized chamber and was given 100% oxygen for a period of 90-120 minutes at a time, a procedure that was repeated three times a session for 20 sessions. The dysfunction in sensory ataxia, numbness, and impaired cognitive functioning gradually improved with the hyperbaric oxygenation treatment. The MoCA and MMSE scores after treatment were in the normal range. A subsequent cranial MRI (Figure 1B) showed improvement of the atrophy.

Ethics statement

Written informed consent for the publication of her clinical details and clinical images was obtained from the patient. A copy of the consent form is available for review from the editor of this journal.
Discussion

The patient met the diagnostic criteria for N2O abuse-induced encephalatrophy and cognitive impairment on the basis of her history of N2O abuse and the clinical findings described above.

Notes (Tables 1 and 2): Before indicates that the patient had not yet been treated; after indicates that the patient had been treated. Abbreviations: ADM, abductor digiti minimi; AHB, abductor hallucis brevis; Amp, amplitude; APB, abductor pollicis brevis; Bel elb, below elbow; CV, conduction velocity; EDB, extensor digitorum brevis; F, F-wave; Lat, latency.

There have been several reported cases of the recreational use of N2O resulting in myelopathy and polyneuropathy.7-10 The neurological symptoms of these patients were commonly associated with vitamin B12 deficiency.11 In this case, the vitamin B12 level was high because the patient had received vitamin B12 treatment after she was found and sent to the Emergency Department of the local hospital in Sydney. However, there have been no previously reported cases in which cognitive impairment was induced by N2O, and the mechanism by which N2O induces encephalatrophy is not yet completely understood.

As an anesthetic, N2O can affect cognitive functioning after surgery by influencing brain activity,12 and the depth of anesthesia is also related to cognitive functioning.13 The dose of N2O that the patient used recreationally is much greater than that used in anesthesia, so it is not difficult to understand why the patient exhibited cognitive dysfunction. However, we cannot determine whether the cognitive dysfunction induced by the N2O was acute or chronic. One study demonstrated that the most important safety consideration in the use of N2O as an anesthetic is the prevention of hypoxia.14 A related study suggested that hypoxia may damage brain cells,15 and other studies also found that N2O increases brain injury after ischemia or hypoxia in surgery.16-20 In this case, the cerebral atrophy induced by N2O, and the associated cognitive impairment, may be the result of chronic hypoxia caused by the long-term recreational use of N2O.

Hyperbaric oxygen (HBO) provides 100% oxygen under high pressure, which significantly increases oxygen delivery to the mitochondria at the cellular level, reduces intracranial pressure, and has both anti-inflammatory and neuroplasticity effects in different types of brain injuries. According to Henry's law, HBO maximizes tissue oxygenation: by raising the external pressure, it increases the amount of oxygen carried in solution in plasma and tissue, to a level sufficient to support resting tissues without a contribution from Hb, and it induces rapid and significant vasoconstriction.21,22 The generation of oxygen-derived free radicals also increases as a result of HBO, destroying DNA and inhibiting bacterial metabolic functions.21 HBOT not only accelerates collateral circulation to protect neurons from ischemic death but also repairs damaged microvessels, thereby stimulating angiogenesis and neurogenesis.22,23 Additionally, HBOT can effectively counter ischemia and hypoxia, so hypoxic or ischemic diseases, or a series of diseases caused by hypoxia and ischemia, can be treated successfully. There have been a number of reports on the use of HBOT to treat hypoxic and ischemic diseases of various causes, such as brain injury, cerebral palsy, stroke, and others,24,25 and especially to improve cognitive dysfunction after brain injury.26 In particular, HBOT can ameliorate cognitive functioning in patients suffering from anoxic brain damage.27
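As a rough, back-of-the-envelope illustration of the Henry's-law argument above (standard textbook values, not measurements from this case; the solubility of O2 in plasma at 37 °C is taken as approximately 0.003 mL O2 per dL of blood per mmHg):

$$ C_{\mathrm{dissolved}} \;\approx\; 0.003\,\frac{\mathrm{mL\,O_2}}{\mathrm{dL \cdot mmHg}} \times P_aO_2 $$

Breathing air at 1 atm (PaO2 ≈ 100 mmHg), this yields only ≈0.3 mL O2/dL; breathing 100% oxygen at 2 atm (PaO2 on the order of 1,400 mmHg) yields ≈4.2 mL O2/dL, approaching the ≈5-6 mL O2/dL that resting tissues extract, which is why HBO can support tissue oxygenation largely independently of Hb.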
In this case, we speculate that the high dose of N2O induced hypoxia, and that the hypoxia in turn induced cerebral atrophy and cognitive impairment. We tried to use HBOT to relieve the patient's cerebral atrophy and cognitive impairment, and we did so successfully. After the HBOT treatment, the patient's cerebral atrophy and cognitive impairment improved, a strong confirmation of our initial hypothesis regarding the pathophysiology.

There are important limitations to consider regarding this case report; for example, we could not conclusively determine whether the patient's cognitive impairment and cerebral atrophy caused by the N2O were acute or chronic. Additionally, although we observed the effectiveness of HBOT for cognitive dysfunction caused by N2O, there were no previous reports of the use of HBOT to treat cognitive impairment and brain atrophy caused by N2O. As such, the use of HBOT as a standard therapy for ailments associated with laughing gas abuse still warrants further research.

Conclusion

This case report is the first to present encephalatrophy with severe cognitive impairment as a side effect of recreational abuse of N2O. Symptoms such as numbness and weakness in all four limbs, disturbance of orientation, memory impairment, abnormal sensation in the lower limbs, and difficulty walking and speaking were relieved by the HBOT treatment, providing an important clue regarding the mechanism behind N2O-induced encephalatrophy and the role of HBOT as a new treatment for this pathophysiology.
2018-08-21T22:42:14.737Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "edb14df77d4beb1a5c9d80d01ac5562d8b61fd2f", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=43496", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b50548db37580e14fdb0d957add41c02be829a1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54924426
pes2o/s2orc
v3-fos-license
The Impact of Soil Erosion as a Food Security and Rural Livelihoods Risk in South Africa

This study evaluates soil erosion/attrition as a major food security and rural livelihoods risk in South Africa, with the Upper and Lower Areas of Didimana, Eastern Cape Province, as a case study. The survey research method was adopted for the study. Farmers' and extension officers' behaviour relating to soil erosion control was negative even though the impact of erosion in the area was high. Approximately 75% of farmers indicated that they lose more than 21% of their crops yearly due to erosion, and 55% said their crops and livestock, as well as their household feeding, suffer due to the problem. The results of the multiple linear regression analysis indicate that farm yield and farmers' access to market are positively related to farmers' adoption tendencies regarding erosion control, implying that farmers are more willing to adopt recommendations if their yields and access to market increase. Similarly, the age of farmers is positively related to erosion impact, indicating that older people have a higher tendency to cause erosion in the study area. This is plausible, as the area consists mostly of older people, who are generally known to resist change and are thus slow to adopt. It is therefore expected that if farmers manage soil erosion appropriately, they will achieve higher yields. Moreover, pull factors such as improved rural infrastructure and adequate agricultural incentives for youths are suggested to attract more young people into farming in the study area.

Introduction

Although soil erosion/attrition (Note 1) is one of the main themes in environmental studies, an unresolved question is whether its relevance is accorded its due place in agriculture and related studies. This is of great concern because, of all human activities, the agriculture sector is affected the most by erosion (Note 2). It is considered the most conspicuous and widespread agent of soil/land degradation ever known (Lal, 2003; European Cooperation in Science and Technology [COST], 2008; Kumar & Ramachandra, 2003). It is estimated that one-sixth of global soils have already been degraded by water and wind erosion, resulting in a reduced ability of society to produce sufficient food (COST, 2008). The productive power of some lands worldwide has declined by half due to the effects of erosion and desertification (Eswaran, Lal, & Reich, 2001). Annually, 75 billion tons of soil are lost from farmlands, and 12 million hectares of cropping land, approximately 1% of the total area, become unfit for farming, which has led to the degradation of 38% of global cropland since World War II (Dahl, 2013). Soil loss also leads to several other farm challenges, and as these problems increase, there comes a point at which the farm is abandoned (COST, 2008; Anthoni, 2000). Such damage is also costly to remedy. One millimetre of soil, easily lost in one rain or wind storm, is so small that its loss goes unnoticed, yet this loss over a hectare of cropland can amount to 15 tons ha-1 (Pimentel, 2006). This often neglected development, as small as it may seem, when considered critically reveals a colossal amount of soil material (including soil nutrients) being washed away after every passing incidence of erosion. The inherent danger is that replenishing soil loss under agricultural conditions requires approximately 20 years (Pimentel, 2006).
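As a quick sanity check of the 15 tons ha-1 figure above (assuming a typical topsoil bulk density of about 1.5 t m-3, an assumption of this sketch rather than a value given in the source):

$$ 1\,\mathrm{mm} \times 10\,000\,\mathrm{m^2} = 10\,\mathrm{m^3\ per\ hectare}, \qquad 10\,\mathrm{m^3} \times 1.5\,\mathrm{t\,m^{-3}} = 15\,\mathrm{t\,ha^{-1}}. $$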
Soil attrition can be devastating to agricultural development and food security (Ighodaro, Lategan, & Yusuf, 2013). It leads to productivity or overall farm yield losses, especially because of the decreased fertility of the soil due to the loss of soil nutrients. It diminishes the quality of soil through the loss of water, soil organic matter, nutrients, biota, and soil depth, thus reducing the productivity of natural, agricultural, and forest ecosystems (Pimentel & Kounang, 1998). Further, eroded sediment contains a considerably higher measure of organic matter and nutrients than the topsoil from which it is derived (Young, 1989).

Annual soil loss in South Africa is estimated at 100-400 million tons, nearly three tons for each hectare of land (Kumar & Ramachandra, 2003; Hoffman et al., 1999). It has been estimated that it would take R1000 million worth of fertilizer to replace the soil nutrients carried out to sea by South African rivers each year (Kumar & Ramachandra, 2003). Over 70% of the land surface of South Africa has been affected by varying degrees and types of soil erosion (Le Roux, Newby, & Sumner, 2007). In fact, soil attrition is perceived to be the principal, or largest, environmental problem in South Africa (Muliban, 2001). Compounding the problem is the fact that soil formation rates in the country are thought to be about 30 times slower than rates of soil loss (Hoffman et al., 1999). Thus, to attain food security and improvements in livelihoods, especially in the rural areas of South Africa, soil erosion is without doubt one of the agricultural problems that needs to be addressed.

Background of the Study

The Eastern Cape Province, where this study was conducted, is rated as one of the three most degraded and poorest provinces in South Africa (Department of Environmental Affairs, Republic of South Africa [RSA], 2007; Bank, Minkley, & Kamman, 2010). A study such as this is therefore very relevant in the area, seeking, amongst other things, to demonstrate that soil erosion is a major food security and rural livelihoods risk in South Africa, using the Upper and Lower Areas of Didimana in the Eastern Cape as a case study. It thus aims at providing answers to the following objectives:
1) To assess farmers' adoption behaviour with respect to the use of erosion control methods in the study area;
2) To assess agricultural extension officers' behaviour with respect to the use of erosion control methods in the study area; and
3) To evaluate the impact of erosion on the food security and livelihoods of rural farmers in the study area.

An Overview of the Study Area

The study area consists of the Upper Didimana, Lower Didimana, and Romanslaagte villages, located in Ward three of the Tsolwana Local Municipality in the Chris Hani District Municipality of the Eastern Cape Province of South Africa. These villages are less than three kilometres apart. The Chris Hani District Municipality is characterised by approximately 56.6% of people living in poverty and a high unemployment rate (Chris Hani District Municipality, 2010/2011). Nevertheless, livestock farming is said to be an important source of income for people.

In terms of its geology, the area is made up of rolling and undulating hilly to very steep areas within the valleys, and consists mostly of Beaufort sediments intruded by dolerite; its altitude varies from 1280.2 m to 1463.0 m (District of Whittlesea, 1966).
The climatic conditions of the study area vary from an arid climate to a very cold highveld climate and fall largely into two climatic zones (Tsolwana Local Municipality, 2010/2011). Further, its mean annual precipitation is between 301 mm and 600 mm, and its average maximum and minimum temperatures are 22.3 ºC and 8.9 ºC, respectively. The study area is said to have some of the most erodible soils in the entire region of the Chris Hani District Municipality (Tsolwana Local Municipality, 2010/2011).

Research Methodology

Procedure

The study adopted a survey research methodology using self-administered questionnaires distributed to a total of 60 farmers through a one-on-one collection process. The random sampling technique was applied, and data were collected through a joint effort of the researcher and the services of five survey assistants, who speak the local dialect of the respondents and who were well trained to understand the objectives and intentions of the survey. The enumerators were trained to employ a specific protocol in order to establish rapport and encourage farmers to cooperate and give honest and unbiased answers (Mukarumbwa, 2009). Further, the enumerators, apart from helping to overcome the problem of farmers' conservatism and reluctance to discuss matters, helped to ensure consistency and reliability in the data collection process. Data collected were coded and analysed with the aid of the SPSS statistical package version 20, and the statistical techniques used include basic descriptive statistics (such as frequencies, percentages, and means) and the multiple linear regression model. The descriptive statistics are a first step, needed to determine the distribution of variables and to give a summary of large amounts of information (Annor-Frempong & Duvel, 2009). However, to test for relationships between variables (such as the effect of the independent, predictor, variables on the dependent, outcome, variables), higher statistical tests are needed, like the multiple linear regression model (Annor-Frempong & Duvel, 2009). The multiple linear regression models were adopted because all three dependent variables were linear in nature.

Conceptual Framework

The problem conceptualization can be defined as a mental or hypothetical construct which provides a scientific basis for purposeful and systematic probing into the causes of a problem, and it offers a frame of reference from which extension problems are investigated (Duvel, 1991). Table 1 is a mental construct which offers a concise understanding of the problem of this study and thus provides a basis for the questionnaire design and variables of analysis. It poses the following guiding questions.

Farmers' behaviour with respect to soil management/erosion control in study area:
How does farmers' behaviour impact on the quality of their product?
How does farmers' behaviour impact on their farming sustainability?
How does farmers' behaviour impact on their profitability?
How does farmers' behaviour impact on access to food in their area?
How does farmers' behaviour impact on the availability of food in their area?

Extension behaviour with respect to soil management/erosion control in study area:
How does extension behaviour impact on farm yield in the study area?
How does extension behaviour impact on product quality in the study area?
How does extension behaviour impact on farming sustainability in the study area?
How does extension behaviour impact on farming profitability in the study area?
How does extension behaviour impact on access to food in the study area?
How does extension behaviour impact on food availability in the study area?
Impact of soil erosion in study area:
How does soil erosion impact on farm yield in the study area?
How does soil erosion impact on product quality in the study area?
How does soil erosion impact on farming sustainability in the study area?
How does soil erosion impact on farming profitability in the study area?
How does soil erosion impact on access to food in the study area?
How does soil erosion impact on food availability in the study area?

The Multiple Linear Regression Modelling

The multiple linear regression model was adopted for the analysis of the data. The multiple regression model is defined as a statistical technique that allows the user to predict the value of a dependent variable based on the scores of several other variables, which are called the independent variables (Melusi, 2012). The multiple linear regression model usually takes the form:

y = α + β1x1 + β2x2 + ... + βnxn + Ɛ (1)

Where: y = farmers' food security and livelihoods in the study area after soil erosion impact; x = exogenous input data on food security and farmers' livelihoods (the independent variables); α = intercept of y; β = partial regression coefficient; α and β = parameters to be estimated; Ɛ = stochastic error term.

Incorporating the demographic characteristics of farmers into the model, plus the food security and rural livelihoods variables chosen for the study, the expected equation is as represented in Equation 2:

y = α + β1A + β2G + β3M + β4E + β5Y + β6PQ + β7S + β8P + β9Ac + β10Av + Ɛ (2)

Where: A = age of farmers; G = gender; M = marital status; E = education level; Y = yield; PQ = product quality; S = sustainability; P = profitability; Ac = accessibility; Av = availability.

Description of Variables Used for the Study

Three dependent variables were adopted for the study against ten independent variables, as indicated in Table 2. The first dependent variable was farmers' behaviour with respect to erosion control in the study area, which was measured in terms of how farmers utilized extension officers' advice or recommendations on erosion control. The second dependent variable was extension officers' behaviour with respect to erosion control, which was measured in terms of how often extension officers talked about the control of soil attrition during their visits. The third dependent variable was the impact of soil erosion, which was measured as the percentage of farmers' crops perceived to be lost annually due to erosion, as well as the impact of erosion on the farmers' livelihoods.

Results and Discussion

The demographic characteristics of farmers, which are part of the independent variables in the study, impact on the mediating variables (perceptions, needs, and knowledge of farmers) to create and determine farmers' behaviours and the eventual production efficiency or problems in an area (Duvel, 1991). These are very important because they help to indicate the behavioural patterns of respondents (Shaw & Constanzo, 1970).

According to the findings of the study highlighted in Table 3, the study area consists of older people, whose age groups range from 46-55 to 56-65 years, while the share of young people less than 35 years of age, which is the maximum age of youth in South Africa, is only 15% (Rural Urban Consultant, 2001). The average age of farmers is approximately 57 years, revealing the problem of an ageing phenomenon (Ighodaro, Lategan, & Yusuf, 2013, citing Ayinde, 2011). The indication of this is that farming decisions are left in the hands of older people, who are more conservative in thinking and tend to avoid risk, whereas risk-taking is one great factor needed for any business success (Bembridge, 1991). Similarly, the study area contains more males (53.3%) than females (46.7%), while 60% are married, the largest marital group in the area.
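Before turning to the remaining descriptive results, a minimal sketch of how the multiple linear regression of Equation 2 can be estimated by ordinary least squares (illustrative only: the synthetic data stand in for the 60 farmers' responses, and this is not the authors' SPSS workflow):

```python
import numpy as np

# Hypothetical stand-in for the survey data: 60 farmers, ten predictors
# ordered as in Equation (2): A, G, M, E, Y, PQ, S, P, Ac, Av.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = X @ np.arange(1, 11) + rng.normal(size=60)

# Ordinary least squares: prepend an intercept column and solve for
# [alpha, beta_1, ..., beta_10].
X_design = np.column_stack([np.ones(len(X)), X])
coeffs, _, _, _ = np.linalg.lstsq(X_design, y, rcond=None)
alpha, betas = coeffs[0], coeffs[1:]
print("intercept (alpha):", round(alpha, 3))
print("partial regression coefficients (betas):", np.round(betas, 3))
```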
As the findings reveal, the education level of farmers in the area is poor, as only 5% of the farmer population barely exceeded grade 12, which is not very different from the education levels of rural people elsewhere in the Eastern Cape. For example, a study done in the Sheshegu community, also in the Province, showed that only 6% barely exceeded grade 12 (Ighodaro, 2010). This of course does not bode well for sound decision making regarding farming in the study area, as poverty and inadequate education are said to be two main factors which lead to poor farming decisions (Pender & Hazell, 2000). The findings of this study show that 48.2% of the farmers have been in farming for less than ten years. This does not reflect much experience in the farming business, as suggested by Barkai and Levhari (1973) and Pender and Hazell (2000). Moreover, any learning that is associated with the accumulation of experience contributes to production (Pender & Hazell, 2000).

Source: Survey research (2012).

Farmers' Use of Extension Officers' Advice and Recommendations

The farmers' behaviours relating to decisions taken with respect to the use of extension advice or recommended soil management practices concerning erosion control in the study area were found to be negative, as indicated in Table 4. The study shows that 41.7% of the farmers do not use extension advice, and a total of 36.7% use it inappropriately. Similarly, 58.3% of the farmers had a problem with using the recommended practices on soil management in their area. In fact, 25% said they never use them at all, 13.3% said they rarely use them, while 20% said they sometimes use them. The implication of these findings is that most farmers use neither extension advice nor recommended soil management practices regarding erosion control in their area. Of those who do use them, some still use them inappropriately. The problems normally encountered in agricultural settings are ultimately related to issues of non-adoption or inappropriate use of particular recommended practices (Duvel, 1991).

Source: Survey research (2012).

The above findings can be interpreted as inappropriate behaviour of farmers regarding the control of the erosion problem in the study area, and an indication that soil erosion is not accorded its proper place of relevance. Otherwise, appropriate recommended practices or advice from experts would have been given top priority.

Extension Officers' Perception of Soil Erosion

The behaviour of extension officers regarding the soil attrition problem in the study area was measured in relation to their perception of, and efforts towards, the control of soil attrition. This is because, if extension officers' perception accords with farmers' perception of the level of the problem in the study area, it will form part of their major themes during their visits. The idea is that the more extension officers talk about controlling the problem, the less the problem would be in the study area, and vice versa.
However, as Table 5 indicates, over half of the farmer population (58.3%) said extension agents never talk about soil erosion and its control during their visits, which reflects poor behaviour with respect to the problem. As glaring as the problem is in the study area, if soil erosion were given its proper place of relevance, it ought to form part of the major extension themes in the area. Notwithstanding the above, agricultural extension is the most important source of information for farmers in most African countries (Oladele & Tekena, 2010). Further, agricultural extension plays a significant role in influencing farmers' adoption behaviours (Oladele & Tekena, 2010).

Source: Survey research (2012).

Impact of Soil Erosion on Food Security and Rural Livelihoods

The level of crop loss in an area is a reflection of food security or insecurity, especially for rural areas. Nutrient-deficient soils, which are one main consequence of erosion, produce 15 to 30% lower crop yields than un-eroded soils (Pimentel, 2006). According to the findings of this study outlined in Table 6, 75% of the farmer population confirmed that they lose well above 21% of their crops every year due to erosion. This is a strong reflection of food insecurity in the study area, because the level of crop losses is positively related to the food insecurity of an area. In support of this, Odendo, Obare, and Salasya (2010) emphasize that soil fertility degradation on smallholder farms (which is mostly caused by erosion) is reported as the primary biophysical root cause of food insecurity and poverty in sub-Saharan Africa, where most people live in rural areas and obtain their livelihoods from farming.

Source: Survey research (2012).

According to farmers, the impact of erosion on farmers' crops and livestock (40%) and on farmers' household feeding (15%) made these the livelihood factors most affected by soil erosion in the study area (Table 7). This is highly remarkable because the census data of 2002 indicate that the main land use of people in the Eastern Cape is sheep farming, even though other livestock are also reared (Statistics South Africa, 2002). The implication is that the main source of livelihood is severely threatened.

Source: Survey research (2012).

Farmers' Behaviour Regarding the Control of Soil Erosion

The yield of farmers was found to be positively related to farmers' adoption behaviours in the study area, statistically significant (p = 0.021) at the 0.05 (5%) level, which is remarkable, as indicated by the results in Table 8. It agrees with the findings of the descriptive statistics, as the yield of farmers in the area was perceived to be low, hence also the low adoption rate of recommended practices. Yield is expected to be positively related to farmers' adoption behaviours because high yield is an indication of increased profit. It is expected that farmers will, in the adoption decision-making process, compare the advantages and appropriateness of different soil conservation technologies based on the available resources at their disposal and their opportunity for profit (Tiwari et al., 2008).
Similarly, the food accessibility of farmers was found to be positively related to farmers' adoption behaviour, statistically significant (p = 0.027) at the 0.05 (5%) level (Table 8). This also is remarkable because, as expected, the more accessible food is in the study area, the more economically and socially empowered farmers become. Therefore, they would adopt more soil management technologies to access more of the gains of improved production and social status. This is because hunger is one of the socioeconomic factors which can lead to poverty and disease, and ultimately to food insecurity, as well as to the slow economic growth of society. In fact, the Parliamentary Office of Science and Technology (2006) maintains that "hunger, poverty and disease are interlinked, with each contributing to the occurrence of the other two". The selling of crops, which is one measure of accessibility in the area, was found to be difficult, and this contributed towards the low adoption rate of recommended practices.

Agricultural Extension Officers' Behaviour and Soil Erosion Control

According to the results (Table 8), extension behaviour with respect to erosion control was found to be negatively related to farm yield and profitability, statistically significant at the 5% level (p = 0.043) and the 1% level (p = 0.003), respectively. These results are unexpected and disagree with earlier literature findings on the relationship between the farm yield and profitability of farmers and effective extension services. As expected, the more farmers achieve higher yields and profitability, the more they accept extension messages, and vice versa. For example, rational choice theory propounds that every action is essentially 'rational' in nature and that individuals calculate the likely costs and benefits of any action before deciding on it (Scott, 2000). Further, the behaviour of farmers is motivated by the possibility of making gains (Barungi & Maonga, 2011). Similarly, Roberts, English, and Larson (2002) maintain that the key to farmers' adoption of site-specific farming (and perhaps other technologies) is the profitability of the technology. This means that profitability propels adoption tendency. In support, Muyanga and Jayne (2006) agree that there is a general view that extension services, if well designed and implemented, improve agricultural productivity.

Although the findings above are negative, other factors might be responsible. One of them could be the problem of subsistence farming which exists in the study area, as is common in most developing rural communities. Most farmers only farm for extra food for the home, not for profit, and as such are not too concerned about the long-term gains of adopting improved soil technologies and advice from extension officers. The descriptive statistics also reveal that most farmers are old and inadequately educated. It is a general belief that older and uneducated people or farmers (especially in rural areas) are more closed to change and very conservative, traditional, and less innovative; as such, they may not be too concerned about extension services on soil erosion control (Bembridge, 1991; Ighodaro, 2010).
Impact of Soil Erosion on Food Security

Based on the results, soil erosion relates positively to the age of farmers (p = 0.024, significant at the 5% level) (Table 8). This means that erosion impact increases with age in the study area; in other words, the more older people reside in the area, the greater the problem of erosion. This result seems congruent with the findings of the descriptive statistics of this study. According to those findings, the study area consists mostly of older people; very few of the farmers use extension officers' advice on erosion control regularly; and even extension officers hardly talk about erosion control during their visits to the study area. As such, the problem of erosion is on the rise in the area.

According to the literature survey conducted, age as a factor in adoption decision-making can be positive or negative. For example, a study done in Burkina Faso indicates that age influenced the adoption of sorghum positively (Bonabana-Wabbi, 2002). However, the same study also stated that age has been found to be either negatively correlated with adoption or not significant in farmers' adoption decision-making.

Conclusion

Soil erosion is a major farming problem in any society, especially because food, which is chiefly grown on the soil, is the greatest human need. The literature review of this study indicated that, in South Africa, erosion affects over 70% of the land area, and that the Eastern Cape, which constitutes the study area, is one of the three most degraded provinces in the country. Soil erosion, as it relates to food security and rural livelihoods, should therefore be of great concern to all stakeholders. Nevertheless, the findings of this study are alarming. As obvious as the problems resulting from erosion are in the study area (such as high crop losses and negative impacts on farmers' crops and livestock, as well as on their household feeding), the behaviours of farmers and extension officers, the two key stakeholders with respect to agricultural improvement in rural communities, were found to be negative. Farmers do not adopt extension advice or recommended soil management practices, and the small percentage that do use them arbitrarily or inappropriately. Moreover, extension officers hardly talk about the problem and its control.

On a similar note, farmers' adoption behaviour regarding the control of erosion was positively related to farm yield and accessibility, indicating that if yield and accessibility can be increased, farmers will be willing to adopt extension recommendations for erosion control. Additionally, agricultural extension behaviour relates negatively to farm yield and profitability, which is unexpected. There are, however, certain factors that can be responsible for this; one of them could be the subsistence farming that exists in the area, as is often the case in most developing rural areas. Moreover, erosion impact relates positively to the age of farmers, indicating that areas dominated by older farmers are more prone to soil erosion. This is supported in the literature: older people often tend to be more conservative and resistant to change, and as such find it more difficult to adopt new innovations.

Therefore, if the above behaviours of farmers and agricultural extension officers are allowed to continue, farming, which is the chief source of food security and rural livelihoods in sub-Saharan Africa, is at great risk.
Hence, efforts need to be accelerated to overturn this trend. The following suggestions strive to address this. Firstly, education should be improved, as the level of education was found to be low in the area. Studies have indicated that education is one independent variable that greatly impacts farmers' behaviour. This can be achieved through mass education campaigns or adult education programmes. Similarly, since farm yield and accessibility increase the adoption tendencies of farmers, efforts should be made to increase yield and accessibility. This can be done by providing more incentives and soft loans for farmers, and by providing farmers with more access to land for farming and better roads to their communities. Farmers can also be assisted with a ready market in which to dispose of their farm produce without difficulty. In order to encourage younger people into farming, adequate infrastructural facilities should be ensured in rural areas of South Africa, to limit the rural-urban drift of young people and to motivate them to take up farming as a main source of livelihood.

Furthermore, efforts are needed by the government to train and retrain extension officers in practical and more modern ways of soil erosion control in rural communities. For this reason, extension curricula need to be reviewed and properly tailored towards more practical erosion control programmes. Finally, to assist the efficiency of extension practice, an independent monitoring body is recommended to monitor extension programmes and activities in rural areas.

Table 1. A conceptual framework suitable for the information needs of the study
Table 2. Description of variables used for the research
Table 3. Personal and demographic characteristics of farmers in the study area (according to this table, the study area consists mostly of farmers in the 46-55 and 56-65 age groups; people younger than 35 years, the maximum age of youth in South Africa, make up only 15% (Rural Urban Consultant, 2001); the average age of farmers is approximately 57 years, revealing the problem of an ageing farming population)
Table 4. Use of extension officers' advice and recommendations in the study area
Table 5. Frequency of extension officers' talk on erosion in the study area
Table 6. Percentage of crop loss due to soil erosion impact
Table 7. Impact of soil erosion on livelihoods of farmers
Table 8. Results of variables' relationships in the study area (regression analysis)
2018-12-14T21:26:48.657Z
2016-07-17T00:00:00.000
{ "year": 2016, "sha1": "8307021d8e868abad7047712acd4bf8cadb1bfd5", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/jas/article/download/57441/32934", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "8307021d8e868abad7047712acd4bf8cadb1bfd5", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
59945092
pes2o/s2orc
v3-fos-license
A comparative in vitro study of the osteogenic and adipogenic potential of human dental pulp stem cells, gingival fibroblasts and foreskin fibroblasts

Human teeth contain a variety of mesenchymal stem cell populations that could be used for cell-based regenerative therapies. However, the isolation and potential use of these cells in the clinic require the extraction of functional teeth, a process that may represent a significant barrier to such treatments. Fibroblasts are highly accessible and might represent a viable alternative to dental stem cells. We thus investigated and compared the in vitro differentiation potential of human dental pulp stem cells (hDPSCs), gingival fibroblasts (hGFs) and foreskin fibroblasts (hFFs). These cell populations were cultured in osteogenic and adipogenic differentiation media, followed by Alizarin Red S and Oil Red O staining to visualize cytodifferentiation. Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) was performed to assess the expression of markers specific for stem cells (NANOG, OCT-4), osteogenic (RUNX2, ALP, SP7/OSX) and adipogenic (PPAR-γ2, LPL) differentiation. While fibroblasts are more prone towards adipogenic differentiation, hDPSCs exhibit a higher osteogenic potential. These results indicate that although fibroblasts possess a certain mineralization capability, hDPSCs represent the most appropriate cell population for regenerative purposes involving bone and dental tissues.

MSCs were also isolated from the pulp of exfoliated deciduous teeth, apical papilla, dental follicle, periodontal ligament 9 and periapical cysts 10 . However, these dental-derived MSC populations vary in their expression of stem cell surface markers and in their ability to differentiate into distinctive cell lineages 11 . Human dental pulp stem cells (hDPSCs) have been extensively studied over the past years and constitute very attractive candidates for cell-based regenerative therapies for a variety of reasons: they can be conveniently collected from extracted adult teeth without ethical concerns 8 , they possess immunosuppressive activity 12 and can be safely cryopreserved without affecting their differentiation properties 13 . Consequently, hDPSCs may be isolated from a patient, stored and transplanted autologously at a later date, thereby making allogeneic grafting and immunosuppression redundant 14 . hDPSCs are able to differentiate into odontogenic, osteogenic, chondrogenic, adipogenic, vascular, myogenic and neurogenic lineages 11,15 . Among adipose tissue, bone marrow and dental-derived MSCs, hDPSCs produce the greatest volume of mineralized matrix and therefore have greater potential for future applications in the regeneration of damaged tooth structures or of bone in mandibular defects 16 .

Fibroblasts are ubiquitously distributed in connective tissues and play a major role in the synthesis and secretion of extracellular matrix, as well as in inflammation, wound healing and fibrosis 17 . MSCs and fibroblasts share several common properties in terms of morphology, cell-surface marker and gene expression patterns, as well as differentiation potential 18,19 . Indeed, in vitro studies have shown that fibroblasts are also plastic-adherent and capable of differentiating into bone, fat and cartilage, while expressing all of the MSC surface markers 20 . Additionally, similar to MSCs, fibroblasts are able to suppress mitogenic and allogeneic lymphocyte proliferation 21 .
Because of these properties, fibroblasts hold the potential for clinical application in the treatment of many diseases and constitute a very appealing alternative for regenerative applications due to their high accessibility and availability. Therefore, in this study we assessed the differentiation potential of two human fibroblastic populations, foreskin (hFFs) and gingival (hGFs) fibroblasts, and compared them to hDPSCs by means of in vitro differentiation assays, complemented with expression of specific stem cell, osteogenic and adipogenic marker genes.

Results

In vitro differentiation assays. Osteogenic differentiation assay. Staining with Alizarin Red S allows visualization of extracellular calcium deposits in a bright orange-red colour. Staining revealed that hDPSCs, hGFs and hFFs cultured in control medium (CM) were not able to form mineralized nodules. When cultured for 21 days in the presence of osteogenic medium (OM), hDPSCs formed a dense mineralized plexus (Fig. 1A,D). hFFs also displayed unequally distributed mineralized nodules when cultured in OM (Fig. 1C,F), whereas no mineral deposits were visible in cultures of hGFs with OM (Fig. 1B,E). Quantification of the Alizarin Red staining confirmed the observations, showing significantly higher Alizarin Red staining in hDPSCs when compared to hGFs and hFFs (Fig. 1G), as well as in hFFs compared to hGFs.

Adipogenic differentiation assay. To monitor the adipogenic differentiation progress, cultured cell populations were stained with Oil Red O, which stains lipid droplets in red colour. Staining revealed that after 21 days of culture in adipogenic medium (AM), hGFs and hFFs formed scattered lipid droplets (Fig. 2E,F), while no lipid droplets were observable in cultures of hDPSCs (Fig. 2D). The observation was confirmed by quantification of Oil Red O staining, which displayed significantly higher areas of Oil Red O staining in both hGFs and hFFs compared to hDPSCs (Fig. 2G). Furthermore, cells of the fibroblastic groups (Fig. 2E,F) exhibited a more spherical shape than hDPSCs (Fig. 2D).

Gene expression analysis. The expression levels of each analysed gene within the three groups (i.e., hDPSCs, hGFs and hFFs) are shown individually after different days of incubation in osteogenic and adipogenic medium. The gene expression levels of the samples within each group (cell type) are presented relative to the gene expression level on day one of the respective group (A-C in all panels), as well as normalized for the expression of the gene in hDPSCs on day 0 (before treatment; figure D in all panels).

Gene expression analysis of stem cell markers. We first verified and quantified the expression of genes used as markers for mesenchymal stem cells, namely CD73, CD90 and CD105 6 , in hDPSCs, hGFs and hFFs cultured in control conditions. Expression of these three genes was detected in all three cell populations analysed (Fig. 3). Of notice, hDPSCs and hFFs expressed comparable levels of CD73, CD90 and CD105, while hGFs displayed significantly lower levels of these mRNAs (Fig. 3). We then analysed the expression of NANOG and the Octamer-binding transcription factor 4 (OCT-4), recognized stem cell markers that play pivotal roles in the maintenance of self-renewal and pluripotency [22][23][24][25][26][27] .

NANOG: NANOG was surprisingly upregulated in all cell types upon treatment with differentiation media.
Incubation of hDPSCs in osteogenic conditions led to a significant upregulation of the expression of NANOG already after one week, followed by its progressive downregulation at days 14 and 21 (Fig. 4A). A similar pattern was observed in hFFs, where NANOG expression increased by day 7 of culture in OM and significantly decreased by day 21 (Fig. 4C). hGFs displayed a different response, as they did not upregulate NANOG expression by day 7. However, in these cells we detected an extremely high NANOG expression at day 14, followed by its abrupt downregulation by day 21 (Fig. 4B). Less dramatic fluctuations were detected in the expression of NANOG in all cell types cultured in adipogenic conditions (Fig. 4E-G). hDPSCs and hFFs displayed an upregulation of NANOG expression by day 7, followed by its downregulation by day 21 (Fig. 4E,G), similarly to what was observed upon incubation with OM. Of notice, the upregulation of NANOG in hFFs in AM was much less pronounced than that observed in OM (Fig. 4C,G). hGFs cultured in AM displayed a modest modulation of NANOG expression, characterized by a mild upregulation at day 7 and downregulation at day 14. The observed difference in the levels of NANOG expression in control conditions at day 0, with hFFs expressing lower levels (approx. 50% less) of NANOG compared to hDPSCs and hGFs, does not obviously correlate with the modulation of the expression of this gene throughout osteogenic and adipogenic differentiation (Fig. 4D,H).

OCT-4: OCT4 expression displayed trends grossly similar to those observed for NANOG expression. In hDPSCs, an increase, although variable, in OCT4 expression at day 7 in OM was followed, at the subsequent timepoints, by its generalized maintenance at levels 4 times higher than in control conditions (Fig. 5A). Interestingly, culture in AM led to a strong upregulation of OCT4 expression by day 14, followed by its significant downregulation by day 21 (Fig. 5E). Expression of OCT4 in hFFs showed a trend similar to that observed analysing NANOG expression (Fig. 5C). In this case, however, the modulation of OCT4 expression between the different time points in hFFs was less pronounced (Fig. 5C). Similar to NANOG, OCT4 expression in hGFs cultured in OM displayed a great increase at day 14, followed by a sudden downregulation at day 21 (Fig. 5B). hGFs cultured in AM displayed an opposite but much more modest modulation of OCT4 expression (Fig. 5F). In these cells, OCT4 was mildly upregulated at day 7, downregulated at day 14 and upregulated again at day 21 (Fig. 4B). In contrast to what was observed for NANOG, at the basal level, hFFs displayed a 2-fold higher expression of OCT4, compared to hDPSCs and hGFs (Fig. 5D,H).

Gene expression analysis of osteogenic and odontogenic markers. Runt Related Transcription Factor 2 (RUNX2), alkaline phosphatase (ALP) and osterix (SP7/OSX) are well-established osteogenic markers [28][29][30] . RUNX2 is crucial for the formation of mineralized tissues, and is generally used as a marker of early phases of osteogenic differentiation. ALP is a widely expressed hydrolase enzyme and plays a key role in the mineralization of hard tissues, as the hydrolysis of phosphate esters supplies free phosphate that is required for the creation of hydroxyapatite crystals 31,32 . OSX is encoded by the SP7 gene and is an osteoblast-specific transcription factor 29,33 . DSPP codes for dentin sialophosphoprotein, a key component of the dentin matrix, expressed at highest levels by odontoblasts 34 .
RUNX2: hDPSCs and hFFs incubated with OM exhibited upregulation of RUNX2 expression levels at day 7. In both groups RUNX2 expression decreased at day 14, and increased subsequently in hDPSCs at day 21 (Fig. 6A,C,D). In hGFs, incubation in OM did not lead to any increase in RUNX2 expression, and ultimately led to its significant downregulation at days 14 and 21 (Fig. 6B). At basal levels the expression of RUNX2 was comparable in all groups, while it was surprisingly lower in hDPSCs at day 14 (Fig. 6D). At day 21, RUNX2 expression was significantly higher in hDPSCs and hFFs, compared to hGFs (Fig. 6D).

ALP: An increase in ALP expression was observed in all cell groups cultured in the presence of OM (Fig. 7). In hDPSCs, ALP upregulation was already significant at day 7, and reached a >100-fold increase by day 14, maintained at day 21 (Fig. 7A). The increase was much more modest in hGFs and hFFs, where ALP expression reached a maximum of a 5-fold and 10-fold increase compared to time 0, respectively (Fig. 7B,C). Basal levels of ALP were comparable among all groups, although hGFs and hFFs displayed a mildly higher expression when compared to hDPSCs. From day 14, ALP levels were significantly higher in hDPSCs than in hFFs and hGFs (Fig. 7D).

SP7/OSX: Incubation with OM induced a significant upregulation of SP7 expression already at day 7 in hDPSCs and hGFs (Fig. 8A,B). At day 14, hDPSCs and hGFs showed a massive SP7 upregulation (Fig. 8A,B), while the levels detected in hFFs remained constant (Fig. 8C). At day 21, hDPSCs and hGFs showed a downregulation of SP7 expression, which however remained significantly higher than that observed at T0 (Fig. 8A-C). At basal levels, hDPSCs displayed a slightly lower expression of SP7 (Fig. 8D).

DSPP: DSPP expression was significantly and progressively upregulated in hDPSCs at 7 and 14 days of culture in osteogenic conditions, and it was then downregulated to basal levels by day 21 (Fig. 9A). No significant increase in DSPP expression could be detected in hGFs and hFFs (Fig. 9B,C). On the contrary, DSPP expression became transiently undetectable in hGFs upon incubation in OM (Fig. 9B). At all stages from day 7 to day 21, DSPP levels were significantly higher in hDPSCs compared to both hGFs and hFFs (Fig. 9D).

Gene expression analysis of adipogenic markers. Peroxisome proliferator-activated receptor γ2 (PPAR-γ2) and lipoprotein lipase (LPL) are used as adipogenic differentiation marker genes. PPAR-γ2 is considered a master regulator of adipogenesis 28 , while LPL is involved in lipid transport and provides glycerol and free fatty acids by catalysing the hydrolysis of triglycerides 35 .

PPAR-γ2: PPAR-γ2 expression was upregulated in all groups cultured in the presence of AM (Fig. 10). This upregulation was extremely pronounced in hFFs, where PPAR-γ2 expression reached levels 70-fold higher than at T0 already after 7 days, and remained high throughout the differentiation period, with a decrease at day 14 (Fig. 10C). PPAR-γ2 was also upregulated in hDPSCs and hGFs cultured in AM, albeit to a lower extent. In these two groups, PPAR-γ2 expression increased progressively from day 0 to day 21 (Fig. 10A,B). Interestingly, expression of PPAR-γ2 was significantly higher in fibroblasts compared to hDPSCs already at time 0, with PPAR-γ2 expression in hGFs even >20-fold higher than that detected in hDPSCs (Fig. 10D).

LPL: LPL expression was upregulated in all three groups cultured in AM (Fig. 11). In hDPSCs, LPL expression peaked at day 14, to then decrease at day 21 (Fig. 11A).
In hGFs, the upregulation observed at day 7 was followed by a progressive downregulation at days 14 and 21 (Fig. 11B). hFFs showed an opposite trend, as they displayed a continuous upregulation of LPL expression from day 0 to day 21, with a peak of 400-fold increase at the latest timepoint (Fig. 11C). In contrast to what was observed with PPAR-γ2, the expression levels of LPL before any treatment were comparable among the three groups (Fig. 11D).

Discussion

Carious and periodontal diseases (such as periodontitis), fractures, genetic defects or aging can lead to tooth damage and loss, thus decreasing the quality of life 36 . So far, in dental clinical practice the treatment of choice for replacing missing teeth is osseointegrated dental implants 37 , while the treatment of a carious lesion consists of the removal of the infected hard tissue part and its replacement by composite resins 38 . When the bacterial infection reaches the dental pulp, root canal therapy is the treatment of choice, in which the pulp is substituted by synthetic filling materials 39 . Regenerative dentistry represents an alternative solution for the repair of dental tissues using a variety of techniques and therapeutic approaches 40 . For example, methods used nowadays include injection of stem cells and/or soluble molecules such as growth factors 41 . Therefore, the goal of regenerative dentistry is to achieve partial or complete regeneration of the damaged or missing dental tissues, thus restoring their biological function and structure 40 . Various promising in vivo studies have already demonstrated the potential of human dental pulp stem cells (hDPSCs) for regenerative purposes, such as alveolar and mandibular bone regeneration in patients or the reestablishment of dental pulp and mineralized tissues in dogs 42,43 . However, there is a need for alternatives, since teeth do not always constitute an ideal cell source: the supply of hDPSCs is limited and bound to the extraction of the respective healthy tooth 8 .

Fibroblasts from different organs can be more accessible, as they can be obtained from a plethora of surgical procedures, and have been shown to share many similarities with mesenchymal stem cells (MSCs), such as the multilineage differentiation potential 19 . Fibroblasts from various sources have already been analysed for their regenerative potential. Human gingival fibroblasts (hGFs) are easily obtainable from the gingiva that is often resected during general dental treatments 44 . We have recently shown that both hGFs and hDPSCs are able to attract vessels when seeded into silk fibroin scaffolds, and therefore may improve healing and regeneration of damaged tissues 45 . While there is still little information on the stem cell properties and differentiation potential of hGFs 46 , several in vitro studies have shown that these cells are able to form osteogenic, chondrogenic and adipogenic tissues 47,48 . Human foreskin fibroblasts (hFFs) are accessible after circumcision/biopsies and are also capable of differentiating into bone, cartilage and fat 49 . Several studies have indicated that both gingival and foreskin tissues contain subpopulations of mesenchymal progenitor/stem cells that allow cytodifferentiation into multiple lineages [50][51][52] . However, it is not clear if distinct cell types exist within these two tissues, since hGFs and hFFs are barely distinguishable from MSCs regarding phenotype, expression of specific markers and immunosuppression responses 53,54 .
Cell differentiation follows a shift in gene expression, a process involving the coordinated action of transcription factors, non-coding microRNA, DNA methylation, histone modifications and other chromatin remodeling activities 55,56 . NANOG and OCT-4 are both well-established embryonic stem cell (ESC) markers. They play a major role in the maintenance of the pluripotent state of ESCs and are down-regulated as the cells become more committed 22,27 . Although previous studies suggested that these genes were no longer expressed in adult stem cells 23 , more recent findings showed that their expression persists in MSCs 24 . We detected low levels of NANOG and OCT4 in basal conditions in hDPSCs, hGFs and hFFs, followed by a significant initial upregulation and successive modulation of their expression, mostly during osteogenic differentiation, in all three cell populations. Importantly, the dynamic of the modulation of these genes in the three cell types was very diverse, showing a cell-type-specific regulation of stemness-related genes upon differentiation, thus indicating that the expression of NANOG and OCT-4 is not simply increased or decreased in equal measure as differentiation occurs or subsides. These results apparently contradict previous findings showing down-regulation of these genes during cytodifferentiation 23,57 . However, recent studies suggested that increased OCT-4 expression enhances the ability of MSCs to differentiate into osteogenic and adipogenic lineages 58,59 , possibly priming loci coding for factors fundamental for lineage commitment 60 . MSCs have been shown to differ in the expression of ESC markers depending on the source from which they were obtained 58 . It is likely that the decision whether a cell will differentiate or remain quiescent and self-renew is the result of the interplay with many other transcription factors and pathways, and a question of increased and decreased expression of genes over the course of time rather than a simple switch on or off 22,61 . In this regard, all groups modulated the expression of OCT4 and NANOG both in osteogenic and adipogenic conditions. hGFs displayed a striking synchronized increase in the expression of these two genes after two weeks of incubation in osteogenic medium, a dynamic completely different from that of hDPSCs and hFFs, which might be correlated with the observed differences in differentiation potential. In fact, hDPSCs and hFFs, but not hGFs, were able to form mineralization nodules upon osteogenic induction in vitro. This correlates with the different modulation of osteogenic markers observed in hDPSCs and hFFs on the one hand, and hGFs on the other. RUNX2 is known as a master control gene in osteoblastic differentiation, as it plays a crucial role in the differentiation of MSCs into preosteoblasts 62 . SP7/OSX is involved in the maturation of preosteoblasts into mature osteoblasts 29 . ALP plays an important role in the mineralization process 31 and is often used as a marker for osteogenic differentiation. In osteogenic conditions, both hDPSCs and hFFs showed sustained RUNX2 expression, a moderate peak of SP7/OSX expression at day 14, and progressive upregulation of ALP. hGFs failed to maintain RUNX2 expression and to upregulate ALP. However, hGFs showed a striking peak in SP7/OSX expression at day 14, associated with a major upregulation of OCT4 and NANOG.
Although previous in vitro studies have demonstrated that hGFs are capable of forming mineralized deposits in osteogenic media at concentrations that differ from those used in the present study 46 , the observed expression patterns might be indicative of a cell-specific incapability to pursue the full osteogenic differentiation path. In this regard, it has been shown that SP7/OSX overexpression can induce osteogenic differentiation in murine embryonic stem cells and murine bone marrow stromal cells, but not in fibroblasts 63 . Importantly, hDPSCs were the only cell type upregulating DSPP expression when cultured in osteogenic conditions. This is in accordance with their tissue of origin and their known ability to give rise to odontoblasts 64,65 . The observed dynamic modulation of RUNX2, OSX and ALP also correlates with an odontoblastic differentiation program. During murine tooth development, RUNX2 and OSX are highly expressed in immature odontoblasts, while RUNX2 is downregulated upon terminal differentiation 66 . ALP, on the contrary, is expressed at high levels also in mature odontoblasts 67 . These observations suggest that hDPSCs cultured in osteogenic conditions actually show an odontoblastic-like differentiation dynamic.

The expression of PPAR-γ2, a transcription factor essential in the formation of adipocytes 28 , and LPL, which is expressed in preadipocytes and plays a crucial role in lipid metabolism and the concentration of triglycerides 35 , was analyzed in order to assess the adipogenic differentiation potential of the three cell populations. In adipogenic culture conditions, significant upregulation of PPAR-γ2 and LPL expression was observed in all experimental groups already at early time points. This upregulation was particularly pronounced in hFFs. Histological staining with Oil Red O revealed lipid droplets in hFFs and hGFs, but not in hDPSCs. Similarly, only hFFs and hGFs showed a shift from fibroblastic/spindle to spherical cell shape, which represents a clear sign of adipogenic differentiation 28,35,68 . LPL expression was significantly higher in hFFs, but not in hGFs, when compared to hDPSCs, despite the clearly higher adipogenic potential of both these fibroblastic populations (as indicated by Oil Red O stainings). Fibroblasts expressed significantly higher levels of PPAR-γ2 already in basal conditions, with hGFs expressing over 20-fold more PPAR-γ2 than hDPSCs. This higher expression was then maintained throughout the differentiation period. These results thus indicate that hFFs and hGFs possess a significantly higher adipogenic potential compared to hDPSCs. Nevertheless, in vitro differentiation assays do not constitute a physiological environment, and it is therefore not clear whether the observed changes during cytodifferentiation are merely caused by a temporary up-regulation of tissue-specific genes in response to artificial in vitro concentrations of substances, and whether they can truly be translated to an in vivo situation 18 .

In conclusion, the present findings support the idea of using fibroblasts for regenerative purposes based on their multilineage differentiation potential. Both hGFs and hFFs contain multipotent progenitors that are able to form osteogenic and adipogenic tissues, and are more prone towards adipogenic differentiation when compared to hDPSCs. However, hDPSCs might represent a more appropriate cell population for regenerative purposes involving bone and dental tissues.

Materials and Methods

Collection of human cells.
The procedure for anonymized human dental pulp stem cell (hDPSC) and human gingival fibroblast (hGF) collection at the Zentrum für Zahnmedizin, Zürich, was approved by the Kantonale Ethikkommission of Zurich (reference number 2012-0588), and the patients gave their written informed consent. All procedures were performed according to the current guidelines. All surgical procedures and tooth extractions were performed by professional surgeons and dentists. Human foreskin fibroblasts (hFFs) were purchased from ATCC (ATCC, Manassas VA, USA). Human dental pulp stem cells (hDPSCs) were isolated from the dental pulp of extracted wisdom teeth of healthy patients as previously described 45 . The dental pulps were enzymatically digested for one hour at 37 °C in a solution of collagenase (3 mg/mL; Life Technologies Europe BV, Zug ZG, Switzerland) and dispase (4 mg/mL; Sigma-Aldrich Chemie GmbH, Buchs SG, Switzerland). After washing away the enzyme solution, a filtered single-cell suspension was plated in a 40 mm Petri dish with hDPSC growth medium containing DMEM/F12 (Sigma-Aldrich Chemie GmbH, Buchs SG, Switzerland) with 10% fetal bovine serum (FBS) (PAN Biotech GmbH, Aidenbach, Germany), 1% penicillin/streptomycin (P/S) (Sigma-Aldrich Chemie GmbH, Buchs SG, Switzerland), 1% L-glutamine (Sigma-Aldrich Chemie GmbH, Buchs SG, Switzerland), and 0.5 μg/ml fungizone (Life Technologies Europe BV, Zug ZG, Switzerland). Cells were passaged at 80-90% confluence and expanded in the same growth medium. Gingival fibroblasts (hGFs) were isolated from healthy parts of gingiva collected from biopsies, as previously described 45 , and cultured in DMEM/F12 (Life Technologies, Switzerland) supplemented with 10% Foetal Bovine Serum (FBS, Bioswisstech AG, Switzerland), 100 U/ml penicillin/streptomycin (Sigma-Aldrich/Merck, Darmstadt, Germany), and Amphotericin B 0.25 μg/μL (ThermoFisher Scientific, Switzerland), incubated at 37 °C in 5% CO2. The medium was replaced every second day. Cells were passaged once a confluence of 70-80% was reached. Cells were washed once with phosphate buffered saline (PBS) before trypsin was added for 3 min at 37 °C for their detachment. Trypsin was blocked by the addition of 5 volumes of DMEM/F12 supplemented with 10% FBS. The cells were then centrifuged and seeded into T25 flasks (Sarstedt AG, Switzerland) for the differentiation assays. 20'000 cells per well were seeded onto 24-well plates for histological staining, while for gene expression analysis 250'000 cells were seeded onto T25 plates. The osteogenic differentiation medium consisted of DMEM supplemented with Ascorbic Acid (200 μM), β-Glycerolphosphate (10 mM), Dexamethasone (10 nM) (Sigma-Aldrich/Merck, Darmstadt, Germany), and Amphotericin B 0.25 μg/μL (ThermoFisher Scientific, Switzerland). The adipogenic differentiation medium consisted of DMEM (1 ml) supplemented with Dexamethasone (1 μM), IBMX (0.5 mM), Indomethacin (200 μM), Insulin (10 μM) (Sigma-Aldrich/Merck, Darmstadt, Germany) and Amphotericin B 0.25 μg/μL. Cells were cultured for 21 days in osteogenic medium (OM) and adipogenic medium (AM). Cells were collected from the T25 flasks on day 0 (plating day), 7, 14 and 21 and used for RNA extraction. Cells cultured on 24-well plates were cultured for 21 days, stained (see following paragraph) and examined under a bright-field microscope.

Stainings. Alizarin Red S staining was performed to identify extracellular calcium deposits of cells differentiated into osteoblasts. Alizarin Red S powder was dissolved in distilled water, pH 4.2.
Cells were washed with PBS, fixed with 4% PFA for 30 min and washed with distilled water; finally, Alizarin Red S staining solution was added to each well for 45 min at room temperature in the dark. Thereafter, wells were washed with deionized water and then PBS was added. The cells were viewed under a bright-field microscope, where calcium deposits exhibited a bright orange-red color. Oil Red O staining was performed to identify lipids in cells differentiated into adipocytes. 300 mg of Oil Red O powder were added to 100 ml of 99% isopropanol, then mixed with deionized water and filtered through a funnel. Cells were washed with PBS, fixed with 4% PFA for 30 min and washed again with deionized water; 60% isopropanol was added for 2-5 min, and after isopropanol aspiration Oil Red O was added for 5 min. Thereafter, the wells were rinsed with tap water, a hematoxylin counterstain was performed for 1 min, and cells were rinsed with warm tap water. The cells were viewed under a bright-field microscope, where lipids exhibited a red color while the nuclei of cells were blue. Stainings were quantified by measuring the proportion of Alizarin Red- and Oil Red O-positive area over the total area imaged, using Fiji 69 . Three independent samples were analysed for each cell type.

Gene expression analysis. Collection of cells and snap-freezing. Cells were collected by trypsinization at days 0, 7, 14 and 21, snap-frozen in liquid nitrogen and stored at −80 °C.

RNA isolation and purification. RNA was isolated from the snap-frozen cells of the differentiation assays using the RNeasy Plus Universal Mini Kit according to the manufacturer's instructions (Qiagen AG, Hombrechtikon ZH, Switzerland).

cDNA synthesis. Reverse transcription of the isolated RNA was performed using the iScript™ cDNA Synthesis Kit according to the manufacturer's instructions (Bio-Rad Laboratories AG, Cressier FR, Switzerland). Briefly, 1000 ng of RNA were used for reverse transcription into cDNA. Nuclease-free water was added to reach a total volume of 15 μl. 4 μl of 5x iScript reaction mix and 1 μl of iScript reverse transcriptase were added per sample in order to obtain a total volume of 20 μl. The reaction mix was then incubated for 5 min at 25 °C, for 30 min at 42 °C and for 5 min at 85 °C using a Biometra TPersonal Thermocycler (Biometra AG, Göttingen, Germany). Expression levels were calculated by the comparative ΔΔCt method (2^(−ΔΔCt) formula), after being normalized to the Ct value of the GAPDH housekeeping gene. Gene expression analysis was performed on 6 independent samples per condition. Samples were always compared one-vs-one using the Mann-Whitney U/Wilcoxon rank-sum test (GraphPad Prism 8.0).

Ethical approval and informed consent. The procedure for anonymized human dental pulp stem cell (hDPSC) and human gingival fibroblast (hGF) collection at the Zentrum für Zahnmedizin, Zürich, was approved by the Kantonale Ethikkommission of Zurich (reference number 2012-0588; confirmed 2017-00932), and the patients gave their written informed consent. All procedures were performed according to the current guidelines. All surgical procedures and tooth extractions were performed by professional surgeons and dentists. Human foreskin fibroblasts (hFFs) were purchased from ATCC (ATCC, Manassas VA, USA).

Data Availability. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
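The comparative ΔΔCt calculation described in the gene expression methods above is easy to make concrete. The following Python sketch computes relative expression as 2^(−ΔΔCt) for one target gene normalized to GAPDH and referenced to an untreated sample; the Ct values are invented placeholders for illustration, not data from this study.

```python
# Hedged sketch of the comparative ddCt method (2^-ddCt) used above.
# All Ct values below are invented placeholders, not data from the study.

def fold_change(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Relative expression of a target gene versus a reference sample,
    normalized to the GAPDH housekeeping gene."""
    d_ct = ct_target - ct_gapdh              # normalize treated sample to GAPDH
    d_ct_ref = ct_target_ref - ct_gapdh_ref  # normalize reference sample to GAPDH
    dd_ct = d_ct - d_ct_ref
    return 2 ** (-dd_ct)

# Example: a target gene after 14 days in differentiation medium vs. day 0.
print(fold_change(ct_target=22.1, ct_gapdh=18.0,
                  ct_target_ref=29.0, ct_gapdh_ref=18.2))
# ddCt = -6.7, so 2^6.7 ~ 104-fold: the order of magnitude of the
# >100-fold ALP increase reported for hDPSCs in the Results.
```

The same function applied per sample, followed by a Mann-Whitney U test between groups, mirrors the one-vs-one comparisons described for the six independent samples per condition.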
2019-02-12T15:01:57.679Z
2019-02-11T00:00:00.000
{ "year": 2019, "sha1": "38df01f196724e12fdd5d0f3453170d4f2e03fd7", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-37981-x.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "38df01f196724e12fdd5d0f3453170d4f2e03fd7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
15089346
pes2o/s2orc
v3-fos-license
Finite and countable infinite products of Probabilistic Normed Spaces

In this work we first give for PN spaces results parallel to those obtained by Egbert for the product of PM spaces, and generalize results by Alsina and Schweizer in order to study non-trivial products and the product of m-transforms of several PN spaces. In addition we present a detailed study of α-simple product PN spaces and, finally, the product topologies in PN spaces which are products of countable families of PN spaces.

INTRODUCTION

We assume that the reader is acquainted with the basic notions of the theory of PN spaces. These, as well as terms and concepts not defined in the body of this paper, may be found in [1,2,3,5,6].

DEFINITION 1. A probabilistic metric space (henceforth and briefly, a PM space) is a triple (S, F, τ) where S is a nonempty set (whose elements are the points of the space), F is a function from S × S into Δ+, τ is a triangle function, and the following conditions are satisfied for all p, q, r in S:
• F(p, q) = ε_0 if, and only if, p = q;
• F(p, q) = F(q, p);
• F(p, r) ≥ τ(F(p, q), F(q, r)).

DEFINITION 2. A probabilistic normed space (briefly, a PN space) is a quadruple (V, ν, τ, τ*), where V is a real vector space, τ and τ* are triangle functions, and the map ν : V → Δ+ satisfies, for all p, q in V:
(N1) ν_p = ε_0 if, and only if, p = θ, where θ is the null vector of V;
(N2) ν_{−p} = ν_p;
(N3) ν_{p+q} ≥ τ(ν_p, ν_q);
(N4) ν_p ≤ τ*(ν_{αp}, ν_{(1−α)p}) for every α ∈ [0, 1].

If, instead of (N1), we only have ν_θ = ε_0, then we shall speak of a Probabilistic Pseudo Normed Space, briefly a PPN space. If the inequality (N4) is replaced by the equality ν_p = τ_M(ν_{αp}, ν_{(1−α)p}), then the PN space is called a Šerstnev space and, as a consequence, a condition stronger than (N2) holds, namely

ν_{λp} = ν_p ∘ (j/|λ|), i.e. ν_{λp}(x) = ν_p(x/|λ|), for every p ∈ V, every λ ≠ 0 and every x > 0.

Here j is the identity map on R, i.e. j(x) := x (x ∈ R). A Šerstnev space is denoted by (V, ν, τ).

There is a natural topology in a PN space (V, ν, τ, τ*), called the strong topology; it is defined, for t > 0, by the neighbourhoods

N_p(t) := {q ∈ V : ν_{q−p}(t) > 1 − t}.

By setting F ≤ G whenever F(x) ≤ G(x) for every x ∈ R+ and F, G ∈ Δ+, one introduces a natural ordering in Δ+.

DEFINITION 3. Let (V, ‖·‖) be a normed space and let G ∈ Δ+ be different from ε_0 and ε_{+∞}; define ν : V → Δ+ by ν_θ = ε_0 and

ν_p := G(j/‖p‖^α) for p ≠ θ,

where α > 0 and α ≠ 1. Then the pair (V, ν) will be called the α-simple space generated by (V, ‖·‖) and by G.

DEFINITION 4. Let τ1, τ2 be two triangle functions. Then τ1 dominates τ2, and we write τ1 ≫ τ2, if, for all F, G, H, K in Δ+,

τ1(τ2(F, G), τ2(H, K)) ≥ τ2(τ1(F, H), τ1(G, K)).

Notice that since τ1 is associative one has τ1 ≫ τ1, so that "dominates" is reflexive, but its transitivity is still an open question.

DEFINITION 7. Let τ be a triangle function and let m belong to M_b. Then m is said to be τ-superadditive if, for all F, G in Δ+,

τ(F, G) ∘ m ≥ τ(F ∘ m, G ∘ m).

The function F_m := F ∘ m is called the m-transform of F.

Theorem 1. Let T be a continuous t-norm and let m ∈ M_b. Then m is τ_T-superadditive if and only if m is superadditive (see [2], Theorem 3).

Finite τ-Products of PN spaces

In this section we give our definition of a τ-product of two probabilistic normed spaces, which is a generalization of a parallel result by Egbert about the τ-product of PM spaces. Moreover we show a necessary and sufficient condition for the τ-product of two Šerstnev PN spaces to be a Šerstnev space, as well as a sufficient condition for the τ-product of two Menger PN spaces to be also a PN space of Menger. The proof of most theorems is omitted since it is just a matter of straightforward verification of the statements.

DEFINITION 8. Let (V1, ν1, τ, τ*) and (V2, ν2, τ, τ*) be two PN spaces under the same triangle functions τ and τ*. Let τ1 be a triangle function. Their τ1-product is the quadruple (V1 × V2, ν, τ, τ*), where ν : V1 × V2 → Δ+ is a probabilistic seminorm defined by

ν(p, q) := τ1(ν1(p), ν2(q))

for any (p, q) ∈ V1 × V2.

One may wonder whether the PM space associated with the τ1-product of two PN spaces characterized in Theorem 2 coincides with the τ1-product of corresponding PM spaces.
The following theorem gives an answer in the affirmative to this question.

Theorem 3. Let (V1, ν1, τ, τ*) and (V2, ν2, τ, τ*) be two PN spaces under the same triangle functions τ and τ*, and let the triangle function τ1 be such that τ* ≫ τ1 (cf. [12], p. 211, Theorem 12.7.8).

But in principle, if one has two simple PN spaces (V1, ‖·‖1, G, M) and (V2, ‖·‖2, G, M) with the same d.d.f. G, its M-product is not necessarily a PN space, because the assumption τ_{M*} ≫ M of Theorem 1 fails here. Now then, when one replaces τ_{M*} by M in these simple PN spaces we obtain the PN spaces (V1, ‖·‖1, G, M) and (V2, ‖·‖2, G, M), respectively, as is easily checked. Both of them are Šerstnev. Can the M-product of these be a simple PN space? The following theorem answers that question in the affirmative. Let (V1, ‖·‖1, G, M), (V2, ‖·‖2, G, M) and ‖·‖3 be the two above-mentioned PN spaces and the norm defined on V1 × V2.

Can a τ-product of two simple PN spaces with the same generator function G be a simple PN space, also with G as generator? The following theorem answers this question in the affirmative. Let (V1, ν1, τ) and (V2, ν2, τ) be two Šerstnev spaces under the same triangle function τ. Let us assume that τ1 is a triangle function such that τ1 ≫ τ; then their τ1-product is also a Šerstnev space if, and only if, τ1 ≫ τ_M and τ_M ≫ τ1.

Proof: By Theorem 1 the τ1-product of the two Šerstnev spaces exists. Now, since both PN spaces are Šerstnev, one has, for all α ∈ [0, 1] and for all p, the corresponding Šerstnev equalities; this implies τ1 ≫ τ_M and τ_M ≫ τ1.

Corollary 1. Under the same assumptions as in Theorem

Now, the τ1-product of two Menger spaces can again be a Menger space. A sufficient condition is provided by the following theorem.

Theorem 7. Let (V1, ν1, T) and (V2, ν2, T) be two Menger PN spaces. If T0 is a left-continuous t-norm that satisfies the conditions T* ≫ T0 and T0 ≫ T, then the τ_{T0}-product is a Menger PN space under T.

Proof: It suffices to apply Lemma 12.7.3 in [12] and Theorem 1. ✷

Example 3. It is known that for every t-norm T one has M ≫ T (this is a result due to R. Tardiff), and it is easily checked that T* ≫ M*. Then the τ_M-product of two Menger PN spaces is also a Menger PN space.

In the next section we study non-trivial products.

Countable τ-Products of PN spaces

We shall need some preliminaries before stating the main results of this section. Let (V, ‖·‖) be a normed space and let α > 1. If the d.f. G ∈ D+ is continuous and strictly increasing, then ([6], Section 3) (V, ‖·‖, G; α) is a Menger PN space under the strict t-norm T_G, defined for all x, y in [0, +∞].

One is now ready to state the main results of this section. The following one is the analogue of Theorem 12.7.9 in [12] and shows the relevance of the t-norm T_G in countable τ-products. But in order to state it, one first gives a preliminary definition.

Proof: It suffices to notice that the mapping in Definition 9 is a norm for all β ∈ ]0, +∞[. ✷

The space (V1 × V2, ‖·‖_β, G; α) with β = α/(α−1) is α-simple: it suffices to show that ν(p, q) = G(j/‖(p, q)‖_β^α), where j is the identity map on R.

The analogous theorem for the case α ∈ ]0, 1[ is an open problem (see [6], Section 3). We know that if α > 1 there exist normed spaces (V, ‖·‖) with the following property: "if G ∈ Δ+ is continuous and strictly increasing, then the t-norm T_G is the strongest continuous t-norm under which (V, ‖·‖, G; α) is a Menger PN space" (see [6], Theorem 3.3).
However, a new phenomenon arises in the case of product PN spaces, for contrary to the above, in this case the t-norm T_G is not the strongest continuous t-norm under which (V1 × V2, ‖·‖_β, G; α) is a Menger PN space, as is easily checked. Such a phenomenon is today an open problem.

In the sequel we study a special kind of probabilistic norms on the countable product of a family of PN spaces.

(N3) Since (V, ν, τ_T, T) is a PN space and m is superadditive, for any p, q in V one has

ν_{p+q} ∘ m ≥ τ_T(ν_p, ν_q) ∘ m ≥ τ_T(ν_p ∘ m, ν_q ∘ m).

(N4) For every α ∈ [0, 1], for every p ∈ V and for every x ∈ R+, whence for every α ∈ [0, 1] and for every p ∈ V.

Corollary 2. Let T1, T2 be two t-norms such that T1 ≤ T2; then the m-transform of any one of the PN spaces (V, ν, τ_{T1}, τ_{T2}) is a PN space under τ_{T1} and T2.

Proof: If T1 ≤ T2 one has τ_{T1} ≤ τ_{T2}, and now it suffices to apply to axiom (N4) the well-known inequality τ_T ≤ T for any t-norm T (see [12]).

Now let us recall some conventions and results about infinite τ-products. Since the τ_T operations are associative, for any sequence F_i ∈ Δ+ the n-fold τ_T-product τ_T^n(F_1, . . . , F_{n+1}) is well defined for each n as the serial iterate of τ_T, defined recursively via

τ_T^n(F_1, . . . , F_{n+1}) = τ_T(τ_T^{n−1}(F_1, . . . , F_n), F_{n+1}).

If T is a continuous t-norm, then it is well known that, where the supremum is taken with respect to all sequences {x_n} of positive numbers such that Σ_n x_n = x. Let {(V_i, ν_i, τ_i, τ_i*) | i ∈ N} be a countable family of proper PN spaces, i.e., PN spaces with a continuous triangle function τ that satisfies τ(ε_s, ε_t) ≥ ε_{s+t}. The function τ_T is one of these. Let {b_i} be an infinite sequence of positive numbers such that the series Σ_i b_i converges.

Proof: The proof in [2] only needs to be supplemented by the new notation of PN spaces with respect to the PM spaces, as follows: For any positive integer n, let

According to the result in Lemma 4 about the non-trivial limit of the infinite τ_T-product, one has that G_{αp} and G_{(1−α)p} are in D+ for every α ∈ [0, 1].

Example 4. (A particular countable, but finite, τ-product.) Let (V, G) be the product of Definition 8 for i = 1, 2; then (V1 × V2, τ_T(ν1, ν2), τ_T, τ_{T*}) is a PN space. To prove this it suffices to check axiom (iv): by Theorem 9, and since T ≫ τ_T for every t-norm T (see [12]), one has

One now has the following question: is the product (V, G) a PN space under τ_T and τ_{T*}? First of all, can any member of the family (V_i, ν_i ∘ m_i, τ_T, τ_{T*}) be a PN space? We answer that question in the negative, because of axiom (iv). Axiom (iii) works with m superadditive, but axiom (iv) needs m to be subadditive. The same function m must appear in the second and third terms of the corresponding chain of inequalities, so m would have to be τ_{T*}-subadditive, whatever convergence factor one introduces in τ_{T*}, which is absurd.

In order to make the reader's task easier, the following result on PN spaces, needed in the last section of this paper, is recalled. In order to simplify the notation we replace henceforth ν_p^Σ by ν̄_p. Let {(V_i, ν_i, τ_i, τ_i*) | i ∈ N} be a countable family of PN spaces and let τ_i ≥ τ_W and τ_i* ≤ τ_W* for all i ∈ N; then the Σ-product of this family, denoted by (V, ν^Σ), is a PN space. The proof is similar to the one in Alsina [2]. It suffices to apply Lemma 1 and Lemma 2.

Product topology for countable τ-products

In this section we want to show that the product topology and the strong topology in countable infinite τ-products are not equal. Let us recall that this is what happens with the same type of τ-products of PM spaces.

Theorem 12.
Let each of the PN spaces (V_i, ν_i, τ_i, τ_i*) be endowed with the strong topology corresponding to ν_i, i ∈ N, and Δ+ with the topology of weak convergence. Then the product topology is weaker than the strong topology in (V, G).

Proof: Let U be a standard neighborhood in the product topology. Choose ε = min{ε_1, ε_2, . . . , ε_n} and let q̂ ∈ N_p̂(ε). Then, since G_{p̂q̂} ≤ ν^i_{q_i − p_i} for all i ∈ N, one has that q̂ ∈ U.

In general, the two topologies are not equal. For, if this were the case, given N_p̂(ε) there would exist a product neighborhood U contained in it, which implies that V_i = N_{p_i}(ε) for all i > m, a very strong condition.

Product topology for Σ-products

Contrary to what happens with countable infinite τ-products, in Σ-products the two topologies are equal. Let us recall that if (V, ν, τ_W) is a PM space with τ_M uniformly continuous, then for the strong neighborhoods N_p(t), p ∈ V, t > 0, the following statements hold:
• If q ∈ N_p(t), there exists a t′ > 0 such that N_q(t′) ⊂ N_p(t);
• If p ≠ q, there exists a t > 0 such that N_p(t) ∩ N_q(t) = ∅.

The proof of the following theorem is similar to that of Theorem 1.4 in [3]; only small changes in the notation are needed.

Theorem 13. Let {(V_i, ν_i, τ_i, τ_{W*}) | i ∈ N} and (V, ν^Σ) be as in Theorem 8. Let each V_i be endowed with the strong topology induced by ν_i. Then the strong topology on V induced by ν^Σ is the product topology.

The reason for the difference between Theorems 12 and 13 is easily understood if one pays attention to the probabilistic interpretation of the ε-neighborhoods in the respective product spaces: if N_p̂(ε) is a neighborhood in the τ-product and q̂ ∈ N_p̂, then, with probability greater than 1 − ε, all the components p_i of p̂ are at a distance (the one associated with the norm in V_i) less than ε from the corresponding q_i. On the other hand, if N_p̂(ε) is a neighborhood in the Σ-product, then q̂ ∈ N_p̂ implies that, with probability greater than 1 − ε, at least one of the components p_i of p̂ is at a distance less than ε from the corresponding q_i.
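To give a concrete feel for the triangle-function computations on which the products above are built, here is a hedged numerical sketch in Python. It approximates τ_T(F, G)(x) = sup over u + v = x of T(F(u), G(v)) on a grid, for T = min (so τ_T = τ_M), applied to two exponential distance distribution functions; the grid, the particular choice of F and G, and the discretization are illustrative assumptions, not part of the paper.

```python
# Hedged numerical sketch of the triangle function tau_T for T = min:
# tau_M(F, G)(x) = sup_{u+v=x} min(F(u), G(v)), approximated on a grid.
# F, G and the grid are illustrative assumptions.
import numpy as np

xs = np.linspace(0.0, 10.0, 501)           # evaluation grid on [0, 10]

def F(u):                                   # an exponential d.d.f. (assumption)
    return np.where(u > 0, 1.0 - np.exp(-u), 0.0)

def G(v):                                   # another d.d.f. (assumption)
    return np.where(v > 0, 1.0 - np.exp(-2.0 * v), 0.0)

def tau_M(F, G, x, steps=400):
    """Grid approximation of sup over u+v=x of min(F(u), G(v))."""
    u = np.linspace(0.0, x, steps)
    return np.max(np.minimum(F(u), G(x - u)))

product = np.array([tau_M(F, G, x) for x in xs])
# tau_M(F, G) is again a d.d.f. and lies below both F and G pointwise,
# since F and G are nondecreasing; this is the probabilistic norm of a
# pair (p, q) in the tau_M-product of Definition 8, with F = nu1(p), G = nu2(q).
print(product[::100])
```

The same scheme, with min replaced by another continuous t-norm T, approximates a general τ_T, and iterating it realizes the n-fold serial products used in the countable case.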
2014-10-01T00:00:00.000Z
2004-04-08T00:00:00.000
{ "year": 2004, "sha1": "b59a801b2edce435190afed526f0a4764b8ed438", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b59a801b2edce435190afed526f0a4764b8ed438", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
220855307
pes2o/s2orc
v3-fos-license
LIFESTYLE AND QUALITY OF LIFE IN WORKING-AGE PEOPLE AFTER STROKE

SUMMARY

Recommendations for changing one's lifestyle with respect to factors that increase the risk of another stroke are often included in the plan of caring for patients after stroke. The style of life is connected to the quality of life and can be formed not only by socialization but also by conscious work on its health-promoting aspect. Lifestyle is a unique configuration of everyday behavior, depending mostly on the quality of life available. The aim of the research was to identify the correlation between lifestyle and quality of life in people of working age after stroke. There were 279 patients after first-ever ischemic or hemorrhagic stroke, including 131 women and 148 men. The abbreviated version of the World Health Organization Quality of Life questionnaire and the Sickness Impact Profile scale were used to examine the quality of life. For the assessment of lifestyle, the following indicators were created: lifestyle before stroke and lifestyle after stroke. A less healthy lifestyle before stroke resulted in lower quality of life in the psychological and environmental spheres of life in these people after stroke, especially in those having suffered stroke six months to two years earlier. A better quality of life in people after stroke was found to be connected to a pro-health lifestyle.

Introduction

In recent years, the concept of lifestyle has become more important in the context of stroke prevention. Polish people often associate a healthy lifestyle with high demands, with barriers that people have to overcome in order to lead this kind of lifestyle. This may give rise to a defensive attitude, and instead of changing their way of thinking and acting, people might close off and become unwilling to listen to arguments and take up pro-health actions. A healthy lifestyle, as understood by the people who were examined, mainly refers to two areas: physical activity and nutrition. Drugs, time management and susceptibility to stress are complementary, but not key, to building a healthy lifestyle 1 . David Mechanic found that the concept of lifestyle refers, among other things, to attitudes and health convictions, risky behavior, habits, healthy behavior, and preventive behavior 2 . The terminology of the quality of life is not yet settled; therefore, there are many equivalent definitions and terms, and their scope often differs. Besides the term quality of life, other terms are also used, i.e. conditions of life, standard of living, pace of life, way of life, or lifestyle 3 . Quality of life is an interdisciplinary concept, which has many meanings. Many different researchers deal with it: social scholars such as sociologists, philosophers, economists and statisticians, to whom the state of health is of minor importance, as well as scholars from the fields of medicine and psychology, to whom the quality of life depends on one's health. Health is understood as a positive state that can be assessed in a subjective way 4 . The quality of life is one of the most important Thematic Panels of the Poland 2020 National Foresight Program within the Research Field of Balanced Development of Poland 5,6 . People's health largely depends on the pro-health behavior that constitutes their lifestyle. This behavior may have positive or negative health results 7 . The quality of life of people having suffered stroke largely depends on the patient's general condition, as well as on their lifestyle before the disease 3 .
The pro-health possibilities available to present-day people (longer life, technological progress in medicine, improved hygiene, or the discovery of antibiotics) make it possible to live a satisfying life, but they may also present dangers, e.g., the many diseases that result from prolonged life expectancy or from unhealthy behavior and that largely reduce the quality of life 8 . Recommendations for changing lifestyle with respect to factors that increase the risk of another stroke are often given in the plan of caring for patients after stroke. Lifestyle is connected to the quality of life, and it is not shaped by oneself alone; it can also be the result of conscious decisions of people who want to improve themselves and society. Lifestyle is a unique configuration of everyday behavior, which largely depends on the quality of life available. Lower quality of life is generated by chronic diseases, and stroke is undoubtedly one of them. Chronic diseases are a source of negative emotions, and this may influence the person's lifestyle 3 . At the same time, people after stroke should be educated and reminded that lifestyle is decisive for their quality of life 9,10 . It is also important for a person after stroke to maintain their health through a consistent pro-health lifestyle 11 .

The aim of the research was to assess the connection between lifestyle and the profile of the disease's influence (Sickness Impact Profile) in people of working age, depending on the time that has elapsed since stroke. Answers to the following questions were sought during the research: 1. What is the connection between the lifestyle of people after stroke and the quality of their life measured by the abbreviated version of the World Health Organization Quality of Life Questionnaire (WHOQOL-BREF), depending on the time that has elapsed since stroke? 2. Is there a connection, and what kind of connection, between the pro-health lifestyle of people after stroke and the quality of life measured by the disease influence (Sickness Impact Profile), depending on the time that has passed since stroke?

Material and Methods

In the research, a diagnostic survey method was used, along with closed questions about health behavior set by the authors as a tool to assess lifestyle. The abridged Polish version of the WHOQOL-BREF and the Sickness Impact Profile (SIP) questionnaire on the influence of the disease were used to assess the quality of life. For analysis, an indicator of lifestyle before and after stroke was created, based on eight questions from the authors' questionnaire. The questions referred to pro-health behavior before and after stroke, with five-point response scales. Low scores implied positive behavior and higher scores indicated more negative forms of behavior. The research was carried out in 2015 and approved by the Bioethical Commission, Regional Medical Chamber in Cracow (86/KBL/OIL/2013). The study group consisted of 279 patients admitted to the rehabilitation wards of three neighboring hospitals in Podhale, Poland. The patients had suffered ischemic stroke or hemorrhagic stroke. There were 131 women and 148 men, mean age 57.4 years. Taking into consideration the time that had elapsed since falling ill, the study subjects were divided into three groups: group 1 (n=103), 6-12 months; group 2 (n=60), 13 months to 2 years; and group 3 (n=116), 2-5 years. Statistical analysis of the results involved comparison of data between the groups of study subjects, depending on the type of variables.
For quantitative variables that were at least ordinal in nature, the existence, strength, and direction of a linear relationship were tested using Pearson's correlation coefficient (quantitative variables) or Spearman's rank correlation (ordinal variables). In all tests, the level of significance was set at 0.05; p-values above this threshold were considered statistically nonsignificant.

Results
Analysis of the correlation between lifestyle before stroke and quality of life assessed with the WHOQOL-BREF questionnaire yielded statistically significant results. The correlation between lifestyle before stroke and the assessment of overall quality of life (Bref1) was statistically significant in the groups of patients at 6-12 months (p=0.015) and 13-24 months (p=0.023) after stroke. Negative correlation coefficients between the lifestyle indicator (with reverse coding) and one's own sense of quality of life (Bref1) pointed to an inverse relationship: quality of life was higher when the person had led a healthier lifestyle before the stroke occurred. In the group of patients examined at 25-60 months after stroke, no significant correlations were noted between lifestyle before stroke and patients' assessment of their quality of life. The correlation between lifestyle before stroke and satisfaction with one's own health (Bref2) was statistically significant only in the group examined at 6-12 months after stroke (p=0.036). Likewise, the correlation between the assessment of one's health (Bref2) at the time of examination and lifestyle after stroke was significant (p=0.010) only in the group examined at 6-12 months after stroke. Negative correlation coefficients between the lifestyle indicators (with reverse coding) and one's own assessment of quality of life again pointed to an inverse relationship, indicating that a better assessment of one's health accompanied more pro-health behavior before stroke. This was not the case for the influence of lifestyle before and after stroke on the assessment of one's health (Bref2) in patients with more than one year elapsed since stroke (Table 1).

Connection of lifestyle and quality of life in its different fields
In the somatic field (HEALTHCARE CENTRE1), statistically significant correlations were recorded in patients at 6-12 months after stroke: between lifestyle before stroke and HEALTHCARE CENTRE1 at p=0.003, and between lifestyle after stroke and HEALTHCARE CENTRE1 at p=0.015. The correlations were negative. Considering the reverse coding of the lifestyle indicator before and after stroke, the less favorable the lifestyle was before stroke, the worse was the quality of life in the somatic field, and the less favorable the lifestyle was after stroke, the worse was the quality of life in this field. In patients at 13-24 months after stroke, the negative correlation between lifestyle after stroke and the somatic field was on the borderline of statistical significance (p=0.055); at the present stage of research, a correlation between these variables therefore cannot be excluded (Table 2).
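For illustration, the group-wise correlation analysis described above could be run as follows; this is a sketch only, with hypothetical column names and data file.

```python
# Illustrative sketch of the correlation analysis described above.
# Column names and the data file are hypothetical; the original dataset is not public.
import pandas as pd
from scipy import stats

df = pd.read_csv("stroke_qol.csv")  # hypothetical file

# Lifestyle indicators are reverse-coded: low = pro-health, high = unhealthy,
# so a NEGATIVE correlation means healthier lifestyle -> higher quality of life.
for group, sub in df.groupby("months_since_stroke_group"):  # e.g. "6-12", "13-24", "25-60"
    rho, p = stats.spearmanr(sub["lifestyle_before_stroke"], sub["bref1_quality_of_life"])
    flag = "significant" if p < 0.05 else "nonsignificant"
    print(f"group {group}: rho={rho:.2f}, p={p:.3f} ({flag})")
```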
Pearson's test showed a correlation between the psychological field (HEALTHCARE CENTRE2) and lifestyle before stroke in the groups of patients at 6-12 months (p<0.001) and 13-24 months (p=0.002) after stroke. The correlation between lifestyle after stroke and HEALTHCARE CENTRE2 was statistically significant in the group at 6-12 months after stroke (p=0.014). The results in Table 3 show that all correlations between these variables were negative. Taking into consideration the reverse coding of the lifestyle indicator, the correlations were inversely proportional: the less favorable for health the lifestyle was before stroke, the worse was the quality of life in the psychological sphere in patients at 6-24 months after stroke, and the less favorable the lifestyle was after stroke, the worse was the quality of life in the psychological sphere in patients having suffered stroke 6-12 months before (Table 3). The correlations between the social sphere (HEALTHCARE CENTRE3) and lifestyle before stroke were also negative and statistically significant in two groups: p=0.001 in subjects having suffered stroke 6-12 months before examination and p=0.043 in those 13-24 months before. Given the reverse coding of the lifestyle indicator, the less favorable the lifestyle was before stroke, the worse was the quality of life in the social sphere in these groups. There was no statistically significant correlation between the social sphere and lifestyle after stroke (Table 4). For the environmental sphere (HEALTHCARE CENTRE4), there was a negative correlation with lifestyle before stroke in patients having suffered stroke 6-12 months (p<0.001) and 13-24 months (p=0.036) before, and with lifestyle after stroke in patients having suffered stroke 13-24 months before (p=0.017). The less favorable for health the lifestyle (with reverse coding) was before stroke, the worse was the quality of life in the environmental sphere in patients at 6-12 and 13-24 months after stroke; likewise, the less favorable the lifestyle was after stroke, the worse was the assessment of quality of life in the environmental sphere in patients at 13-24 months after stroke (Table 5). Interestingly, there was no statistically significant correlation between quality of life in any of the four spheres, satisfaction with one's own health, and lifestyle in the group of patients examined at 2-5 years after stroke (Tables 1-5).

Correlation between lifestyle and the profile of disease influence
Analysis of the correlations between lifestyle and the Sickness Impact Profile (SIP) revealed a statistically significant correlation between lifestyle after stroke and the physical sphere (SIP1) in patients examined at 6-24 months after stroke. The levels of significance were p=0.017 and p=0.043 in the groups examined at 6-12 months and 13-24 months after stroke, respectively.
The correlation was positive in both groups. Given the reverse coding of the lifestyle indicator and the reverse interpretation of the SIP1 score, the positive correlation coefficients indicate a directly proportional relationship (Table 6): the less favorable for health the lifestyle was after stroke, the worse was the quality of life in the physical sphere. Analysis of the correlation between the psychosocial sphere (SIP2) and lifestyle did not yield a statistically significant correlation, and thus did not confirm the results of the correlations between the psychological sphere (HEALTHCARE CENTRE2) and lifestyle (Table 3).

Discussion
Lifestyle is formed throughout human life and influences health the most 11,12. Proper information concerning the right lifestyle is an important element of the health education of people after stroke. According to Pierzchała et al., knowledge concerning lifestyle change in post-stroke patients should mainly cover physical activity, relaxation, prevention of stress, and quitting addictions, especially smoking cigarettes 13. Moreover, Banecka-Majkutewicz et al. draw attention to a change of lifestyle involving moderate consumption of alcohol and avoiding smoking tobacco 8. In our research, the following were taken into consideration: physical activity, quitting addictions (smoking cigarettes and drinking alcohol), and checking blood pressure. Członkowska showed that the lifestyle of people having suffered stroke was unhealthy before falling ill: most were characterized by lack of physical activity, obesity, use of alcohol, smoking cigarettes, exposure to stress, poor diet, etc. 14. Our study confirmed that the examined group had an unfavorable lifestyle before falling ill. Błaszczyk et al. also noted frequent lack of physical activity among people having suffered first-ever ischemic stroke 15. Hahn et al. report that a sedentary lifestyle is connected to 23% of deaths due to serious chronic diseases such as stroke 16. Sarzyńska-Długosz et al. argue that the low percentage of people who practice physical activity as prevention and treatment of chronic diseases results from doctors disregarding these issues; very few doctors encourage their patients to increase their daily dose of physical activity 17. The study by Chiuve et al. showed that regular intense physical activity starting six months after falling ill enabled greater physical effort and led to improvement of sensory and motor functions 18. Banecka-Majkutewicz et al. state that decreasing the risk of stroke requires diagnosing the risk factors as early as possible and then implementing an appropriate course of action, which means promoting a healthy lifestyle, i.e., moderate calorie consumption, change of diet, and more physical activity 8. According to Opara, physical activity after stroke should be perceived as one of the most important elements of a program to decrease the risk of secondary stroke 19 and of diabetes, which belongs to the modifiable risk factors for stroke 21. Our research and other epidemiological studies have shown that lifestyle influences health much more than genetic or environmental factors 8,11-18,20.
Moreover, modifying the risk factors through a healthy lifestyle is, theoretically, the most accessible but also the most difficult method of preventing cardiovascular diseases 8. To sum up the research done by many authors, as well as our own, it is clear that the level of health education of Polish society is low, so it is justified to create new educational programs and develop existing ones, which increase the effectiveness of preventive actions and, in the case of post-stroke people, prevent secondary stroke 13,15,18,22. Therefore, as Członkowska claims, it is important and necessary to popularize the right lifestyle as prevention of stroke 14. The results showed differentiation of the quality of life across the studied time periods and spheres among post-stroke people of working age. Interestingly, there was no statistically significant correlation between the assessed quality of life in all four spheres, satisfaction with one's own health, and lifestyle in the group of patients examined at 2-5 years after stroke. The results indicate that prospective research is necessary to assess whether an improper lifestyle influences the quality of life of people after stroke.
Note: On the dielectric constant of nanoconfined water

Investigations of dielectric properties of water in nanoconfinement are highly relevant for various applications. Here, using a simple capacitor model, we show that the low dielectric constant of nanoconfined water found in molecular dynamics simulations can be largely explained by the so-called dielectric dead-layer effect known for ferroelectric nanocapacitors.

Investigations of dielectric properties of liquid water in nanoconfinement are highly relevant for energy storage in electrochemical systems, mineral-fluid interactions in geochemistry, and microfluidics-based devices in biomedical analysis 1. It has been reported that polarization shows a strong anisotropy at water interfaces 2,3 and that the dielectric constant of nanoconfined water is surprisingly low (ε⊥ ∼ 10) 4. Here, using a simple capacitor model, we show that the low dielectric constant of nanoconfined water can be largely accounted for by the so-called dielectric dead-layer effect known for ferroelectric nanocapacitors 5.

Before discussing the effect of nanoconfinement, one needs to realize that the first effect of having an interface is a switch in the electric boundary condition. From classical electrodynamics, we know that the electric field E_z is discontinuous at a dielectric interface, which is why it is convenient to use the electric displacement D as the fundamental variable instead 6. In the latter case, the polarization of the dielectric P_z is added to the electric field, which makes D continuous in the direction perpendicular to an interface. When the electric boundary condition is switched from constant electric field E to constant electric displacement D, the dielectric response changes accordingly 7-10:

P = χ_E E,  ε = 1 + 4πχ_E   (1)
P = χ_D D,  ε = 1/(1 − 4πχ_D)   (2)

The difference in χ due to the electric boundary condition leads to differences in the fluctuation of polarization at zero field and in the corresponding relaxation time. This phenomenon is not limited to water in nanoconfinement, where the switching of the electric boundary condition is enforced by introducing explicit interfaces 2,3, but can also be realized in bulk liquid water by turning on the constant electric displacement simulation 9,10. For bulk liquid water, ε⊥ = ε∥ even though χ⊥ is radically different from χ∥.

Now let us go back to the original question: what accounts for the low dielectric constant ε⊥ of a water slab in nanoconfinement 4? Is water in nanoconfinement completely different from that in the bulk? To answer this question, we applied the constant electric displacement simulation with D = 0.6835 V/Å to a water slab confined between two hydrophobic walls at ambient conditions (Fig. 1a). Interactions between water molecules are described by the simple point charge/extended (SPC/E) model 11, and the rigid hydrophobic walls are composed of atoms on a dense cubic lattice. All molecular dynamics (MD) simulations were performed with the GROMACS 4 package 12, and technical settings are the same as described in previous work 9.

FIG. 1. a) A snapshot of MD simulations of a water slab confined between rigid walls under constant electric displacement D = 0.6835 V/Å; the separation distance between the walls L_w is 30.77 Å in this case. b) The corresponding electrostatic potential profile ϕ(z) generated from the charge density; the slope gives the negative of the depolarization field 4πP⊥ in the bulk water region.

a) Electronic mail: chao.zhang@kemi.uu.se
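As a quick numerical illustration of Eq. 2, the sketch below back-computes ε⊥,bulk from an imposed displacement and a depolarization field; the slope value 4πP⊥ used here is a hypothetical number chosen to reproduce the quoted ε⊥,bulk ≈ 65, not a value taken from the simulations.

```python
# Numerical illustration of Eq. 2: under constant-D boundary conditions the
# perpendicular dielectric constant follows from the depolarization field,
#   eps = D / (D - 4*pi*P),
# equivalently eps = 1/(1 - 4*pi*chi_D) with chi_D = P/D.

D = 0.6835          # imposed electric displacement, V/Angstrom
four_pi_P = 0.673   # 4*pi*P_perp from the potential-profile slope (assumed value)

eps_perp_bulk = D / (D - four_pi_P)
print(f"eps_perp(bulk) = {eps_perp_bulk:.0f}")  # ~65 for this slope value
```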
From the corresponding electrostatic potential profile ϕ(z), we can extract the depolarization field −4πP⊥ from the slope in the middle region of the water slab (Fig. 1b). Inserting this value into Eq. 2 and knowing D = 0.6835 V/Å as the control variable, one gets ε⊥,bulk = 65. This number is indeed quite close to that of bulk liquid water at the same magnitude of D, see Ref. 9. In other words, a water slab about 30 Å thick can already recover the bulk dielectric response.

Then, the question is why the reported dielectric constant ε⊥ can be as low as a single-digit number 4. One needs to realize that ε⊥ is the overall dielectric constant, which includes both a surface contribution and a bulk contribution. Because the simulation system is under the constant electric displacement condition, the surface region and the bulk region can be regarded as capacitors connected in series. This was already pointed out in the study of the ferroelectric nanocapacitor 5:

1/C⊥ = 1/C_surf + 1/C_bulk   (3)

where C⊥ = ε⊥/L_w, C_surf = ε⊥,surf/L_surf, and C_bulk = ε⊥,bulk/(L_w − L_surf). L_w is the separation distance between the walls and L_surf is the total width of the two interfaces. Because of this sum of inverses, the region with the smaller dielectric constant will dominate. From Fig. 1, one can clearly see that there are two vacuum gaps between the walls and the confined water slab. Therefore, our simple capacitor model approximates the confined water as vacuum gaps (ε⊥,surf = 1) plus bulk water (ε⊥,bulk = 65 at D = 0.6835 V/Å). Based on these considerations and Eq. 3, ε⊥ can be rewritten as:

ε⊥ = L_w / [L_surf + (L_w − L_surf)/ε⊥,bulk]   (4)

Here, the only unknown parameter is L_surf, the width of the vacuum gaps in this capacitor model. Because L_surf depends on the van der Waals radius of the wall atoms and of the interfacial water molecules, we approximate L_surf as σ_w + σ_corr. σ_w is the interatomic distance at which the underlying Lennard-Jones potential becomes zero, and it is roughly twice the van der Waals radius of the corresponding wall atom. Because water in the nanoconfined geometry faces two walls, we consider σ_w as a first approximation of L_surf. The remaining term σ_corr is a correction factor for the mixing effect; it should therefore be small and can be obtained by fitting MD data. Results for ε⊥ as a function of L_w are shown in Fig. 2a. Fitting the MD data with Eq. 4 gives σ_corr = 0.22 Å, which indeed turns out to be small. Using this model, one can make a prediction for the relationship between the Lennard-Jones parameter σ_w of the wall atoms and the dielectric constant of nanoconfined water ε⊥. The agreement with MD simulations is encouragingly good (Fig. 2b). We conclude that this simple capacitor model successfully captures the main physical reason behind the low dielectric constant ε⊥ of water in nanoconfinement. Nevertheless, one should be aware that the dielectric constant of the interface ε⊥,surf is (drastically) approximated as 1 in Eq. 3, and future work should include the effect of the interfacial water (Fig. 1b).
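A minimal sketch of the Eq. 4 fit follows, assuming synthetic data points and a hypothetical wall σ_w; only ε⊥,bulk = 65 and the target σ_corr ≈ 0.22 Å come from the text above.

```python
# Sketch of fitting Eq. 4 to eps_perp(L_w) data; the "MD data" below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

EPS_BULK = 65.0   # bulk value at D = 0.6835 V/A (from Eq. 2 above)
SIGMA_W = 3.2     # Lennard-Jones sigma of the wall atoms, Angstrom (assumed)

def eps_perp(L_w, sigma_corr):
    """Series-capacitor model, Eq. 4, with L_surf = sigma_w + sigma_corr."""
    L_surf = SIGMA_W + sigma_corr
    return L_w / (L_surf + (L_w - L_surf) / EPS_BULK)

# Synthetic "MD data": model evaluated at sigma_corr = 0.22 A plus 2% noise.
L_w = np.array([10.0, 15.0, 20.0, 30.0, 40.0])
rng = np.random.default_rng(0)
eps_md = eps_perp(L_w, 0.22) * (1 + 0.02 * rng.standard_normal(L_w.size))

(sigma_corr_fit,), _ = curve_fit(eps_perp, L_w, eps_md, p0=[0.1])
print(f"fitted sigma_corr = {sigma_corr_fit:.2f} A")  # ~0.22 by construction
```

Note that the model reproduces the key qualitative result: at small L_w the low-ε gaps dominate the series sum and ε⊥ drops to single digits, while ε⊥ → ε⊥,bulk as L_w grows.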
Role of Atmospheric Temperature and Seismic Activity in Spring Water Hydrogeochemistry in Urumqi, China

Springs offer insight into the sources and mechanisms of groundwater recharge and can be used to characterize fluid migration during earthquakes. However, few reports provide sufficient annual hydrochemical and isotopic data to compare the variation characteristics and mechanisms with both atmospheric temperature and seismic effects. In this study, we used continuous δ2H, δ18O, and major ion data from four springs over 1 year to understand the groundwater origin, recharge sources, circulation characteristics, and coupling relationships with atmospheric temperature and earthquakes. We found that (1) atmospheric temperatures above and below 0 °C can cause significant changes in ion concentrations and water circulation depth, resulting in the mixing of fresh and old water in the aquifer, but they cannot cause changes in δ2H and δ18O. (2) Earthquakes of magnitude ≥4.8 within a 66 km epicentral distance can alter fault zone characteristics (e.g., permeability) and aggravate water-rock reactions, resulting in significant changes in δ2H, δ18O, and hydrochemical ion concentrations. (3) Hydrogen and oxygen isotopes are the most sensitive precursory seismic indicators. The results of this study offer a reference for the establishment of long-term hydrochemical and isotopic monitoring, with the potential for use in earthquake forecasting.

Introduction
Springs offer abundant information related to deep fluids, groundwater circulation, and tectonic activity [1-4]. They may result from upwelling magmatic fluids or from deep-circulating meteoric water migrating along faults [5-7]. Generally, groundwater experiences continuous physical and chemical interactions during circulation; however, in relatively stable aquifers, it can maintain its specific hydrogeochemical characteristics and isotopic composition [8,9]. Earthquakes can change the crustal structure at local or regional scales, leading to the alteration of pore pressure within rock bodies and the mixing of aquifers. This process can alter the hydrogeochemistry and isotopes of spring water [10,11]. Therefore, the hydrogeochemical and isotopic characteristics of springs may reveal the origin, properties, migration path, and vertical deep circulation characteristics of the water body, including the dynamic processes of groundwater during tectonic activity [3,12-21]. Studies on spring hydrogeochemistry and isotopic composition have been reported for over 50 years. Some have focused on their changing characteristics in relation to atmospheric temperature, while others have focused on the impact of earthquakes [22-27]. However, few reports provide sufficient annual hydrogeochemical and isotopic composition data to correlate the variation characteristics with both atmospheric temperature and seismic effects. To distinguish the impacts of these two factors, it is necessary to obtain long-duration continuous observations that reveal the geological structure, fluid origins, fluid circulation characteristics, interaction processes between deep and shallow fluids, and the geochemical background of the system [28,29].
In the Tianshan area, where earthquakes are frequent, four springs located a short surface distance apart sometimes show different hydrochemical characteristics before historical earthquakes. Studies of their genetic mechanisms, influencing factors, and coupling with earthquakes are necessary for proposing typical indicators for earthquake forecasting and achieving disaster reduction. In this study, based on 1 year of continuous observations of hydrogeochemistry and δ2H-δ18O at four springs in Urumqi, China, we aimed to ① identify the origin and deep circulation processes of the spring water, ② reveal the influence of atmospheric temperature on groundwater circulation, and ③ establish the response relationship between geochemical changes and earthquakes. Our results offer a reference for the establishment of long-term hydrochemical and isotopic monitoring and offer new insight into precursory earthquake signals.

Geological and Hydrogeological Settings
Urumqi is located in the central North Tianshan Mountains of northwestern China. It sits on the southern margin of the Junggar Basin and is bounded by Bogda Mountain to the east and the Turpan Basin to the southeast (Figure 1). The distance between Urumqi and the sea is the greatest of any city in the world; the city lacks groundwater resources and experiences huge annual temperature variations. The monthly average temperatures in summer (July to August) and winter (December to January) are 24 °C and −26.5 °C, respectively, and the extreme temperature difference can reach 69.2 °C [30]. The research area covered four springs (Spring 04, Spring 09, Spring 10, and Spring 15) located in the Permian fan delta and shore-shallow lake facies on the edge of Bogda Mountain [31]. Springs 04 and 10 always have bubbles before earthquakes, which indicates that they are closely related to the fault. The topography of this area is characterized by a piedmont plain with high elevations to the south and low elevations to the north. Groundwater flows from south to north and mainly originates from Urumqi Glacier No. 1 in the southwest of the study area [32]. However, according to the local hydrogeological conditions of Spring 09 and Spring 10, groundwater also flows from the mountains in the east towards the piedmont plain in the west. Groundwater is mostly stored in bedrock fissures, which is easily controlled by vertical climate zoning and geological structures [33].

Figure 1. Topographical map of the study area. Red triangles show the locations of the springs analyzed in this study; green triangles show sample points from previous studies [34,35]; and pink circles show earthquakes. The inset map shows the wider regional setting with the red box delineating the study area.

Spring 04 and Spring 15 (43.83° N) are at ~0.8 km altitude and located 300 m apart on the Shuimogou-Baiyangnangou fault. The lithology of the Spring 04 aquifer is Permian oil shale and siliceous sandstone, and the hydrochemical type is SO4-Na. The average annual water temperature is ~20 °C, and the pH is 9.29. The aquifer of Spring 15 is Permian sandstone and thin limestone containing fissure phreatic water; the hydrochemical type is SO4·Cl-Na·Ca or SO4-Na·Ca. The annual average water temperature is ~10 °C, the flow is 1.5 L/s, and the pH is 7.6.
Spring 09 (87.62° E, 43.70° N) and Spring 10 (87.62° E, 43.70° N) are at ~1 km altitude and located 100 m apart in the northwest Liushugou-Hongyanchi fault zone (Figure 1), which mainly runs through Carboniferous and Permian strata with strong folds in the south, thrusting northward onto Permian and Triassic strata. The lithology of the Spring 09 aquifer is Permian sandstone and conglomerate; the bedrock fissure water has a hydrochemical type of SO4·Cl-Na or SO4·HCO3-Na. The average annual water temperature is ~10.6 °C, and the pH is 8.0. The aquifer of Spring 10 is Permian siliceous sandstone and conglomerate along the crushed zone of the fault. The hydrochemical type is SO4·Cl-Na, the average annual water temperature is ~11.2 °C, and the pH is 7.7.

The δ2H and δ18O were analyzed using a liquid water isotope analyzer (Picarro L2140-i, Santa Clara, CA, USA) after filtration through a 0.22 µm cellulose-acetate filter membrane [36]. Based on replicate measurements of standards and samples, the analytical precisions for δ2H and δ18O were better than ±0.46‰ and ±0.05‰, respectively. The results are reported relative to the Vienna Standard Mean Ocean Water (V-SMOW). Highly accurate measurements of isotopic ratios were achieved by measuring three standard samples after every seven unknown samples.

Hydrochemical Characteristics
Water samples from Spring 04 were very high in Na+ (1531-1848 mg/L) and SO42− (1698-2899 mg/L); the hydrochemical type was SO4-Na (Figure 2). HCO3− increased significantly in winter and reached 1065-1308 mg/L; however, the hydrochemical type did not change significantly, with only a small number of samples having a SO4·HCO3-Na hydrochemical type. The δ2H and δ18O gradually decreased over time, interrupted by stepwise increases coincident with the M4.8 Turpan earthquake on 8 August 2020, after which values returned to a declining trend. However, the stable isotopes showed no relationship with atmospheric temperature (i.e., above or below 0 °C) or the M4.2 Urumqi earthquake on 12 December 2020.

For Spring 15 (Figure 3), HCO3− (301-317 mg/L) increased and Cl− decreased with winter atmospheric temperatures. As a result, the hydrochemical type was SO4·Cl-Na·Ca when the atmospheric temperature was >0 °C and SO4-Na·Ca when the atmospheric temperature was <0 °C. Changes in δ2H and δ18O were similar to those of Spring 04; that is, they decreased over time, interrupted by stepwise increases coincident with the M4.8 earthquake. Again, the stable isotopes showed no clear relationship with atmospheric temperature (i.e., above or below 0 °C) or the M4.2 Urumqi earthquake on 12 December 2020.

The main hydrochemical ions of Spring 09 were Na+ (210-423 mg/L), SO42− (334-730 mg/L), and Cl− (115-425 mg/L) (Figure 4). For samples collected at T < 0 °C, the concentrations of HCO3− (309-348 mg/L) and Ca2+ (59-70 mg/L) increased, while Cl− and SO42− decreased. As a result, when the atmospheric temperature was >0 °C, the hydrochemical type was SO4·Cl-Na, but when the atmospheric temperature was <0 °C, the hydrochemical type was SO4·HCO3-Na. As with the other springs, δ2H and δ18O decreased over time, interrupted by stepwise increases coincident with the M4.8 earthquake. The stable isotopes showed no clear relationship with atmospheric temperature (i.e., above or below 0 °C) or the M4.2 Urumqi earthquake on 12 December 2020.

Spring 10 was similar to Spring 09; the main ions were Na+ (210-423 mg/L), SO42− (508-730 mg/L), and Cl− (199-425 mg/L), and when the atmospheric temperature was >0 °C, the hydrochemical type was SO4·Cl-Na or Cl·SO4-Na (Figure 5). However, when the atmospheric temperature was <0 °C, the concentrations of HCO3− (299-332 mg/L), Ca2+ (77-95 mg/L), and K+ (2-5 mg/L) increased, while Cl− decreased, and the hydrochemical type was SO4·Cl-Na. As with the other springs, δ2H and δ18O decreased over time, interrupted by stepwise increases coincident with the M4.8 earthquake. The stable isotopes showed no clear relationship with atmospheric temperature (i.e., above or below 0 °C) or the M4.2 Urumqi earthquake on 12 December 2020.

In summary, although the major ion concentrations of the four springs differed, all had elevated Na+, SO42−, and Cl−. Moreover, when the atmospheric temperature fell below 0 °C in winter, the concentrations of HCO3−, Ca2+, and K+ increased, while Cl− and Na+ decreased; changes in summer were not synchronous. In contrast, the isotopes were not affected by atmospheric temperature but did show obvious stepwise increases associated with the M4.8 Turpan earthquake in 2020. Piper diagram analysis confirmed the changes in water hydrogeochemistry between winter and summer (Figure 6, Table 1). It also showed that all the samples from springs 04 and 09, and most from springs 10 and 15, were water from a confined aquifer. However, when the atmospheric temperature was <0 °C, samples from springs 10 and 15 fell within the boundary zone of confined and unconfined aquifers (Figure 6a).
Based on the Na-K-Mg ternary diagram (Figure 6b), water from Spring 04 was classified as deep geothermal water partially equilibrated with the host rock; samples from the other springs were classified as shallow geothermal water non-equilibrated with the host rock. A Schoeller diagram (Figure 6c) showed no hydraulic connection between the four springs, but also confirmed different aquifer characteristics depending on atmospheric temperature (above or below 0 °C).

Geothermometry and Circulation Depth
We applied the Na-K-Ca geothermometer according to the empirical formula of [38], with β = 0.75 (T < 100 °C) and β = 0.25 (T > 100 °C).
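The geothermometer equation itself did not survive extraction; for reference, the widely used Fournier-Truesdell Na-K-Ca form (with molal concentrations) is sketched below. This is an assumption, since the exact expression and constants of [38] cannot be confirmed here; the β values quoted above are those given by the authors.

$$T\,(^{\circ}\mathrm{C}) = \frac{1647}{\log(\mathrm{Na}/\mathrm{K}) + \beta\left[\log\!\left(\sqrt{\mathrm{Ca}}/\mathrm{Na}\right) + 2.06\right] + 2.47} - 273.15$$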
The groundwater circulation depth was calculated as [39]:

H = (T_Z − T_0)/G + H_0

where H is the circulation depth (m), T_Z is the estimated reservoir equilibrium temperature (°C), T_0 is the local annual temperature (°C), G is the thermal gradient (°C/km), and H_0 is the thickness of the constant temperature zone (m). The constant temperature zone is defined as the subsurface depth at which changes in atmospheric temperature have no effect on the temperature of the zone [40]. From previous studies, we chose values of H_0 = 20 m, T_0 = 18 °C, and G = 18.2 °C/km [39,41]. Circulation depths were estimated for both summer and winter (atmospheric temperatures of > and <0 °C, respectively; Table 1). Differences in reservoir temperature estimates reached 2.4-9 °C between winter and summer (Table 1), reflecting significant seasonal differences in circulation depth. The circulation depth of Spring 04 was the deepest (4710-5210 m); those of springs 09, 10, and 15 were within 880-1940 m (Table 1).

δ2H and δ18O Characteristics
The δ2H and δ18O values of each individual spring were relatively concentrated (Figure 7) and had no significant relationship with atmospheric temperature of > or <0 °C. Samples from Spring 15 plotted on the Xinjiang local meteoric water line (LMWL, δ2H = 7.23 δ18O + 3.60); samples from springs 04, 09, and 10 fell below the LMWL, possibly indicating water-rock isotope exchange resulting in "oxygen drift". Hydrogen and oxygen isotopes in atmospheric precipitation are affected by altitude; therefore, isotopes can be used to estimate the altitude of the meteoric water involved in groundwater recharge, using an empirical altitude relation for western China in which H is the recharge altitude of the springs (m). The results show that the average recharge altitudes of the four springs were not significantly different in summer and winter (Table 1).

Coupling between Hydrochemistry and Earthquakes
The formula of Dobrovolsky et al. [44], R = 10^(0.43M) (km), was used to identify earthquakes with the potential to cause precursory signals at the four springs, where R is the radius of the effective precursory manifestation area depending on earthquake magnitude M. We identified two such earthquakes: the Turpan M4.8 earthquake on 8 August 2020 and the Urumqi M4.2 earthquake on 12 December 2020 (Figure 1). The Turpan M4.8 earthquake was caused by strike-slip fault motion within the eastern segment of the Tianshan earthquake zone; the epicenter was ~52-66 km from the springs. The Urumqi M4.2 earthquake was caused by reverse fault motion within the central section of the Tianshan earthquake zone; the epicenter was ~20-31 km from the springs. Of the two earthquakes, only one, the M4.8 Turpan earthquake of 8 August 2020, had a clear temporal correlation with sudden changes in isotope signatures, suggesting a coupling between hydrochemical changes and this earthquake.

Groundwater Origin, Recharge Sources, and Circulation Characteristics
Hydrogen and oxygen isotopes are affected by meteorological processes; as such, the values and distribution characteristics of δ2H and δ18O provide a basis for the investigation of groundwater recharge sources [4]. Samples from the four springs are all consistent with either the GMWL or the LMWL, indicating the strong influence of meteoric water (Figure 7). Springs 04 and 15 are only 300 m apart within the same structural setting. The samples show some similarities; for example, both have higher δ2H and δ18O values than those from springs 09 and 10, reflecting greater rainfall recharge [45]. However, Spring 04 samples plot on the GMWL, to the right of the LMWL, indicating water-rock interaction. In contrast, Spring 15 plots to the left of the LMWL, which can be explained by degassing or by a large deuterium excess related to the climate regime at the time of precipitation [26].
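As a quick check of the Dobrovolsky screening above (the radius formula R = 10^(0.43M) km is the standard one; the distances are those quoted in the text), a short sketch:

```python
# Dobrovolsky precursory radius R = 10**(0.43*M) in km; an earthquake can
# plausibly produce precursors at a site whose epicentral distance is < R.
events = {"Turpan M4.8": (4.8, 66.0),   # (magnitude, max distance to the springs, km)
          "Urumqi M4.2": (4.2, 31.0)}

for name, (magnitude, distance_km) in events.items():
    radius_km = 10 ** (0.43 * magnitude)
    inside = distance_km <= radius_km
    print(f"{name}: R = {radius_km:.0f} km, springs at <= {distance_km:.0f} km -> inside: {inside}")
```

Both events fall inside their precursory radii (~116 km and ~64 km, respectively), which is why both were retained as candidates even though only the M4.8 event showed an isotopic response.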
Moreover, the data for Spring 15 suggest relatively shallower groundwater circulation, with more rapid circulation speeds than that of Spring 04 (Figure 7). As with Spring 04, springs 09 and 10 plot on the GMWL, to the right of the LMWL, indicating water-rock interaction. Compared with Spring 09, positive shifts in δ18O for Spring 10 reflect a strong water-rock interaction related to high temperature. In addition, the average altitude of the recharge water is much higher than the actual altitude of the springs, which indicates long-distance runoff recharge (Table 1). Therefore, the water at all four springs is mainly supplied by atmospheric precipitation and snowmelt from the surrounding mountains [26,46]. However, during circulation, this water undergoes water-rock interactions, during which it takes on soluble ions from the aquifer rocks [34]. Differences in the geochemical and structural conditions of the aquifers (e.g., lithology, weathering, structural fissures) lead to differences in the leaching, erosion, and infiltration processes, resulting in variation among the springs.

Groundwater in the Urumqi region flows from south to north and mainly originates from Urumqi Glacier No. 1 in the southwest [32]. River and precipitation samples from the region (i.e., samples with no hydraulic connection to the springs in this study) have high Ca2+ and HCO3− concentrations, with both being the major ions in the hydrochemical types [47]. In general, all four springs are fed by confined aquifers (Figure 6). However, some samples of Spring 15 (T > 0 °C) have a hydraulic connection with unconfined aquifer sample TK15 (89.77° E, 42.62° N) from the Turpan Basin, while others (T < 0 °C) are hydraulically related to springs S1 (88.21° E, 43.12° N) and S2 (88.21° E, 43.11° N) in the Turpan Basin [35] (Figure 8). This suggests long-distance runoff recharge from the Turpan Basin, which is also consistent with the high Cl− concentrations resulting from rock salt dissolution and long runoff. In addition, groundwater flow also occurs from the mountains in the east towards the piedmont plain in the west, as evidenced by the local hydrogeological conditions of Spring 09 and Spring 10. The water origin of the Turpan Basin is directly related to Bogda Mountain. Furthermore, the average altitude of Bogda Mountain is ~4 km, which is consistent with the calculated result in Table 1. In summary, the origin of the water in the four springs is most likely rainfall and the deep circulation of meteoric water from Bogda Mountain in the east.

Regardless of atmospheric temperature (> or <0 °C), γ(Na+ + K+)/γ(Cl−) was >1 for all samples from all four springs, indicating that Na+ in the water comes from weathering dissolution or cation exchange of silicate minerals (Figure 9a). In the plot of γ(Na+ − Cl−) against [γ(Ca2+ + Mg2+) − γ(HCO3− + SO42−)], most samples from Spring 04 were above the y = −x line, indicating enhanced cation exchange, especially at T < 0 °C with increased circulation depth (Figure 9b). Spring 09 samples for T > 0 °C fell below the y = −x line, indicating that they are less affected by cation exchange, while samples for T < 0 °C fell on the y = −x line, indicating that cation exchange is significant. In contrast, almost all samples from springs 10 and 15 fell below the y = −x line, indicating that they are less affected by cation exchange and more impacted by silicate dissolution, regardless of atmospheric temperature (> or <0 °C). The altitude of spring recharge was not season-dependent, but the circulation depth was (Table 1). Both Spring 04 and Spring 09 had increased circulation depth and increased cation exchange during winter. However, the circulation depths of springs 10 and 15 increased without significant cation exchange, and the water source was still the original aquifer. The high Na+, Cl−, and SO42− concentrations of all four springs also reflect long runoff and deep circulation. Under the action of gravity, circulation depths reached ~0.88-5.21 km (Table 1).
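The γ ratios above are milliequivalent (meq/L) ratios; a sketch of the computation follows, with illustrative concentrations rather than measured values.

```python
# Sketch of the milliequivalent (gamma) ratio diagnostics used above.
# Equivalent weights are standard; the sample concentrations are illustrative.
EQ_WEIGHT = {"Na": 23.0, "K": 39.1, "Ca": 20.0, "Mg": 12.2,
             "Cl": 35.5, "SO4": 48.0, "HCO3": 61.0}  # g per equivalent

def meq(conc_mg_l, species):
    """Convert mg/L to meq/L for the given ion."""
    return conc_mg_l / EQ_WEIGHT[species]

sample = {"Na": 400.0, "K": 4.0, "Ca": 60.0, "Mg": 20.0,
          "Cl": 300.0, "SO4": 600.0, "HCO3": 320.0}  # mg/L, hypothetical

ratio = (meq(sample["Na"], "Na") + meq(sample["K"], "K")) / meq(sample["Cl"], "Cl")
x = meq(sample["Na"], "Na") - meq(sample["Cl"], "Cl")
y = (meq(sample["Ca"], "Ca") + meq(sample["Mg"], "Mg")
     - meq(sample["HCO3"], "HCO3") - meq(sample["SO4"], "SO4"))
print(f"gamma(Na+K)/gamma(Cl) = {ratio:.2f}  (>1 suggests silicate weathering/exchange)")
print(f"(Na-Cl, Ca+Mg-HCO3-SO4) = ({x:.2f}, {y:.2f})  # points near y = -x indicate cation exchange")
```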
After being heated by high-temperature rocks, the water mixed with fluids from the deep crust and then moved upwards along faults and fractures. Spring 04 and Spring 15 are 300 m apart on the surface but exhibit significant differences in circulation depth; the circulation path of Spring 04 is more closely related to the fault zone, resulting in deeper circulation. The surface distance between Spring 09 and Spring 10 is just 100 m, but their deep circulation depths, especially for samples collected at T > and <0 °C, differ, reflecting differences in fractures, porosity, and permeability. In summary, Spring 04 and Spring 10 are more influenced by deep circulation than Spring 09 and Spring 15.

The Schoeller diagram shows that the four springs have different recharge sources at T < 0 °C and T > 0 °C (Figure 6), indicating the complexity of the geological structure in the study area. In this region, sedimentary strata are underlain by granite [48]; during deep winter circulation, this granite releases Na+, Ca2+, and HCO3− ions. However, temporal variations in hydrogen and oxygen isotopes show that the water recharge source did not change significantly. The δ2H-δ18O plots of springs 04, 09, and 10 are all located on the right side of the LMWL (Figure 7), indicating the mixing of fresh and old water in the aquifer [46]. Water-rock interaction occurs by the precipitation and/or dissolution of minerals [49]; provided this occurs stoichiometrically, species ratios are fixed by the stoichiometry of the ongoing precipitation/dissolution reactions [26]. As such, ion concentration ratios can distinguish groundwater sources from water-rock interactions. However, if ion concentration ratios represent nonstoichiometric precipitation and/or dissolution of minerals, it suggests that atmospheric temperature caused mixing rather than water-rock reactions.

In summary, the groundwater origin of the four springs is mainly geothermally heated, deeply circulated atmospheric precipitation and snowmelt; however, there is also a contribution from long-distance basin recharge sources, which increases the dissolved solids. Water-rock reactions are dominated by the dissolution of silicate minerals. Seasonal atmospheric temperature changes have a great impact on the circulation depth of the four springs, and the δ2H-δ18O data show that the changes in ion concentrations are the result of mixing rather than water-rock reactions. Meanwhile, samples collected at T < 0 °C reflect the mixing of fresh and old water in the aquifer.

Hydrochemical Changes Coupled to Earthquakes
The M4.8 Turpan earthquake (8 August 2020), which occurred during a period of stable atmospheric temperature, had a significant impact on isotope signals and ion concentrations. In contrast, the M4.2 Urumqi earthquake (12 December 2020) occurred at the same time as a large change in atmospheric temperature (from ~10 to −16 °C) and was related to a stepwise change in ion concentrations but little change in the isotope signal. In general, groundwater moves slowly through the aquifer system under hydrological and geological processes; as such, hydrogeochemical changes tend to be gradual. However, as seismic activity can cause sudden changes to aquifers and the surrounding rock (e.g., changes in permeability or water mixing), the resulting hydrogeochemical changes can occur rapidly. We calculated the time series of the deuterium excess (d = δ2H − 8 × δ18O) and compared it with that of the GMWL (Figure 10). The δ2H-δ18O deviated significantly at the time of the M4.8 Turpan earthquake, reflecting a change in water source coupled with the occurrence of the earthquake. In contrast, δ2H-δ18O did not change significantly at the time of the M4.2 Urumqi earthquake; that is, the water source continued to be controlled by meteoric water.
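A minimal sketch of the deuterium-excess screening described above; the input file, column names, and the anomaly threshold are assumptions, not details from the study.

```python
# Deuterium excess d = d2H - 8*d18O; a step away from the local baseline
# (GMWL corresponds to d = 10) is treated here as a candidate anomaly.
import pandas as pd

df = pd.read_csv("spring_isotopes.csv", parse_dates=["date"])  # hypothetical file
df["d_excess"] = df["d2H"] - 8.0 * df["d18O"]

# Compare each sample against a rolling-median baseline of recent samples.
baseline = df["d_excess"].rolling(10, min_periods=5).median()
anomaly = (df["d_excess"] - baseline).abs() > 2.0  # threshold in permil (assumed)
print(df.loc[anomaly, ["date", "d_excess"]])
```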
The changes in ion concentrations reflected the change in atmospheric temperature, and were not related to the earthquake. same time as a large change in atmospheric temperature (from ~10 to −16 °C) and was related to a stepwise change in ion concentrations but little change in the isotope signal. In general, groundwater moves slowly through the aquifer system under hydrological and geological processes; as such, hydrogeochemical changes tend to be gradual. However, as seismic activity can cause sudden changes to aquifers and the surrounding rock (e.g., changes in permeability or water mixing), the resulting hydrogeochemical changes can occur rapidly. We calculated the time series of the δ 2 H-δ 18 O deuterium excess (where d = δ 2 H − 8×δ 18 O) and compared it with that of the GMWL (Figure 10). The δ 2 H-δ 18 O deviated significantly at the time of the M4.8 Turpan earthquake, reflecting a change in water source coupled with the occurrence of an earthquake. In contrast, δ 2 H-δ 18 O did not change significantly at the time of the M4.2 Urumqi earthquake; that is, the water source continued to be controlled by meteoric water. The changes in ion concentrations reflected the change in atmospheric temperature, and were not related to the earthquake. A seismic observation well (X10; 87.62° E, 43.70° N) near Spring 10 experienced a coseismic step change of water level during the M4.8 Turpan earthquake (52 km from the epicenter) [33]. This confirms that coseismic static strains in this region were sufficiently strong to alter pore fluid pressures and permeability. In contrast, the M4.2 Urumqi earthquake, with an epicenter just 13 km from X10, did not cause a coseismic step change in the water level of the well. This may indicate that the energy of the M4.2 earthquake was insufficient to cause a change in permeability. These findings support our conclusions; that is, the M4.8 Turpan earthquake affected both ion concentrations and δ 2 H-δ 18 O, but the M4.2 Urumqi earthquake did not; changes coincident with the second event were related to changes in atmospheric temperature. According to the geological and hydrogeological setting, the four observation springs are located in areas where stress is easy to concentrate, making them sensitive to seismic activity. The ion concentrations increased slowly before the M4.8 Turpan earthquake, possibly owing to large-scale loading of regional stress and changes in fractures within the fault zone. This allowed a high concentration of fluid to enter the springs and change the ion concentrations; moreover, this increase in permeability would also have intensified water-rock reactions within fractures. In general, owing to the pumping effect of earthquakes, shallow groundwater can also diffuse to the deeper fault, and the circulation depth of groundwater can change before and after earthquakes [50]. However, the M4.8 A seismic observation well (X10; 87.62 • E, 43.70 • N) near Spring 10 experienced a coseismic step change of water level during the M4.8 Turpan earthquake (52 km from the epicenter) [33]. This confirms that coseismic static strains in this region were sufficiently strong to alter pore fluid pressures and permeability. In contrast, the M4.2 Urumqi earthquake, with an epicenter just 13 km from X10, did not cause a coseismic step change in the water level of the well. This may indicate that the energy of the M4.2 earthquake was insufficient to cause a change in permeability. 
These findings support our conclusions; that is, the M4.8 Turpan earthquake affected both ion concentrations and δ²H–δ¹⁸O, but the M4.2 Urumqi earthquake did not; changes coincident with the second event were related to changes in atmospheric temperature. According to the geological and hydrogeological setting, the four observation springs are located in areas where stress tends to concentrate, making them sensitive to seismic activity. The ion concentrations increased slowly before the M4.8 Turpan earthquake, possibly owing to large-scale loading of regional stress and changes in fractures within the fault zone. This allowed a high concentration of fluid to enter the springs and change the ion concentrations; moreover, this increase in permeability would also have intensified water-rock reactions within fractures. In general, owing to the pumping effect of earthquakes, shallow groundwater can also diffuse to the deeper fault, and the circulation depth of groundwater can change before and after earthquakes [50]. However, the M4.8 Turpan earthquake did not cause such changes in the springs, and mixing with shallow water can be ignored. Changes in Cl⁻ concentration related to earthquakes can reflect changes in runoff fractures. The coupling relationship between the Cl⁻ concentration and the M4.8 Turpan earthquake is consistent with the δ²H–δ¹⁸O response. This suggests a high concentration of fluid input and a strong possibility of mixing with old water. On the whole, ion concentrations in the springs increased before the earthquake while the δ²H–δ¹⁸O data drifted towards the right of the plot, indicating that water-rock reactions intensified, solubility increased, and new fracture surfaces appeared in the aquifer or fault zone.

Conclusions

In this study, the hydrogeochemistry (major ion concentrations and δ²H and δ¹⁸O isotopes) of four springs in the Urumqi area was analyzed over a 1-year period. We conclude that the four springs are likely recharged by deep circulation of meteoric water from Bogda Mountain in the east, as well as long-distance runoff recharge from the Turpan Basin to the south. The hydrochemical type and circulation depth of the springs are both affected by atmospheric temperature (i.e., T > 0 °C versus T < 0 °C), although the source remains the same (i.e., meteoric water). We conclude that seasonal changes in atmospheric temperature and M ≥ 4.8 earthquakes within 66 km can cause changes in the spring water ion concentrations, but only earthquakes can cause changes in stable isotopes; this suggests that the changes coupled with atmospheric temperature (i.e., T > 0 °C versus T < 0 °C) reflect mixing rather than water-rock reactions. Ion concentrations and δ²H–δ¹⁸O are sensitive to earthquakes of M ≥ 4.8, which can alter fault zone characteristics (e.g., permeability and fractures) and intensify water-rock reactions. These results suggest that continuous spring hydrogeochemical observations, especially of stable isotopes, offer potential precursory information before earthquakes. Moreover, such an approach offers the potential for a better understanding of the coupling between seismic activity and geochemical variations.

Data Availability Statement: All data are available from the corresponding author upon reasonable request.
Understanding the pathophysiology of typical acute respiratory distress syndrome and severe COVID-19 ABSTRACT Introduction Typical acute respiratory distress syndrome (ARDS) and severe coronavirus disease 2019 (COVID-19) pneumonia share complex pathophysiology, a high mortality rate, and an unmet need for efficient therapeutics. Areas covered This review discusses the current advances in understanding the pathophysiologic mechanisms underlying typical ARDS and severe COVID-19 pneumonia, highlighting specific aspects of COVID-19-related acute hypoxemic respiratory failure that require attention. Two models have been proposed to describe the mechanisms of respiratory failure associated with typical ARDS and severe COVID-19 pneumonia. Expert opinion ARDS is defined as a syndrome rather than a distinct pathologic entity. There is great heterogeneity regarding the pathophysiologic, clinical, radiologic, and biological phenotypes in patients with ARDS, challenging clinicians and scientists to discover new therapies. COVID-19 has been described as a cause of pulmonary ARDS and has reopened many questions regarding the pathophysiology of ARDS itself. COVID-19 lung injury involves direct viral epithelial cell damage and thrombotic and inflammatory reactions. There are some differences between ARDS and COVID-19 lung injury in aspects of aeration distribution, perfusion, and pulmonary vascular responses. Introduction According to the current Berlin definition, acute respiratory distress syndrome (ARDS) is characterized by refractory hypoxemia, respiratory failure not explained by cardiac failure or fluid overload, and bilateral opacities on chest imaging, presenting within 1 week of a known clinical insult or worsening respiratory symptoms [1]. Several definitions of severe COVID-19 pneumonia have been proposed by health-care institutions; recognized criteria include dyspnea, peripheral oxygen saturation below 93%, a ratio of partial pressure of arterial oxygen to fraction of inspired oxygen (PaO₂/FiO₂) <300 mmHg, and/or bilateral infiltrates involving more than 50% of the lung fields on chest radiographs [2,3]. Infiltrates are typically bilateral in severe COVID-19, and continuous positive airway pressure or positive end-expiratory pressure (PEEP) levels ≥5 cmH₂O are often applied; therefore, most patients with severe COVID-19 pneumonia fulfill the clinical criteria for ARDS. However, since the early phases of the pandemic, several specific pathophysiologic traits have been highlighted in COVID-19. These include severe endothelial injury [4], hypoxemia not fully explained by loss of aeration [5,6], alveolar-capillary microthrombi [7], venous thromboembolism [8], and a marked inflammatory response [9] with possible multisystemic involvement [10]. A broad scientific debate is ongoing on whether these features should modify our clinical approach to COVID-19-related respiratory failure, compared with the conventional protocols applied in classic ARDS, in particular with regard to noninvasive [11] and invasive respiratory support [5,12]. Overall, whereas ARDS is a clinical syndrome including various causes of pulmonary and extrapulmonary injury, COVID-19 pneumonia is a single disease with two specific concurring mechanisms of lung damage: direct viral insult and the host's local as well as systemic inflammatory response [13,14].
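As a worked illustration of the oxygenation criterion (with assumed values, not patient data from this review): for an arterial oxygen tension of 60 mmHg measured on 80% inspired oxygen,

$$\mathrm{PaO_2/FiO_2} = \frac{60\ \mathrm{mmHg}}{0.8} = 75\ \mathrm{mmHg},$$

which fulfills the <300 mmHg criterion and, under the Berlin definition, would fall in the severe category (PaO₂/FiO₂ ≤ 100 mmHg).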
This review does not enter the long-standing discussion of whether COVID-19 pneumonia should or should not be considered ARDS or a distinct disease, but instead highlights the specific aspects of respiratory failure related to COVID-19, thus considering severe COVID-19 pneumonia as a subphenotype of ARDS. In fact, while several disease-specific features can be observed in COVID-19, severe COVID-19 pneumonia clearly fulfills the current clinical criteria for ARDS. The aim of this review is to summarize the current advances in understanding the pathophysiological mechanisms underlying typical ARDS and COVID-19, highlighting peculiar aspects of COVID-19-related acute hypoxemic respiratory failure that might require a clinician's attention.

Article highlights
• ARDS is a clinical syndrome with different causes leading to complex biological and clinical heterogeneity.
• Although a single definition of ARDS is widely used and accepted, the heterogeneity of ARDS has been associated with negative treatment outcomes with pharmacotherapies.
• The pathogenesis of COVID-19 lung injury involves direct viral epithelial cell damage and a host defense response with thrombotic and inflammatory reactions in the lung.
• There are some differences between ARDS and COVID-19 lung injury in aeration distribution, perfusion, and pulmonary vascular responses.
• Further research is warranted to identify sub-phenotypes of ARDS, including severe COVID-19 pneumonia, that could benefit from specific treatments and ventilatory strategies.

Pathophysiology of typical ARDS ARDS can originate from a variety of heterogeneous conditions in which the pathophysiologic pathways converge on a single anatomic structure, namely the alveolar-capillary barrier, causing diffuse alveolar damage (DAD) [15]. Two distinct components form the alveolar-capillary barrier: the alveolar epithelium and the capillary endothelium, interleaved by the interstitium, organized in a complex extracellular matrix scaffold [16]. The key aspects of ARDS and COVID-19 pneumonia are summarized in Table 1. Differences between pulmonary and extrapulmonary ARDS Studies on ARDS have explored the hypothesis that, at least in the earlier phases of the syndrome, pathogenic insults reaching the barrier from either the alveolar or the capillary side could result in different alterations and consequently in diverse clinical presentations of ARDS [17]. This led to the definition of two macro-categories of ARDS: (1) ARDS due to direct pulmonary injury (pulmonary ARDS or ARDSp) and (2) ARDS secondary to indirect or extrapulmonary lung injury (extrapulmonary ARDS or ARDSexp) [17]. Causes of ARDSp include bacterial or viral pneumonia, aspiration pneumonia, lung contusion, and drowning; ARDSexp can be secondary to sepsis, polytrauma, acute pancreatitis, massive blood transfusion, and hemorrhagic shock [18]. From this perspective, the evolution of the ARDS definitions reflects three phases of research and understanding of the disease. Early reports mainly focused on ARDSexp [19], the following decades focused on the possible differences between ARDSp and ARDSexp [17,20], and in the era between the Berlin definition and the COVID-19 pandemic, the distinction between the two was underexplored and a unifying approach was attempted, regardless of the type of pulmonary injury. Primary insults in ARDSp act primarily on the alveolar epithelial cells, causing fluid leakage and alveolar flooding, further worsened by impaired clearance of edema from the alveolar space [21]. Damage to type II epithelial cells decreases the production of surfactant, and a proliferation of fibroblasts and deposition of the extracellular matrix might constitute the basis for the development of fibrosis, especially when epithelial repair mechanisms are impaired [22,23]. Compared with ARDSexp, ARDSp is characterized, at the alveolar level, by increased damage to the alveolar epithelium (type I and type II cells), with prevalent alveolar and apoptotic neutrophils and a marked alveolar increase in inflammatory mediators; at the interstitial space level, by lower interstitial edema, increased cell infiltration and fibrosis, and normal elastic fibers; and by an increase in inflammatory mediators in the blood that is less than that observed in sepsis [20,24]. Circulating inflammatory mediators cause indirect injury in ARDSexp, reaching the lungs through the pulmonary endothelial cells, which are the initial target of damage in this type of ARDS [25]. Autopsy studies found higher amounts of alveolar collapse, alveolar wall edema, and fibrinous exudate in ARDSp compared with ARDSexp [26]. Respiratory mechanics parameters might be different in these two forms of ARDS. Despite comparable respiratory system compliance, in ARDSp lung compliance is decreased, whereas reduction in chest wall compliance predominates in ARDSexp [17]. An early report estimating recruitment based on pressure-volume curves reported a lower potential for recruitment in ARDSp compared with ARDSexp, suggesting a potential role for higher PEEP ventilation strategies in the latter group [27]. The findings of this study were not confirmed in a larger population, thus questioning the actual indication of setting PEEP based on the cause of ARDS [28]. Moreover, a recent meta-analysis with meta-regression did not observe an association between the effect on mortality of higher PEEP strategies and the percentage of patients with ARDSp versus ARDSexp in randomized controlled trials [29]. However, a randomized trial observed that patients with focal ARDSp receiving higher PEEP strategies had higher mortality [30], highlighting how misclassification of patterns of ARDS might be common and possibly have negative consequences on outcomes. The clinical distinction between ARDSp and ARDSexp is often complex in the real world for two reasons [20]: (1) patients with initial pulmonary damage might evolve from a typical ARDSp pattern to a mixed clinical presentation due to overlapping sepsis and systemic inflammation, and (2) coexistence of multiple mechanisms of lung injury in critically ill patients is common. These difficulties might explain why a large meta-analysis including more than 4000 patients did not observe differences in mortality between ARDSp and ARDSexp [18]. Although the American-European Consensus Conference definition of ARDS still recognized potential differences in the clinical management of patients with ARDSp compared to ARDSexp [31], such distinction was abandoned in the current Berlin definition, implying that all patients could possibly benefit from a standardized approach regardless of the cause of ARDS [1]. This unifying approach received criticism [32], and the numerous disease-specific features identified during the ongoing COVID-19 pandemic further questioned whether a one-for-all approach was feasible [33].
Based on the available evidence, the distinction between ARDSp and ARDSexp might not translate into different therapeutic strategies, also due to the frequent overlap between the pathophysiological and clinical patterns of the two conditions. Nonetheless, the different pathophysiological mechanisms underlying ARDS should be considered when tailoring treatment of these patients. In fact, the current Berlin definition, while proposing a convenient framework to provide general recommendations on respiratory management of ARDS patients, might miss several disease-specific aspects which could influence the treatment in peculiar sub-groups of patients [33]. Inflammatory phenotypes in typical ARDS In the last decade, researchers have attempted to identify specific subphenotypes of ARDS to guide mechanical ventilation settings and pharmacological treatments [34]. Recently, the existence of a hyper-inflammatory and a hypo-inflammatory phenotype of ARDS has been proposed [35]. Although several features of the hyper-inflammatory phenotype overlap with characteristics of ARDSexp, the classification is based on a subset of objective clinical variables, including biomarkers of inflammation, coagulopathy, and endothelial injury [36,37], rather than on a subjective classification of the cause of ARDS. The hyper-inflammatory phenotype, compared to the hypo-inflammatory phenotype, is characterized by higher interleukin-6, interleukin-8, and tumor necrosis factor levels, together with lower protein C levels and a lower PaO₂/FiO₂ ratio [36]. These differences can be observed at ICU admission and tend to remain stable over time [38]; moreover, mortality is consistently higher across studies in the hyper-inflammatory phenotype [36][37][38]. These phenotypes, currently under investigation, showed different responses to higher PEEP strategies [37], liberal versus restrictive fluid regimens [39], and anti-inflammatory therapies [40]. Although still experimental, this approach based on clustering reflects the need for sub-classifications of ARDS capable of predicting response to individualized treatments and will be extensively investigated in the near future. Pathophysiology of severe COVID-19 As illustrated in Figure 1, the pattern of COVID-19 pneumonia evolves from early to advanced phases. In the early phases of the disease, the predominant findings are single or multiple ground-glass lesions, which may evolve into complete loss of aeration and the appearance of non-aerated tissue [5,6]. Severe COVID-19 pneumonia often requires invasive mechanical ventilation and appears to be a specific phenotype of ARDS (ARDSp), with a distinct histological pattern compared with ARDSexp. Autopsy studies on COVID-19 have reported diffuse alveolar damage, alveolar flooding with the presence of fibrin and hyaluronan [41], intense remodeling [42], platelet-fibrin microthrombi [43], and early fibrotic evolution [44], with variable deposition of collagen fibers [45]. In contrast with conventional ARDSp, which is characterized by a normal endothelium, COVID-19 pneumonia, despite being a pulmonary ARDS, presents in the early stages with endothelial injury and dysfunction induced by direct viral action and host inflammatory response [46]. In addition to this peculiar mechanism, the condition of patients with severe COVID-19 requiring prolonged mechanical ventilation is often complicated by bacterial ventilator-associated pneumonia [47] and bloodstream infections [48], which might result in an ARDSexp-like pattern overlapping with COVID-19.
These mechanisms of viral and inflammatory alveolar and vascular disruption have been referred to as pneumolysis [49,50] and vascular lysis [51,52], respectively. Distribution of aeration and perfusion in typical ARDS The pulmonary and extrapulmonary routes of lung injury result in different spatial distributions of lesions in experimental models of ARDSp, where multiple foci of pulmonary injury show heterogeneous spatial distribution, and ARDSexp, where a more diffuse and homogeneous pattern is observed [24]. This is reflected by the different radiographic findings reported in clinical studies, with more consolidation observed in ARDSp and more diffuse ground-glass opacification in ARDSexp [24]. Figure 2 summarizes the key pathophysiologic mechanisms in typical ARDS and COVID-19. Loss of aeration in typical ARDS In patients with ARDSexp, alveolar-capillary lesions lead to increased interstitial and alveolar edema (excess tissue mass) homogeneously distributed from ventral to dorsal lung regions. The edema replaces an equal amount of gas space, maintaining the total lung volume constant or slightly reduced (a 15% decrease in cephalocaudal dimensions of the lung [53] associated with a gravitational increase in density). This might be explained by several factors: the thoracic shape, lung weight, and the gravitational distribution of the blood in the lung capillaries [54]. All these factors contribute to the progressive increase in pleural pressure along the vertical axis, decreasing the transpulmonary pressure (airway pressure minus pleural pressure), which is the distending force of the lung [55]. The increased pleural pressure (2-3 cmH₂O in normal lungs and 6-8 cmH₂O in ARDS lungs) as well as the increased superimposed pressure (5-6 cmH₂O in normal lungs and 10-12 cmH₂O in ARDS lungs) due to the increased lung weight promotes the collapse of alveoli, particularly in the most dependent lung regions in the supine position [55]. Whereas the thoracic shape and blood distribution are constant, what changes in ARDS is the superimposed pressure (weight of the lung), which is doubled or tripled compared with normal lungs. Experimental work has shown that the superimposed pressure changes, as measured by computed tomography (CT) imaging, are strictly correlated with pleural pressure changes, measured directly at various lung levels in the pleural space [56,57]. Distribution of perfusion in typical ARDS Several techniques for assessment of lung perfusion, including electrical impedance tomography (EIT), depict perfusion but do not consider the different lung densities in the ventral-to-dorsal gradient [58,59]. On the other hand, positron emission tomography and dual-energy computed tomography (DECT) allow perfusion to be normalized to the perfused lung tissue mass, but these imaging techniques are rarely implemented in clinical practice and in clinical studies. When inhomogeneous lung density is accounted for, perfusion in ARDS has a bell-shaped distribution along the ventral-dorsal axis, and intermediate regions are most perfused, with minimal changes in such shape when different PEEP levels are applied [60]. In addition to perfusion changes due to redistribution of blood flow and aeration, pulmonary coagulopathy has been described in ARDS [61], mediated by activation of the tissue factor pathway [62]. This may result in pulmonary capillary thrombosis, which is reported in 24% of patients with confirmed ARDS and diffuse alveolar damage [63].
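Using the pleural-pressure figures quoted above, a minimal worked example (the airway pressure is an assumed, illustrative value):

$$P_{\mathrm{TP}} = P_{\mathrm{aw}} - P_{\mathrm{pl}}.$$

At an airway pressure of 10 cmH₂O, a dependent region of a normal lung (pleural pressure ≈3 cmH₂O) retains a transpulmonary pressure of 7 cmH₂O, whereas the same region in an ARDS lung (pleural pressure ≈8 cmH₂O) retains only 2 cmH₂O of distending pressure, approaching the point of alveolar collapse.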
Ventilation-perfusion matching in typical ARDS Even if the absolute amount of perfusion is nearly normal in non-aerated regions, the ventilation/perfusion (V'/Q') ratio nears zero due to the massive loss of aeration occurring in the dependent regions. These regions act as a shunt, which is the main determinant of hypoxia in conventional ARDS [64]. (As a simplified illustration with assumed values: if 30% of cardiac output traverses shunt regions carrying a mixed venous oxygen content of 15 mL/dL while the remaining 70% leaves ideal regions with 20 mL/dL, the arterial oxygen content is 0.3 × 15 + 0.7 × 20 = 18.5 mL/dL, a value that does not improve with higher FiO₂ because shunted blood never contacts alveolar gas.) Poorly aerated regions might also play a role because they may receive proportionally more perfusion than aeration, thus functionally acting as non-shunt low V'/Q' areas (V'/Q' < 1), but their role is overwhelmed by true shunt regions (V'/Q' = 0) in conventional ARDS. Gas exchange in conventional ARDS is the result of the interaction between (1) aerated and perfused lung regions, mainly located in non-dependent lung regions; (2) atelectatic lung regions, mainly located in the dependent lung regions; (3) consolidated lung regions, prevalently distributed across the vertical gradient [65] or in the dependent part of the lung [66]; and (4) a minor amount of poorly aerated lung regions, distributed between aerated and collapsed lung regions. Response to PEEP and prone positioning in typical ARDS This model explaining gas exchange impairment in conventional ARDS has been further corroborated by the fact that progressive increases in pressure at end-inspiration [67] and at end-expiration [65] are associated with better aeration in dependent lung regions and a more homogeneous distribution of aeration and ventilation from non-dependent to dependent lung regions. Overall, across different studies, the amount of recruitable tissue related to the excess tissue mass located in the non-aerated regions ranges from 9% to 25% of the total lung weight, suggesting the role of compression atelectasis in determining the amount of recruitable tissue [54,57,66]. In addition, prone positioning, by homogenizing the pleural gradient, can redistribute aeration from non-dependent to dependent lung regions, suggesting a relevant role of atelectatic alveoli in determining changes in aeration in ARDSexp [68]. Thus, improvement in oxygenation in the prone position is mainly due to alveolar recruitment and increased regional ventilation, with limited changes in the distribution of perfusion [69]. These physiologic gains of prone positioning could reduce ventilator-induced lung injury and improve survival. Randomized trials showed conflicting results [70,71], but the most recent study, applying prolonged cycles of prone positioning in early, severe ARDS, showed a significant reduction in mortality [72]. Distribution of aeration and perfusion in severe COVID-19 Two phenotypes of COVID-19 pneumonia have been described [5,6], the first characterized by lower lung weight, higher aeration, and a lower amount of non-aerated tissue, and the second characterized by higher lung weight, lower aeration, and a higher amount of non-aerated lung tissue (Figure 1). Patients able to maintain noninvasive respiratory support are characterized by better aeration and less poorly aerated and non-aerated tissue; in contrast, patients who require invasive mechanical ventilation are characterized by less aerated tissue and more poorly aerated and non-aerated tissue [48]. The key pathophysiologic mechanisms in severe COVID-19 pneumonia are summarized in Figure 2. Loss of aeration in severe COVID-19 Severe COVID-19 pneumonia requiring invasive mechanical ventilation has an ARDS-like pattern of loss of aeration, with large amounts of non-aerated regions [73,74].
In these patients, the lung weight is roughly equivalent to that reported in ARDSexp [29,74], as is the reduced respiratory system compliance [12,74]. Nonetheless, several specific aspects of COVID-19-related respiratory failure can be highlighted. Similar to ARDSexp, COVID-19 lungs are characterized by a predominance of non-aerated tissue in dependent regions in the advanced phases of the disease, with poorly aerated ground-glass lung regions homogeneously distributed from non-dependent to dependent lung regions, typically reaching the pleura [75]. Respiratory system compliance tends to be inversely associated with the severity of hypoxemia in ARDS [76], but these two parameters might be de-coupled in COVID-19, with severe hypoxemia also observed in patients with relatively preserved compliance [74]. In a study comparing severe COVID-19 with ARDS from other causes, hypoxemia was more severe in COVID-19 than in ARDS when matched for similar respiratory system compliance [77]. These factors question the validity of the PaO₂/FiO₂ ratio as a single physiological parameter to define the severity of lung function impairment, which is a cornerstone of the Berlin definition of ARDS. In fact, the decoupling of oxygenation and compliance might result in COVID-19 patients with a very low PaO₂/FiO₂ ratio but relatively preserved compliance and normal inspiratory drive, who may not require invasive ventilation. Distribution of perfusion in severe COVID-19 Different from ARDSexp, regional perfusion shows a non-gravitational distribution that is higher in non-dependent (more aerated) and lower in dependent (non-aerated) lung regions [51]. Patients with COVID-19 have a high incidence of pulmonary capillary microthrombosis [78], pulmonary embolism [79], and venous thrombosis [80], reflected by levels of D-dimers higher than those reported with other causes of pneumonia [81], which are independently associated with increased mortality [82]. Compared with historical cohorts of patients who died from Spanish flu, the incidence of pulmonary macrothrombi in COVID-19 autopsy studies is markedly higher [83]. These findings seem compatible with a COVID-specific de novo coagulopathy with in situ pulmonary clot formation and activation of systemic coagulation pathways [84]. No specific differences in the regional antigravitational distribution of perfusion have been detected between patients with early COVID-19 receiving noninvasive respiratory support and those under invasive ventilation [51]. Ventilation-perfusion matching in severe COVID-19 As much as one-third of the lung volume in severe COVID-19 receives wasted ventilation, i.e. it is characterized by regions with a high V'/Q' ratio (V'/Q' > 1) or dead space (V'/Q' = ∞) [73]. (Clinically, wasted ventilation can be quantified with the Bohr-Enghoff dead-space fraction VD/VT = (PaCO₂ − PECO₂)/PaCO₂, where PECO₂ is the mixed expired CO₂ tension; with assumed values of PaCO₂ = 50 mmHg and PECO₂ = 25 mmHg, for example, VD/VT = 0.5.) This wasted ventilation distributes primarily in non-dependent lung regions, and non-aerated perfused lung tissue is prevalent in the dependent part of the lung. Interestingly, areas with low V'/Q' are homogeneously distributed from non-dependent to dependent lung areas [51]. Regions with a low V'/Q' ratio contribute more to impaired oxygenation in patients receiving noninvasive compared with invasive respiratory support; however, true shunt alone in invasively ventilated patients does not fully explain hypoxemia, as observed in vivo using DECT [51] and in a computational model [85]. One-third of non-aerated tissue is also characterized by non-perfused lung regions [51].
When these perfusion defects are in poorly aerated or non-aerated compartments, this might have a partial protective effect on gas exchange impairment by diversion of blood flow toward non-injured lung regions, minimizing the further deterioration of gas exchange due to low V'/Q' and true shunt. The hypothesis is that high V'/Q' areas are characterized by lower perfusion due to microthrombi and/or hyperinflation and that ground-glass and consolidated regions are partly excluded from lung perfusion by local thrombosis. Response to PEEP and prone positioning in severe COVID-19 Several studies have investigated the effects of PEEP in COVID-19, using either CT or EIT. Application of higher levels of PEEP was associated with limited alveolar recruitment in most patients with COVID-19, suggesting that non-aerated tissue is mainly characterized by consolidated, non-atelectatic lung regions [73,86]. Whereas the combination of recruitment maneuvers plus PEEP increased the amount of recruited lung tissue [87,88] compared with increasing PEEP alone [73,86], all studies consistently reported worsening of respiratory system elastance at higher PEEP. This suggests that, in invasively ventilated patients with COVID-19, PEEP levels necessary to achieve clinically meaningful lung recruitment are also associated with relevant overinflation of the non-dependent regions. Prone positioning has been used extensively in both awake [89] and sedated, intubated [90] patients with COVID-19. Although no randomized study has evaluated the efficacy of prone positioning in intubated patients with COVID-19, improvement in oxygenation has been widely reported [90,91]. However, in contrast to what occurs in most patients with ARDSexp, an increase in PaCO₂ is often observed in COVID-19 after pronation [90,92]. This might suggest that part of the non-perfused dorsal regions may receive more ventilation, thus resulting in dead space and worse CO₂ washout. This ventilation could be inefficient, amounting to mere distention of poorly perfused alveoli and giving rise to increased dead space. However, these pathophysiological hypotheses warrant confirmation in experimental and clinical studies. Moreover, the efficacy of prone positioning in invasively ventilated COVID-19 patients remains to be systematically tested in large, randomized trials. Conclusions ARDS is a complex syndrome with several causes of pulmonary and extrapulmonary lung injury. COVID-19 represents a specific sub-type of pulmonary ARDS, in which hypoxia is explained by the coexistence of scarcely recruitable non-aerated regions and large areas of low ventilation-perfusion ratio. In the initial phases, patients with COVID-19 could be managed noninvasively and respond to high concentrations of inspired oxygen. However, later stages of the disease typically require invasive ventilation and might show limited improvement with the application of higher PEEP levels. Further research is warranted to better elucidate disease-specific aspects of ARDS from causes other than COVID-19. Expert opinion Since the earliest definition of ARDS, a unifying approach was widely applied to identify therapeutic strategies, including personalized ventilatory settings, that might be used independently of the cause of lung damage and respiratory failure. This attempt to lump together different causes of lung disease in a single entity is convenient and frequently applied in clinical practice.
On the other hand, this simplistic view of ARDS might miss several disease-specific aspects of different pathologies. A first attempt to distinguish two entities within the definition of ARDS was made by classifying it based on pulmonary and extrapulmonary causes of lung injury. This classification provided important insights into the understanding of ARDS, but whereas experimental models had clear differences based on how lung injury was established, the clinical separation between these two entities is often blurred. The lack of clear evidence of ventilatory strategies acting differently in patients with pulmonary versus extrapulmonary ARDS boosted research toward more sophisticated phenotyping of ARDS. Currently, several phenotype classification methods for ARDS are under investigation based on clinical and laboratory parameters, with promising results and potential clinical implications relevant to the respiratory management of these patients. The ongoing COVID-19 pandemic has provided the opportunity to study extensively a homogeneous group of patients fulfilling the clinical criteria for ARDS while sharing the same underlying cause of lung damage. COVID-19 pneumonia is a cause of pulmonary ARDS. Compared with other causes of pulmonary ARDS, patients with COVID-19 show early endothelial activation and dysfunction. This translates into a high incidence of pulmonary and systemic hypercoagulability, which affects the distribution of pulmonary blood and regional perfusion. Patients with COVID-19 have a heterogeneous distribution of different ventilation-perfusion patterns, with a predominance of low V'/Q' in the early stages overlapping with a true shunt in the most advanced, severe cases. In severe COVID-19, the elastic properties of the lungs are not always coupled to the severity of hypoxemia, as occurs in typical ARDS. This brings into question the use of the PaO₂/FiO₂ ratio as a single indicator of the severity of the disease; this is a commonly applied strategy in typical ARDS, where cutoffs of the PaO₂/FiO₂ ratio are part of guidelines and recommendations on the indication for intensive care admission, initiation of noninvasive positive pressure respiratory support, invasive mechanical ventilation, and rescue strategies, including prone positioning and extracorporeal membrane oxygenation. The spatial distribution of loss of aeration is similar in ARDS and severe COVID-19, but the response to higher PEEP levels is modest and often accompanied by worsening of respiratory system compliance. Also, a paradoxical increase in PaCO₂ is often seen during prone positioning in COVID-19, suggesting diversion of ventilation toward scarcely perfused dorsal regions. During the ongoing COVID-19 pandemic, unprecedented use of noninvasive respiratory support has been reported, even in patients with gas exchange impairment previously considered a strict indication for intubation. However, cautious monitoring of patients receiving noninvasive respiratory support is mandatory in COVID-19, since patients ultimately requiring intubation must be identified in a timely manner to avoid further progression of the disease. It is yet to be determined how this renewed interest in noninvasive management of respiratory failure will change our research agenda and our clinical practice in non-COVID-19 ARDS. Further research is warranted to better elucidate disease-specific aspects of ARDS from causes other than COVID-19. Declaration of interest
M. Bassetti reports honoraria for lectures and other educational events from Angelini, Bayer, bioMérieux, Cipla, Gilead Sciences, Menarini, Merck Sharp & Dohme (MSD), Pfizer, and Shionogi, and grants from Pfizer and MSD, outside of the submitted work. Outside the submitted work, D.R. Giacobbe reports an unconditional grant from Correvio Italia and investigator-initiated grants from Pfizer and Gilead Italia. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. Reviewer disclosures Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.
Tropicalization of toric prevarieties

The homogeneous spectrum of a multigraded finitely generated algebra (in the sense of Brenner–Schröer) always admits an embedding into a toric variety that is not necessarily separated, a so-called toric prevariety. In order to have a convenient framework to study the tropicalization of homogeneous spectra we propose a tropicalization procedure for toric prevarieties and study its basic properties. With these tools at hand, we prove a generalization of Payne's and Foster–Gross–Payne's tropical limit theorem for divisorial schemes.

Contents: Introduction; 1. Toric prevarieties and systems of fans; 2. Tropical toric prevarieties; 3. The process of tropicalization; 4. Homogeneous spectra of multigraded rings; 5. Divisorial schemes and prevarieties; 6. Limits of tropicalizations; References.

Introduction

Non-separated compactifications of moduli spaces appear very naturally in algebraic geometry; among other examples, compactified Jacobians over semistable degenerations of algebraic curves come to mind. The upshot is that (possibly) non-separated compactifications often have a chance to be a lot closer to the original moduli problem than compactifications that are forced to be separated. Tropical geometry, as initially introduced by Mikhalkin (see e.g. [Mik06]), has become an important tool to study the combinatorial geometry of compactifications of moduli spaces (see in particular [ACP15,CCUW20]). For example, in the case of compactified Jacobians the combinatorics of the different possible limits of line bundles in the special fiber can be recovered in terms of the chip-firing combinatorics of divisors on the dual graph of a semistable curve (see [BJ16] and the references within, as well as, in particular, [AP20,MMUV21] for a perspective on this topic in terms of universal Jacobians). The first step when studying the tropicalization of an algebraic variety $Y$ over a non-Archimedean field $K$ is to choose some form of local or global coordinates. The most classical way to do this is by finding a closed embedding $i$ of $Y$ into a toric variety $X$ with big torus $T$. Then this embedding allows us to take coordinate-wise valuations in order to get a polyhedral object, the tropicalization of $Y$ with respect to the embedding $i \colon Y \hookrightarrow X$. We refer the reader to [MS15,Gub13] for more technical details in the case of algebraic tori and to [Kaj08,Pay09] in the case of toric varieties. We, in particular, point the reader to [Pay09] and [FGP14], where the authors use the tropicalization with respect to embeddings into toric varieties to study the structure of Berkovich analytic spaces. There are, however, many cases in which an embedding into a toric variety is not well understood. One aspect of this problem is that non-separated varieties do not admit closed embeddings into a toric variety, and so the phenomenon of non-separatedness cannot be captured by the process of tropicalization described above. We point to [BS03,KU19] for the theory of multihomogeneous spectra of multigraded rings that form a very natural and rich source of non-separated algebraic varieties. In this article we intend to offer a remedy to this particular problem by developing a theory of tropicalization for (possibly non-separated) toric prevarieties (as introduced and studied in [KKMSD73] and [ANH01]).
The central idea is that every toric prevariety $X$ (with big torus $T$) admits an open cover by $T$-invariant open affine subsets $U_\sigma = \operatorname{Spec} K[S_\sigma]$ (for a rational polyhedral cone $\sigma$) on which the process of tropicalization is well understood by [Kaj08,Pay09]. The main task of this paper is to glue these local pictures in a way that respects the combinatorial structure of $X$ that is described in terms of so-called systems of fans in [ANH01]. It turns out that, contrary to the situation of (separated) toric varieties, there are really two different ways of doing this: The first approach goes by using the natural stratification of the affine tropical toric variety $U_\sigma^{\mathrm{trop}} = \operatorname{Hom}(S_\sigma, \overline{\mathbb{R}})$ at infinity and gluing $X^{\mathrm{trop}}$ the same way the strata of $X$ are glued algebraically. The other approach is to consider the non-negative part $U_\sigma^{\mathrm{trop},\geq 0} = \operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}_{\geq 0})$ and to glue along the faces of $\sigma$ (which naturally endows $X^{\mathrm{trop},\geq 0}$ with the structure of an extended cone complex in the sense of [ACP15,Uli17a]). We refer to $X^{\mathrm{trop}}$ as a tropical toric prevariety and to $X^{\mathrm{trop},\geq 0}$ as a non-negative tropical toric prevariety. When $X$ is a (separated) toric variety, the non-negative tropicalization $X^{\mathrm{trop},\geq 0}$ of $X$ is a subspace of $X^{\mathrm{trop}}$, namely the closure of the fan of $X$ in $X^{\mathrm{trop}}$. Contrary to the usual tropicalization of a toric variety, the non-negative tropicalization can be generalized to toroidal embeddings (see [Thu07,Uli17a]) and therefore plays an important role when studying the tropical geometry of moduli spaces. We shall see in Section 3.1 below that there is also a natural continuous, proper and surjective tropicalization map $$\operatorname{trop}_X \colon X^{\mathrm{an}} \longrightarrow X^{\mathrm{trop}}$$ from the Berkovich analytic space $X^{\mathrm{an}}$ associated to $X$ to $X^{\mathrm{trop}}$. Any toric prevariety $X$ is the base change of a certain monoid scheme defined over $\mathbb{Z}$; so we may also speak of toric prevarieties over the valuation ring $R$ of $K$ by base change (or over any ring, for that matter). This is not strictly speaking a "variety" in the classical sense, but we prefer this terminology to avoid clumsy phrases (as, for example, the term "toric scheme" is already taken, see [KKMSD73,Gub13]). We will see in Section 3.2 below that we have a natural continuous and surjective non-negative tropicalization map $$\operatorname{trop}^{\geq 0}_X \colon X^{\bullet} \longrightarrow X^{\mathrm{trop},\geq 0}$$ from the Raynaud generic fiber $X^{\bullet}$ of a toric prevariety $X$ over the valuation ring $R$ to $X^{\mathrm{trop},\geq 0}$. The relationship between the two objects can be summarized by the commutative diagram $$\begin{array}{ccc} X^{\bullet} & \xrightarrow{\ \operatorname{trop}^{\geq 0}_X\ } & X^{\mathrm{trop},\geq 0} \\ \downarrow & & \downarrow \\ X_K^{\mathrm{an}} & \xrightarrow{\ \operatorname{trop}_X\ } & X^{\mathrm{trop}} \end{array}$$ Here the vertical arrows are the natural morphisms $X^{\bullet} \to X_K^{\mathrm{an}}$ (sending an $R'$-valued point, for a valuation ring $R'$ extending $R$, to its generic fiber) and $X^{\mathrm{trop},\geq 0} \to X^{\mathrm{trop}}$ (constructed in Proposition 2.11 below). Both vertical arrows are injective as soon as $X$ is separated, and bijective when $X$ is proper.
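To fix ideas, here is what this diagram looks like in the simplest example (an illustrative sketch; the maps themselves are only constructed in Section 3 below, with the convention that tropicalization takes $-\log$ of coordinate-wise seminorms): for $X = \mathbb{A}^1$ over $R$ we obtain $$\begin{array}{ccc} X^{\bullet} = \{|x| \leq 1\} & \longrightarrow & X^{\mathrm{trop},\geq 0} = [0, \infty] \\ \downarrow & & \downarrow \\ X_K^{\mathrm{an}} & \longrightarrow & X^{\mathrm{trop}} = \mathbb{R} \cup \{\infty\}, \end{array}$$ where a point $x$ with $|x|_x = r$ is sent to $-\log r$. Both vertical arrows are injective ($\mathbb{A}^1$ is separated) but not bijective ($\mathbb{A}^1$ is not proper).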
Given a (possibly non-separated) scheme $Y$ of finite type over $K$ as well as a (locally closed) embedding $i \colon Y \hookrightarrow X$ into a toric prevariety $X$ (over $K$), we may now define the tropicalization of $Y$ with respect to the embedding $i$ by $$\operatorname{Trop}(Y, i) := \operatorname{trop}_X\big(i^{\mathrm{an}}(Y^{\mathrm{an}})\big) \subseteq X^{\mathrm{trop}}.$$ Similarly, given a (possibly non-separated) scheme $Y$ of finite type over $R$ as well as a (locally closed) embedding $i \colon Y \hookrightarrow X$ into a toric prevariety $X$ (over $R$), we define the non-negative tropicalization of $Y$ with respect to the embedding $i$ by $$\operatorname{Trop}^{\geq 0}(Y, i) := \operatorname{trop}^{\geq 0}_X\big(i^{\bullet}(Y^{\bullet})\big) \subseteq X^{\mathrm{trop},\geq 0}.$$ The restriction of this process to every torus orbit is the usual tropicalization in the sense of [MS15,Gub13] and so, by the Bieri-Groves Theorem [BG84,EKL06], the intersection of $\operatorname{Trop}(Y, i)$ (or $\operatorname{Trop}^{\geq 0}(Y, i)$ respectively) with every stratum of $X^{\mathrm{trop}}$ (or $X^{\mathrm{trop},\geq 0}$ respectively) carries a (possibly non-unique) structure of a polyhedral complex that is rational with respect to the value group of $K$. (For instance, for the closed embedding of the line $Y = V(x + y + 1)$ into $X = \mathbb{A}^2$, the tropicalization $\operatorname{Trop}(Y, i)$ is the standard tropical line, i.e. the three rays spanned by $(1,0)$, $(0,1)$ and $(-1,-1)$, together with its two limit points $(\infty, 0)$ and $(0, \infty)$ in $\mathbb{A}^{2,\mathrm{trop}}$.) The process of tropicalization constructed here is functorial with respect to toric morphisms of the surrounding toric prevariety and so it makes sense to consider systems of embeddings into toric prevarieties. In fact, expanding on [Pay09,FGP14], we have the following:

Theorem A. Let $K$ be a complete non-Archimedean field and $R$ its valuation ring.
(i) Let $Y$ be a divisorial scheme of finite type over $K$. Then the tropicalization maps associated to all embeddings $i \colon Y \to X$ into a simplicial toric prevariety $X$ naturally induce a homeomorphism between the Berkovich analytic space $Y^{\mathrm{an}}$ and the projective limit over all tropicalizations $\operatorname{Trop}(Y, i)$.
(ii) Let $Y$ be a divisorial scheme of finite type over $R$. Then the non-negative tropicalization maps associated to all embeddings $i \colon Y \to X$ into a simplicial toric prevariety $X$ naturally induce a homeomorphism between the Raynaud generic fiber $Y^{\bullet}$ and the projective limit over all $\operatorname{Trop}^{\geq 0}(Y, i)$.

Our proof of Theorem A mostly follows along the lines laid out in [FGP14] with one main exception. In order to construct a sufficient number of embeddings into toric varieties the authors of [FGP14] rely on some rather technical results coming from [Wło93]. We provide a more elementary alternative by working with the theory of multihomogeneous spectra of multigraded algebras, as developed in [BS03] and further expanded upon in [KU19]. The price to pay for this simplification is that, in general, we cannot work with closed embeddings into toric varieties, but only with locally closed embeddings into toric prevarieties, for which we develop the tropical machinery in this article.

1. Toric prevarieties and systems of fans

Let us first recall the description of the category of toric prevarieties in terms of systems of fans that was developed by A'Campo-Neuen and Hausen in [ANH01]. Here we begin by working over the complex numbers $\mathbb{C}$; toric prevarieties over arbitrary base rings are introduced in Definition 1.6 below.

Definition 1.1. A toric prevariety is a normal integral scheme $X$ that is of finite type (but not necessarily separated) over $\mathbb{C}$, together with an operation of an algebraic torus $T$ and the choice of a point $x_0 \in X$ such that $T \hookrightarrow X$ given by $t \mapsto t \cdot x_0$ is an open embedding. A toric prevariety is a (normal) toric variety in the sense of [Ful93] or [CLS11] if and only if it is separated. A morphism $f \colon X \to X'$ between toric prevarieties $(X, x_0)$ and $(X', x'_0)$ is said to be a toric morphism if $f(x_0) = x'_0$ and there is a homomorphism $\varphi \colon T \to T'$ of algebraic tori such that $f(t \cdot x) = \varphi(t) \cdot f(x)$ for all $t \in T$ and $x \in X$.

The category of (separated) toric varieties with toric morphisms is equivalent to the category of (rational polyhedral) fans.
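To keep this dictionary concrete (a standard illustration, not part of the original text): for $N = \mathbb{Z}$ the cone $\sigma = \mathbb{R}_{\geq 0}$ has $S_\sigma = \sigma^{\vee} \cap M = \mathbb{N}$, so $U_\sigma = \operatorname{Spec} \mathbb{C}[\mathbb{N}] = \mathbb{A}^1$; the fan $\{\{0\}, \mathbb{R}_{\geq 0}\}$ corresponds to $\mathbb{A}^1$, and the complete fan $\{\mathbb{R}_{\leq 0}, \{0\}, \mathbb{R}_{\geq 0}\}$ corresponds to $\mathbb{P}^1$.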
In order to give a similar description for the category of toric prevarieties one needs to generalize this notion (see [ANH01, Section 2]).

Definition 1.2. Let $N$ be a finitely generated free abelian group and write $N_{\mathbb{R}}$ for the vector space $N \otimes \mathbb{R}$. A collection $S = (\Delta_{ij})_{i,j \in I}$ of fans in $N_{\mathbb{R}}$ (indexed by a finite set $I$) is said to be a system of fans, if (i) $\Delta_{ij} = \Delta_{ji}$ for all $i, j \in I$ and (ii) $\Delta_{ij} \cap \Delta_{jk}$ is a subfan of $\Delta_{ik}$ for all $i, j, k \in I$.

It immediately follows from these axioms that for all $i, j \in I$ the fan $\Delta_{ij}$ is a subfan of $\Delta_{ii}$ (apply Axiom (ii) with $k = i$ and use Axiom (i): $\Delta_{ij} \cap \Delta_{ji} = \Delta_{ij}$ is a subfan of $\Delta_{ii}$). To a system of fans $S = (\Delta_{ij})_{i,j \in I}$ we may naturally associate a toric prevariety $X = X(S)$ as follows: Consider the toric varieties $X_i = X(\Delta_{ii})$ for $i \in I$ and the open subvarieties $X_{ij} = X(\Delta_{ij})$ of $X_i$ respectively $X_j$ for $i, j \in I$ with $i \neq j$. By Axiom (i) we have a natural isomorphism $f_{ji} \colon X_{ij} \xrightarrow{\sim} X_{ji}$ and by Axiom (ii) we have $f_{ki} = f_{kj} \circ f_{ji}$ for all $i, j, k \in I$. Therefore we may glue the $X_i$ along the isomorphisms $X_{ij} \simeq X_{ji}$. This gluing is compatible with the respective operations of the torus $T := N \otimes \mathbb{C}^*$ and therefore we obtain a toric prevariety $X(S)$.

Every toric prevariety has two distinguished systems of charts: one is by maximal torus-invariant separated open subsets, the other by maximal torus-invariant affine open subsets. The latter corresponds to a system of fans $S$ in which every $\Delta_{ii}$ is a fan associated to a rational polyhedral cone. We refer to such systems of fans as affine systems of fans. By Sumihiro's Theorem every toric prevariety may be covered by finitely many torus-invariant open affine subsets (see [ANH01, Prop. 1.3]). Therefore every toric prevariety is of the form $X(S)$ for a system of fans $S$.

Example 1.3. Let $N = \mathbb{Z}$ and consider the system of fans $S = (\Delta_{ij})_{i,j \in \{1,2\}}$ in $N_{\mathbb{R}} = \mathbb{R}$ given by $\Delta_{11} = \Delta_{22} = \{\{0\}, \mathbb{R}_{\geq 0}\}$ and $\Delta_{12} = \Delta_{21} = \{\{0\}\}$. Then the corresponding toric prevariety $X(S)$ is given by taking two copies of $\mathbb{A}^1$, namely $X(\Delta_{11})$ and $X(\Delta_{22})$, and gluing them over $\mathbb{G}_m \subseteq \mathbb{A}^1$ via the identity $\mathbb{G}_m = X(\Delta_{12}) = X(\Delta_{21}) = \mathbb{G}_m$. So $X(S)$ is precisely the affine line with two origins.

Let $S$ be a system of fans. Consider the set of pairs $(\sigma, i)$ consisting of a rational polyhedral cone $\sigma$ in $N_{\mathbb{R}}$ and an index $i \in I$ that fulfil $\sigma \in \Delta_{ii}$. Two such pairs $(\sigma, i)$ and $(\tau, j)$ are said to be equivalent if $\sigma = \tau$ as cones and $\sigma = \tau \in \Delta_{ij}$. We write $\Omega(S)$ for the set of equivalence classes, and $[\sigma, i]$ for the class of a pair $(\sigma, i)$. (For the affine line with two origins, for instance, $\Omega(S)$ consists of the three classes $[\{0\}, 1] = [\{0\}, 2]$, $[\mathbb{R}_{\geq 0}, 1]$, and $[\mathbb{R}_{\geq 0}, 2]$.)

Let $X = X(S)$ be a toric prevariety. By [ANH01, Theorem 4.1] there is a (separated) toric variety $\overline{X}$ together with a toric morphism $X \to \overline{X}$ that is initial among all toric morphisms into (separated) toric varieties. The toric variety $\overline{X}$ is unique up to unique toric isomorphism and is called the toric separation of $X$. This construction allows one to think of a toric prevariety $X$ as being constructed from a (separated) toric variety by torus-invariantly multiplying non-dense torus orbits (e.g. in Example 1.3 the affine line with two origins).

In this section toric prevarieties have been defined over $\mathbb{C}$. Nevertheless all toric prevarieties naturally arise as a base change of a monoid scheme $X_{\mathbb{Z}}$ over $\mathbb{Z}$: for an affine toric variety $U_\sigma = \operatorname{Spec} \mathbb{C}[S_\sigma]$ associated to a rational polyhedral cone $\sigma$ we have $U_{\sigma,\mathbb{Z}} = \operatorname{Spec} \mathbb{Z}[S_\sigma]$, and in general this scheme is given by gluing the $\operatorname{Spec} \mathbb{Z}[S_\sigma]$ according to the combinatorics of the system of fans defining $X$.

Definition 1.6. Let $K$ be an arbitrary field (or, more generally, let $R$ be an arbitrary ring). A toric prevariety over $K$ (or $R$) is a base change to $K$ (or to $R$ respectively) of the monoid scheme $X_{\mathbb{Z}}$ associated to a toric prevariety $X$.
Similarly, a toric morphism of toric prevarieties is also defined over $\mathbb{Z}$, and so it makes sense to refer to a base change of such a toric morphism as a toric morphism of toric prevarieties over $K$ (or $R$) respectively.

2. Tropical toric prevarieties

2.1. Tropical toric varieties. We begin by recalling from [Kaj08,Pay09] and, in particular, from [Rab12] how to construct a tropical analogue of a toric variety. Write $\overline{\mathbb{R}}$ for the additive monoid $\mathbb{R} \cup \{\infty\}$ satisfying the rule $a + \infty = \infty$ for all $a \in \overline{\mathbb{R}}$. For a rational polyhedral cone $\sigma$ in $N_{\mathbb{R}}$ one can define the affine tropical toric variety $U_\sigma^{\mathrm{trop}}$ as a partial compactification of $N_{\mathbb{R}}$ by $$U_\sigma^{\mathrm{trop}} := \operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}).$$ The topology on $U_\sigma^{\mathrm{trop}}$ is the weakest topology that makes all evaluation maps $u \mapsto u(s)$ for $s \in S_\sigma$ continuous. Since $S_\sigma$ is finitely generated, we may choose a finite generating set $s_1, \ldots, s_n$ of $S_\sigma$, and then $\operatorname{Hom}(S_\sigma, \overline{\mathbb{R}})$ carries the subspace topology with respect to the embedding $\operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}) \hookrightarrow \overline{\mathbb{R}}^n$ given by $u \mapsto (u(s_1), \ldots, u(s_n))$. For every monoid homomorphism $u \colon S_\sigma \to \overline{\mathbb{R}}$ the preimage of $\mathbb{R} \subseteq \overline{\mathbb{R}}$ is of the form $\tau^\perp \cap S_\sigma$ for some face $\tau$ of $\sigma$. So we naturally find a stratification $$U_\sigma^{\mathrm{trop}} = \bigsqcup_{\tau \preceq \sigma} \operatorname{Hom}(\tau^\perp \cap S_\sigma, \mathbb{R}) = \bigsqcup_{\tau \preceq \sigma} N_{\mathbb{R}}/\operatorname{span}(\tau). \tag{1}$$ For a face $\tau \preceq \sigma$, restriction along $S_\sigma \subseteq S_\tau$ induces an open embedding $$U_\tau^{\mathrm{trop}} = \operatorname{Hom}(S_\tau, \overline{\mathbb{R}}) \hookrightarrow \operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}) = U_\sigma^{\mathrm{trop}} \tag{2}$$ whose image is the union of the locally closed strata that correspond to the faces of $\sigma$ that are also faces of $\tau$.

Definition 2.1. Let $\Delta$ be a rational polyhedral fan in the vector space $N_{\mathbb{R}}$ spanned by the finitely generated free abelian group $N$ and denote by $X = X(\Delta)$ the associated toric variety. The tropical toric variety associated to $\Delta$ is defined to be the colimit $$X^{\mathrm{trop}} := \varinjlim_{\sigma \in \Delta} U_\sigma^{\mathrm{trop}}$$ taken over all cones $\sigma \in \Delta$ together with maps arising as in (2) from $\tau \preceq \sigma$.

Example 2.2. The tropical toric variety $\mathbb{A}^{2,\mathrm{trop}}$ is given by $\operatorname{Hom}(\mathbb{N}^2, \overline{\mathbb{R}}) = \overline{\mathbb{R}}^2$, which is naturally stratified with strata $\mathbb{R}^2$, $\mathbb{R} \times \{\infty\}$, $\{\infty\} \times \mathbb{R}$, and $\{\infty\} \times \{\infty\}$. We may glue four copies of $\mathbb{A}^{2,\mathrm{trop}}$ as indicated in Figure 1 to obtain the tropical toric variety $(\mathbb{P}^1 \times \mathbb{P}^1)^{\mathrm{trop}}$ with its nine locally closed strata. (In the same way, gluing two copies of $\mathbb{A}^{1,\mathrm{trop}} = \overline{\mathbb{R}}$ along $\mathbb{R}$ yields $\mathbb{P}^{1,\mathrm{trop}} = [-\infty, \infty]$ with its three strata.)

Let $\Delta$ and $\Delta'$ be two fans in $N_{\mathbb{R}}$ and $N'_{\mathbb{R}}$ respectively, giving rise to toric varieties $X = X(\Delta)$ and $X' = X(\Delta')$. A morphism of fans $F \colon \Delta \to \Delta'$ induces a toric morphism $f \colon X \to X'$ as well as a continuous map $f^{\mathrm{trop}} \colon X^{\mathrm{trop}} \to X'^{\mathrm{trop}}$ that respects the stratifications.

2.2. Tropical toric prevarieties. We have seen in the previous section that a toric prevariety is defined by gluing toric varieties according to the combinatorial data of a system of fans. We now define a tropical toric prevariety by accordingly gluing tropical toric varieties along such data. Let $X = X(S)$ be a toric prevariety defined by a system of fans $S = (\Delta_{ij})_{i,j \in I}$ and write $X_{ij} = X(\Delta_{ij})$ and $X_i = X(\Delta_{ii})$. Since $\Delta_{ij}$ is a subfan of both $\Delta_{ii}$ and $\Delta_{jj}$, we obtain continuous maps $f_i \colon X_{ij}^{\mathrm{trop}} \to X_i^{\mathrm{trop}}$ and $f_j \colon X_{ij}^{\mathrm{trop}} \to X_j^{\mathrm{trop}}$. We now glue by the equivalence relation $\sim$ on the disjoint union $\bigsqcup_{i \in I} X_i^{\mathrm{trop}}$ that identifies $f_i(x)$ with $f_j(x)$ for all $x \in X_{ij}^{\mathrm{trop}}$ and all $i, j \in I$.

Definition 2.3. Let $S = (\Delta_{ij})_{i,j \in I}$ be a system of rational polyhedral fans in $N_{\mathbb{R}}$ and let $X = X(S)$ be the corresponding toric prevariety. The tropical toric prevariety associated to $X$ is defined to be $$X^{\mathrm{trop}} := \Big(\bigsqcup_{i \in I} X_i^{\mathrm{trop}}\Big)\Big/\!\sim.$$ From (1) we immediately obtain a stratification of $X^{\mathrm{trop}}$ by locally closed subsets.

The construction of $X^{\mathrm{trop}}$ is functorial with respect to toric morphisms. Indeed, by restricting a morphism $f \colon X \to X'$ of toric prevarieties to $X_i$, we obtain a morphism $X_i \to X'$ which is the gluing of its restrictions to open affine toric subvarieties of $X_i$. Since $f$ maps open affine toric subvarieties into open affine toric subvarieties, we have an induced map $X_i^{\mathrm{trop}} \to X'^{\mathrm{trop}}$, and these maps glue to a continuous map $f^{\mathrm{trop}} \colon X^{\mathrm{trop}} \to X'^{\mathrm{trop}}$. One can also express $X^{\mathrm{trop}}$ as a colimit of tropical affine toric varieties as follows. For $[\sigma, i] \in \Omega(S)$ the affine tropical toric varieties $U^{\mathrm{trop}}_{[\sigma,i]} := U^{\mathrm{trop}}_\sigma$ together with the maps (2) form a direct system, and $X^{\mathrm{trop}} = \varinjlim_{[\sigma,i] \in \Omega(S)} U^{\mathrm{trop}}_{[\sigma,i]}$.

Example 2.4. For the affine line with two origins from Example 1.3, the associated tropical toric prevariety $X^{\mathrm{trop}}$ is obtained by gluing two copies of $\mathbb{A}^{1,\mathrm{trop}} = \overline{\mathbb{R}}$ along $\mathbb{R}$; it is the tropical affine line with two origins, i.e. with the point $\infty$ doubled.

Remark 2.5. In [GG16] the authors construct a tropicalization for all schemes that admit a model over $\mathbb{F}_1$, the so-called "field with one element". One may deduce from the theory in [ANH01] that a toric prevariety $X$ naturally admits a model over $\mathbb{F}_1$, i.e. it arises as the base change of a monoid scheme to $\mathbb{C}$ (also see the discussion above Definition 1.6).
The approach of [GG16] would lead to a semiring scheme $X_{\mathbb{T}}$ over the tropical numbers, whose set of $\mathbb{T}$-valued points agrees with $X^{\mathrm{trop}}$.

2.3. Non-negative tropical toric prevarieties. There is an alternative tropical analogue of a toric variety, the so-called non-negative tropicalization, that seems to have appeared first in [PPS13] and naturally lends itself to generalizations to toroidal embeddings [Thu07,ACP15] and logarithmic schemes [Uli17a]. Write $\overline{\mathbb{R}}_{\geq 0}$ for the submonoid of $\overline{\mathbb{R}}$ consisting only of non-negative elements. For a rational polyhedral cone $\sigma$ in $N_{\mathbb{R}}$ one can define the non-negative affine tropical toric variety $$U_\sigma^{\mathrm{trop},\geq 0} := \operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}_{\geq 0}).$$ The topology on $U_\sigma^{\mathrm{trop},\geq 0}$ is the weakest topology that makes all evaluation maps $u \mapsto u(s)$ for $s \in S_\sigma$ continuous. Again, since $S_\sigma$ is finitely generated, we may choose a finite generating set $s_1, \ldots, s_n$ of $S_\sigma$, and then $\operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}_{\geq 0})$ carries the subspace topology with respect to the embedding $\operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}_{\geq 0}) \hookrightarrow \overline{\mathbb{R}}_{\geq 0}^{\,n}$ given by $u \mapsto (u(s_1), \ldots, u(s_n))$. The non-negative affine tropical toric variety $U_\sigma^{\mathrm{trop},\geq 0}$ is a canonical compactification of the cone $\sigma$ via the open embedding $$\sigma = \operatorname{Hom}(S_\sigma, \mathbb{R}_{\geq 0}) \hookrightarrow \operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}_{\geq 0}) = U_\sigma^{\mathrm{trop},\geq 0}.$$ For every monoid homomorphism $u \colon S_\sigma \to \overline{\mathbb{R}}_{\geq 0}$ the preimage of $\mathbb{R}_{\geq 0} \subseteq \overline{\mathbb{R}}_{\geq 0}$ is of the form $\tau^\perp \cap S_\sigma$ for some face $\tau$ of $\sigma$. So we naturally find a stratification $$U_\sigma^{\mathrm{trop},\geq 0} = \bigsqcup_{\tau \preceq \sigma} \sigma/\tau \tag{3}$$ into locally closed subsets. Here an element $[u] \in \sigma/\tau$ corresponds to the monoid homomorphism $S_\sigma \to \overline{\mathbb{R}}_{\geq 0}$ given by $$s \longmapsto \begin{cases} \langle u, s \rangle & \text{if } s \in \tau^\perp \cap S_\sigma, \\ \infty & \text{otherwise,} \end{cases}$$ and this does not depend on the choice of representative of $[u] \in \sigma/\tau$. This implies that for a face $\tau$ of $\sigma$ the monoid homomorphism $S_\sigma \to S_\tau$ induces a closed embedding $$U_\tau^{\mathrm{trop},\geq 0} \hookrightarrow U_\sigma^{\mathrm{trop},\geq 0}. \tag{4}$$

Definition 2.6. Let $\Delta$ be a rational polyhedral fan in the vector space $N_{\mathbb{R}}$ spanned by the finitely generated free abelian group $N$ and denote by $X = X(\Delta)$ the associated toric variety. The non-negative tropical toric variety associated to $\Delta$ is defined to be the colimit $$X^{\mathrm{trop},\geq 0} := \varinjlim_{\sigma \in \Delta} U_\sigma^{\mathrm{trop},\geq 0}$$ taken over all cones $\sigma \in \Delta$ together with maps arising as in (4) from $\tau \preceq \sigma$. The non-negative tropical toric variety $X^{\mathrm{trop},\geq 0}$ naturally carries the structure of an extended cone complex in the sense of [ACP15,Uli17a]. From (3) we immediately obtain a stratification by locally closed subsets.

Let $\Delta$ and $\Delta'$ be two fans in $N_{\mathbb{R}}$ and $N'_{\mathbb{R}}$ respectively, giving rise to toric varieties $X = X(\Delta)$ and $X' = X(\Delta')$. A morphism of fans $F \colon \Delta \to \Delta'$ induces a toric morphism $f \colon X \to X'$ as well as a continuous map $f^{\mathrm{trop},\geq 0} \colon X^{\mathrm{trop},\geq 0} \to X'^{\mathrm{trop},\geq 0}$ that respects the stratifications. As in Section 2.2 above, we may define a non-negative tropical toric prevariety by gluing according to the combinatorics of a system of fans. Let $X = X(S)$ be a toric prevariety defined by a system of fans $S = (\Delta_{ij})_{i,j \in I}$ and write $X_{ij} = X(\Delta_{ij})$ and $X_i = X_{ii}$. Since $\Delta_{ij}$ is a subfan of $\Delta_{ii} \cap \Delta_{jj}$, we have inclusion maps $X_{ij} \to X_i$ and $X_{ij} \to X_j$ which induce morphisms $f_i \colon X_{ij}^{\mathrm{trop},\geq 0} \to X_i^{\mathrm{trop},\geq 0}$ and $f_j \colon X_{ij}^{\mathrm{trop},\geq 0} \to X_j^{\mathrm{trop},\geq 0}$. As above, we now glue by the equivalence relation $\sim$ on the disjoint union $\bigsqcup_{i \in I} X_i^{\mathrm{trop},\geq 0}$.

Definition 2.7. Let $S = (\Delta_{ij})_{i,j \in I}$ be a system of rational polyhedral fans in $N_{\mathbb{R}}$ and let $X = X(S)$ be the corresponding toric prevariety. The non-negative tropical toric prevariety associated to $X$ is defined to be $$X^{\mathrm{trop},\geq 0} := \Big(\bigsqcup_{i \in I} X_i^{\mathrm{trop},\geq 0}\Big)\Big/\!\sim.$$

The construction of $X^{\mathrm{trop},\geq 0}$ is functorial. Indeed, by restricting a morphism $f \colon X \to X'$ of toric prevarieties to $X_i$, we obtain a morphism $X_i \to X'$ which is the gluing of its restrictions to open affine toric subvarieties of $X_i$. Since $f$ maps open affine toric subvarieties into open affine toric subvarieties, we have an induced map $X_i^{\mathrm{trop},\geq 0} \to X'^{\mathrm{trop},\geq 0}$, and these maps glue to a continuous map $f^{\mathrm{trop},\geq 0} \colon X^{\mathrm{trop},\geq 0} \to X'^{\mathrm{trop},\geq 0}$.
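As a concrete instance of the stratification (3) (a standard example, added here for illustration): for $\sigma = \mathbb{R}^2_{\geq 0}$ in $N_{\mathbb{R}} = \mathbb{R}^2$ we have $S_\sigma = \mathbb{N}^2$ and $U^{\mathrm{trop},\geq 0}_\sigma = [0, \infty]^2$, which decomposes as $$[0, \infty]^2 = \mathbb{R}^2_{\geq 0} \ \sqcup\ \{\infty\} \times \mathbb{R}_{\geq 0} \ \sqcup\ \mathbb{R}_{\geq 0} \times \{\infty\} \ \sqcup\ \{(\infty, \infty)\},$$ the four strata $\sigma/\tau$ corresponding to the faces $\tau = \{0\}$, the two rays, and $\sigma$ itself.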
One can also express $X^{\mathrm{trop},\geq 0}$ as a colimit of non-negative tropical affine toric varieties: for $[\sigma, i] \in \Omega(S)$ the spaces $U^{\mathrm{trop},\geq 0}_{[\sigma,i]} := U^{\mathrm{trop},\geq 0}_\sigma$ together with the maps (4) form a direct system, and $X^{\mathrm{trop},\geq 0} = \varinjlim_{[\sigma,i] \in \Omega(S)} U^{\mathrm{trop},\geq 0}_{[\sigma,i]}$.

Example 2.8. Let us consider again $X$ the affine line with two origins, viewed as a toric prevariety as in Example 1.3 and Example 2.4. The corresponding non-negative tropical toric prevariety $X^{\mathrm{trop},\geq 0}$ is obtained by gluing two copies of $U^{\mathrm{trop},\geq 0}_{\mathbb{R}_{\geq 0}} = [0, \infty]$ along their common face $\{0\}$. (Figure: The non-negative tropical affine line $X^{\mathrm{trop},\geq 0}$ with two origins mapping to the tropical affine line $X^{\mathrm{trop}}$ with two origins.) In this example, we already see that there is a natural map $X^{\mathrm{trop},\geq 0} \to X^{\mathrm{trop}}$ which is not injective (see Proposition 2.11 below for details).

Remark 2.9. Again, as in Remark 2.5, one may think of $X$ as being defined over $\mathbb{F}_1$. Taking a base change $X_{\mathcal{O}_{\mathbb{T}}}$ to the semiring $\mathcal{O}_{\mathbb{T}}$ of non-negative tropical numbers, we recover the non-negative tropicalization as the set of $\mathcal{O}_{\mathbb{T}}$-valued points of $X_{\mathcal{O}_{\mathbb{T}}}$ (see [Lor15] for more details on this in the language of blueprints).

Remark 2.10. One may think of a toric prevariety as a logarithmic scheme with respect to its boundary. When $K$ is endowed with the trivial absolute value, the tropicalization map constructed in Proposition 3.2 is exactly the tropicalization map for logarithmic schemes constructed in [Uli17a].

2.4. Comparison between $X^{\mathrm{trop}}$ and $X^{\mathrm{trop},\geq 0}$.

Proposition 2.11. Let $X = X(S)$ be a toric prevariety. Then there is a unique continuous map $X^{\mathrm{trop},\geq 0} \to X^{\mathrm{trop}}$ whose restriction to each chart $U^{\mathrm{trop},\geq 0}_{[\sigma,i]}$ is the natural inclusion $U^{\mathrm{trop},\geq 0}_\sigma \subseteq U^{\mathrm{trop}}_\sigma$. Moreover, this map is injective if and only if $X$ is separated.

Proof. For an affine toric variety $U_\sigma$ associated to a rational polyhedral cone $\sigma \subseteq N_{\mathbb{R}}$, there is a natural continuous inclusion map $U^{\mathrm{trop},\geq 0}_\sigma = \operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}_{\geq 0}) \hookrightarrow \operatorname{Hom}(S_\sigma, \overline{\mathbb{R}}) = U^{\mathrm{trop}}_\sigma$, and for a face $\tau$ of $\sigma$ the diagrams formed by these inclusions and the maps (2) and (4) commute. Therefore the locally defined map glues to an (automatically unique) continuous map $X^{\mathrm{trop},\geq 0} \to X^{\mathrm{trop}}$, as desired.

If $X$ is separated, it can be described by a rational polyhedral fan $\Delta$ (and not a system of fans). In this case $X^{\mathrm{trop},\geq 0} \to X^{\mathrm{trop}}$ restricts to the embedding of the fan $\Delta$ in $N_{\mathbb{R}}$, and the image of $X^{\mathrm{trop},\geq 0} \to X^{\mathrm{trop}}$ is precisely the closure of $\Delta$ in $X^{\mathrm{trop}}$. (For $X = \mathbb{A}^1$, for instance, this is the inclusion of $[0, \infty]$, the closure of the ray $\mathbb{R}_{\geq 0}$, into $X^{\mathrm{trop}} = \mathbb{R} \cup \{\infty\}$.) This is all of $X^{\mathrm{trop}}$ if and only if the support of $\Delta$ is all of $N_{\mathbb{R}}$, which, in turn, is equivalent to $X$ being proper. Vice versa, suppose that $X$ is not separated. Then there are two cones $\sigma_1$ and $\sigma_2$ in two different fans in the system of fans describing $X$ that are equal as cones in $N_{\mathbb{R}}$. In $X^{\mathrm{trop},\geq 0}$ these two cones are glued only over a proper face, while in $X^{\mathrm{trop}}$ those two cones would be identified in full (while their limits at infinity in $X^{\mathrm{trop}}$ remain distinct). Thus, in this case, the map $X^{\mathrm{trop},\geq 0} \to X^{\mathrm{trop}}$ is not injective.

3. The process of tropicalization

3.1. Analytification and tropicalization. In this section we construct a tropicalization map for toric prevarieties that generalizes the tropicalization for toric varieties introduced in [Kaj08] and [Pay09], and we prove some of its properties. The domain of this tropicalization map is the Berkovich analytification (in the sense of [Ber90]). Let $X = \operatorname{Spec} A$ be an affine scheme of finite type over a non-Archimedean field $K$. A point $x$ in the Berkovich analytification $X^{\mathrm{an}}$ is a multiplicative seminorm $|.|_x \colon A \to \mathbb{R}_{\geq 0}$ that restricts to the given non-Archimedean norm on $K$. The space $X^{\mathrm{an}}$ carries the weakest topology making the evaluation maps $x \mapsto |f|_x$ for all $f \in A$ continuous. For a morphism $\varphi \colon Y \to X$ of affine schemes of finite type over $K$ there is a natural continuous map $\varphi^{\mathrm{an}} \colon Y^{\mathrm{an}} \to X^{\mathrm{an}}$ that is given by $|f|_{\varphi^{\mathrm{an}}(y)} = |\varphi^{\#}(f)|_y$ for $f \in A$. When $X$ is a not necessarily affine scheme that is locally of finite type over $K$, the Berkovich analytification $X^{\mathrm{an}}$ is locally on affine open subsets given by the above space of seminorms and globally by gluing. This way every point in $X^{\mathrm{an}}$ may be represented by an $L$-valued point for a suitable non-Archimedean extension $L$ of $K$. Moreover, for a morphism $f \colon X \to Y$ of schemes locally of finite type over $K$, there is an induced continuous map $f^{\mathrm{an}} \colon X^{\mathrm{an}} \to Y^{\mathrm{an}}$ that restricts to the above pullbacks of seminorms on open affine subsets; the association $f \mapsto f^{\mathrm{an}}$ is functorial in $f$.
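As a concrete family of such points (a classical example, included here for illustration): on $\mathbb{A}^1 = \operatorname{Spec} K[x]$, every real number $r$ with $0 < r \leq 1$ gives rise to the Gauss seminorm $$\Big|\sum_i a_i x^i\Big|_{\eta_r} := \max_i |a_i|\, r^i,$$ which is multiplicative and restricts to the given norm on $K$, and hence defines a point $\eta_r \in \mathbb{A}^{1,\mathrm{an}}$.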
We refer the reader to [Ber90] and also to [Pay15] for further details and, in particular, to [Ber90] for the definition of a structure sheaf on X^an. Since here we only care about the underlying topological space of X^an, we refrain from going further into this direction.

Proposition 3.1. Let X = X(S) be a toric prevariety over K given by a system of fans S = (∆_{ij})_{i,j∈I}.
(i) There is a natural continuous and proper tropicalization map trop_X : X^an → X^{trop} whose restriction to a torus-invariant open affine subset U_{[σ,i]} for a cone [σ,i] ∈ Ω(S) is given by

  U_{[σ,i]}^an → U_{[σ,i]}^{trop},  x ↦ (s ↦ −log|χ^s|_x).

(ii) For a toric morphism f : X → X' induced by a morphism (F, f) : S → S' of systems of fans, the natural diagram

  (6)  trop_{X'} ∘ f^an = f^{trop} ∘ trop_X

commutes.
(iii) There is a natural section J_X : X^{trop} → X^an of trop_X such that the composition r_X = J_X ∘ trop_X : X^an → X^an is a strong deformation retraction onto a closed subset Σ(X^an) of X^an (that is automatically homeomorphic to X^{trop}).
(iv) Let T^• = {t ∈ T^an : |χ^m|_t = 1 for all m ∈ M} be the affinoid torus in T^an. There is a natural homeomorphism between the underlying topological space of the non-Archimedean analytic stack quotient [X^an/T^•] and X^{trop} that makes the diagram formed by trop_X and the projection X^an → [X^an/T^•] commute.

Proof. Let f : U_σ → U_{σ'} be a toric morphism of affine toric varieties and write f^# both for the induced K-algebra homomorphism K[S_{σ'}] → K[S_σ] and for the underlying monoid homomorphism S_{σ'} → S_σ. Let x ∈ U_σ^an and s' ∈ S_{σ'}. Then we have

  trop(f^an(x))(s') = −log|χ^{s'}|_{f^an(x)} = −log|f^#(χ^{s'})|_x = trop(x)(f^#(s')),

and this shows that the diagram (6) commutes.

For Part (i) let σ be a rational polyhedral cone. The commutativity of (6) applied to the open embedding of a torus-invariant open subset U_τ ⊆ U_σ associated to a face τ of σ shows that trop_{U_σ} naturally restricts to trop_{U_τ} on U_τ^an ⊆ U_σ^an. The existence of trop_X now follows, since we can write X^an as the colimit of the U_{[σ,i]}^an for [σ,i] ∈ Ω(S).

Let A be a compact subset of X^{trop}; in particular, A is a quasi-compact Hausdorff space. Then A has to lie in a Hausdorff open subset of X^{trop} of the form X(∆_i)^{trop} for a fan ∆_i in S (here we choose the fans in S to be maximal). Then trop_X^{-1}(A) ⊆ X(∆_i)^an and so it is enough to show the properness of trop_X for a (separated) toric variety. This is a fact that seems to be well known in the community; we only provide a proof since we could not find it written explicitly in the literature. Let X = X(∆) be a toric variety associated to a fan ∆ in the vector space N_ℝ generated by the cocharacter lattice N of its big torus T. Let A ⊆ X^{trop} be a compact subset. Choose a toric completion X̄ of X. Then A is a closed subset of X̄^{trop}. Its preimage trop_{X̄}^{-1}(A) is closed, since trop_{X̄} is continuous; so, using that X̄^an is compact, we find that trop_X^{-1}(A) = trop_{X̄}^{-1}(A) is compact as well.

For Part (ii) it suffices to check the statement for a torus-invariant open affine subset U_σ ⊆ X, and then this is the commutativity of (6).

In Part (iii) we again first consider a torus-invariant open affine subset U_σ of X. Here the section J_{U_σ} : U_σ^{trop} → U_σ^an is given by associating to u ∈ U_σ^{trop} = Hom(S_σ, ℝ̄) the multiplicative seminorm defined by

  |∑_{s∈S_σ} a_s χ^s| := max_{s∈S_σ} |a_s| e^{−u(s)}

for an element ∑_{s∈S_σ} a_s χ^s ∈ K[S_σ]. It is clear that this map is continuous and that the equality trop_{U_σ} ∘ J_{U_σ} = id_{U_σ^{trop}} holds. Set ρ_{U_σ} := J_{U_σ} ∘ trop_{U_σ} and observe that ρ_{U_σ} ∘ ρ_{U_σ} = ρ_{U_σ}. Therefore ρ_{U_σ} : U_σ^an → U_σ^an is a retraction onto a closed subset of U_σ^an, the non-Archimedean skeleton of U_σ^an (which is automatically homeomorphic to U_σ^{trop}). We refer the reader to [Thu07, Section 2] for an explicit construction of a strong homotopy between the identity map on U_σ^an and ρ_{U_σ}. Since all of these constructions are naturally compatible with restrictions to a torus-invariant open affine subset U_τ of U_σ for a face τ of σ, there is also a continuous section J_X : X^{trop} → X^an of trop_X, a retraction ρ_X : X^an → X^an given by ρ_X := J_X ∘ trop_X, and a strong homotopy between the identity on X^an and ρ_X.

In Part (iv) we may again reduce to a torus-invariant open affine subset U_σ and in this case the proof is identical to the one presented in [Uli17b].
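For instance (a worked example of ours, specializing Part (iii)), take U_σ = A^1_K = Spec K[T], so that U_σ^{trop} = Hom(Z_{≥0}, ℝ̄) ≅ ℝ ∪ {∞}. The section J sends u to the seminorm
\[
J(u)\Big(\sum_i a_i T^i\Big) = \max_i |a_i|\, e^{-iu},
\]
that is, to the Gauss-type point of the closed disc of radius e^{-u} around the origin. The retraction ρ = J ∘ trop sends a seminorm x to the Gauss point of radius |T|_x, and the skeleton Σ(A^{1,an}) is the resulting path of Gauss points running from the evaluation point at the origin (u = ∞) out to discs of larger and larger radius.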
3.2. Raynaud generic fiber and non-negative tropicalization. In this section we construct a natural non-negative tropicalization map, which seems to have appeared first in [PPS13] for affine toric varieties. The domain of the non-negative tropicalization map is not the Berkovich analytification, but rather a certain variant called the Raynaud generic fiber that, in its simplest form, associates to a scheme X locally of finite type over the valuation ring R of a non-Archimedean field K a Berkovich analytic space X^• (see [Tem15, Sections 5.2 and 5.3] for details). On an affine R-scheme X = Spec A of finite type the underlying topological space of the Raynaud generic fiber X^• is the closed subset of X_K^an that parametrizes multiplicative seminorms |·|_x : A → ℝ_{≥0} that are bounded, i.e. that fulfill |f|_x ≤ 1 for all f ∈ A. Given a morphism φ : Y → X of affine schemes X = Spec A and Y = Spec B of finite type over R, the induced map φ^an : Y_K^an → X_K^an restricts to a natural continuous map φ^• : Y^• → X^•. Notice, however, that for an open affine subset U = Spec B of X = Spec A, the subset U^• ⊆ X^• is closed (and, in fact, a so-called affinoid domain). For a not necessarily affine scheme X that is locally of finite type over R, the Raynaud generic fiber is the colimit of all U^• for open affine subsets U = Spec A of X. Note that this way every point of X^• may be represented by an R'-valued point for the valuation ring R' of a non-Archimedean extension K' of K. Also every morphism f : X → Y of schemes locally of finite type over R induces a natural continuous map f^• : X^• → Y^• that restricts to the pullback of seminorms on open affine subsets; the association f ↦ f^• is functorial. There is a natural continuous map X^• → X^an that is locally induced by the inclusions U^• ↪ U^an on open affine subsets U = Spec A of X. By the valuative criteria, this map is injective if and only if X is separated, and bijective if and only if X is proper.

Proposition 3.2. Let X be a toric prevariety over a complete valuation ring R given by a system of fans S = (∆_{ij})_{i,j∈I}.
(i) There is a natural continuous and proper tropicalization map trop_X : X^• → X^{trop,≥0} whose restriction to a torus-invariant open affine subset U_{[σ,i]} for a cone [σ,i] ∈ Ω(S) is given by

  U_{[σ,i]}^• → U_{[σ,i]}^{trop,≥0},  x ↦ (s ↦ −log|χ^s|_x).

(ii) For a toric morphism f : X → X' induced by a morphism (F, f) : S → S' of systems of fans, the natural diagram

  (7)  trop_{X'} ∘ f^• = f^{trop,≥0} ∘ trop_X

commutes.
(iii) There is a natural section J_X : X^{trop,≥0} → X^• of trop_X such that the composition r_X = J_X ∘ trop_X : X^• → X^• is a strong deformation retraction onto a closed subset Σ(X^•) of X^• (which is automatically homeomorphic to X^{trop,≥0}).
(iv) Let T^• = {t ∈ T^an : |χ^m|_t = 1 for all m ∈ M} be the affinoid torus in T^an. There is a natural homeomorphism between the underlying topological space of the non-Archimedean analytic stack quotient [X^•/T^•] and X^{trop,≥0} that makes the diagram formed by trop_X and the projection X^• → [X^•/T^•] commute.

Proof. Let f : U_σ → U_{σ'} be a toric morphism between two affine toric varieties associated to the rational polyhedral cones σ and σ'. Then the commutativity of (6) implies the commutativity of (7). One now argues as in the proof of Proposition 3.1 in order to show Parts (i)-(iii). We point out that the proof of the properness of trop_X for a (separated) toric variety X is easier here, since both X^• and X^{trop,≥0} are already compact.

For Part (iv) we again reduce to the case of a torus-invariant open affine subset U_σ of X. We recall from [Uli17b] that the central part of the argument was that U_{σ,K}^{trop} is the topological 1-colimit of the groupoid presentation T_K^• × U_{σ,K}^an ⇉ U_{σ,K}^an of [U_{σ,K}^an/T_K^•]. But this implies that U_σ^{trop,≥0} is the topological 1-colimit of the groupoid presentation T^• × U_σ^• ⇉ U_σ^•, and so we find that X^{trop,≥0} is naturally homeomorphic to the underlying topological space |[X^•/T^•]| of [X^•/T^•] (in the sense of [Uli17b, Section 3]).
The non-negative tropicalization map is compatible with the usual tropicalization map, as indicated in the following.

Proposition 3.3. Let X = X(S) be a toric prevariety over the valuation ring R of a non-Archimedean field K and write X_K := X ×_R K for its scheme-theoretic generic fiber. Then the natural diagram comparing trop_X : X^• → X^{trop,≥0} with trop_{X_K} : X_K^an → X_K^{trop} (via the inclusion X^• ↪ X_K^an and the comparison map X^{trop,≥0} → X_K^{trop} of Section 2.4) commutes. So, when X is separated, the non-negative tropicalization map is precisely the restriction of the usual tropicalization map to X^• ⊆ X_K^an.

Remark 3.4. When K carries the trivial absolute value, its valuation ring is simply K itself and the Raynaud generic fiber of a scheme X that is locally of finite type over K is nothing but the ℶ-space X^ℶ constructed in [Thu07]. The non-negative tropicalization map then turns out to be a special case of the tropicalization map constructed in [Uli17a] for a fine and saturated logarithmic scheme X. This tropicalization map, in turn, can be identified with a retraction onto a non-Archimedean skeleton of X^ℶ when X is logarithmically smooth over K, by [Thu07].

4. Homogeneous spectra of multigraded rings

In this section we show how to associate a reasonably well-behaved scheme to a ring S graded by a finitely generated abelian group D. The construction was invented by Brenner and Schröer [BS03], and it generalizes the usual projective spectrum of an N-graded ring. A more detailed discussion of this material (in particular an analysis of the connection with convex geometry) can be found in [KU19]. Here we will only focus on the construction itself, and on the path leading to embeddings of divisorial schemes into simplicial toric prevarieties. This section is expository.

For the duration of this section let D be a finitely generated abelian group, and let S = ⊕_{d∈D} S_d be a D-graded ring. We will write S_d for the degree-d homogeneous piece of S, and deg_D(f) for the degree of a homogeneous element f ∈ S. For a suitable ('relevant') homogeneous element f ∈ S, the projection Spec(S_f) → Spec(S_{(f)}), where S_{(f)} := (S_f)_0, will be a geometric quotient.

As is well known, a grading of S by a finitely generated abelian group D corresponds to the action of the diagonalizable group scheme Spec(S_0[D]) on Spec S. If S is an algebra over a field then geometric invariant theory [MFK94, Theorem 1.1] tells us that the projection morphism Spec(S) → Spec(S_0) is a categorical quotient in the category of schemes. Note, however, that the latter space is often very simplistic (e.g. consider the case of a polynomial ring over a field graded by the degree), hence not what we want in general. Brenner and Schröer take a different route: Spec(S) has a quotient Quot(S) in the category of ringed spaces (see [Liu02, Exercise 2.14]), which is often quite different from the quotient in the category of schemes. The ringed space Quot(S) will serve as an 'ambient space' for the construction of the space we are looking for. It is important to point out that the construction of [BS03] works over arbitrary rings, and not only for algebras over fields as in [MFK94].
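To spell out the correspondence between gradings and actions of diagonalizable group schemes in a small example of ours: for S = k[T_1, T_2] with the Z^2-grading by multidegree, one has Spec(S_0[D]) = Spec k[Z^2] = G_m^2, and the corresponding action is
\[
(t_1, t_2)\cdot(x_1, x_2) = (t_1 x_1,\ t_2 x_2),
\]
i.e. the coaction S → S ⊗ k[Z^2] sends a homogeneous element f of degree d to f ⊗ χ^d.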
Localization by relevant elements yields periodic rings, for which the two quotients (in the category of schemes and in the category of ringed spaces) agree. Homogeneous localization by relevant elements thus yields open sub-ringed-spaces of Quot(S) that are schemes themselves. This in turn yields the following construction.

Definition 4.4 (Homogeneous spectrum of a multigraded ring). Let D be a finitely generated abelian group, and let S be a D-graded ring. We define the (multi)homogeneous spectrum of S as the scheme

  Proj_D(S) := ⋃_{f ∈ Rel_D(S)} Spec S_{(f)} ⊆ Quot(S).

As one would expect, we can define the irrelevant ideal of S via

  S_+ := (f : f ∈ Rel_D(S)) ⊴ S,

and call V(S_+) ⊆ Spec(S) the irrelevant subscheme. This of course depends on D to a large extent. One then obtains an affine projection morphism

  Spec(S) ∖ V(S_+) → Proj_D(S),

which is a geometric quotient for the induced action. The points of Proj_D(S) correspond to graded (not necessarily prime!) ideals p ⊴ S not containing S_+ and such that the subset of homogeneous elements H ⊆ S ∖ p is closed under multiplication.

Remark 4.5. If S is an integral domain then Proj_D(S) is an integral scheme by [KU19, Lemma 3.6].

The following example is taken from [KU19]. It illustrates the strong dependence of Proj_D(S) on the grading.

Example 4.6. Let k be a field. We will compute multihomogeneous spectra of the ring S = k[T_1, T_2] for various gradings. This example serves to illustrate that regradings have a profound effect on the geometry of Proj_D(S) (including separatedness and dimension), and highlights the contrast to the classical N-graded spectrum.
(1) First let D = Z^2, and grade S by multidegree, that is, set S_{(a,b)} = k·T_1^a T_2^b for (a,b) ∈ N^2. Then homogeneous elements are constant multiples of monomials, and the relevant ones are the monomials c·T_1^a T_2^b with c ∈ k^× and a, b ≥ 1. As a consequence, we obtain S_{(f)} ≃ k for any f ∈ Rel_D(S), and Proj_D(S) turns out to be a point.
(2) Let D = Z^2 again, but now let S_{(a,0)} be spanned by the monomials of total degree a, with S_{(a,b)} = 0 for b ≠ 0. This is the grading induced by the (non-surjective) homomorphism δ : Z^2 → Z^2, (c,d) ↦ (c+d, 0). Since all degrees lie on the a-axis, no homogeneous element is relevant, hence Proj_D(S) = ∅.
(3) For the sake of completeness let now D = Z, and consider the grading of S by total degree. Essentially by definition, Proj_D(S) is the usual N-graded projective spectrum of S, that is, Proj_D(S) ≃ P^1. The grading in this case can be realized via the surjective homomorphism δ : Z^2 → Z given by (a,b) ↦ a + b.
(4) Last, we look at the surjective regrading δ : Z^2 → Z given by (a,b) ↦ a − b. By [BS03, beginning of Section 3], we have that Proj_D(S) is isomorphic to the affine line with double origin.

As we saw above, one issue with the construction is that Proj_D(S) might not be separated if rk D ≥ 2. It is in general an interesting question what conditions on D and S would guarantee separatedness. We will not pursue this here; in fact, the lack of separatedness is a feature that enables us to study prevarieties as locally closed subschemes of toric prevarieties.

Example 4.7. Let S = R[T_1, ..., T_r] be a polynomial ring over a commutative ring and suppose that it is graded via a surjective group homomorphism φ : Z^r → Z^m. Then Proj_{Z^m}(S) is a simplicial toric prevariety by [BS03, Proposition 3.4]. The discussion in [ANH01, Section 8] shows that every simplicial toric prevariety arises in this fashion.
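Returning to Example 4.6 (4), here is a short verification of ours, assuming the grading deg T_1 = 1, deg T_2 = −1 over D = Z as reconstructed there: both T_1 and T_2 are relevant, and with u := T_1 T_2 one computes
\[
S_{(T_1)} = k[u], \qquad S_{(T_2)} = k[u], \qquad S_{(T_1 T_2)} = k[u^{\pm 1}],
\]
so Proj_D(S) is covered by two affine lines Spec k[u] glued along G_m = Spec k[u^{±1}] via the identity, which is precisely the affine line with doubled origin.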
We now move on to describing morphisms and rational maps to projective spectra. The following is an elaboration of [BS03, Proposition 4.1 and Proposition 4.2]; see also [KU19, Section 2]. Let D be a finitely generated abelian group, let X be a scheme, and let B = ⊕_{d∈D} B_d be a D-graded quasi-coherent sheaf of O_X-algebras.

Definition 4.8. With notation as above, let s ∈ Γ(X, B_d) be a homogeneous section of degree d. We write X_s ⊆ X for the largest open subset over which multiplication by s on B is invertible.

Proposition 4.9. Let φ : S → Γ(X, B) be a D-graded ring homomorphism and set U_{B,φ} := ⋃_{f ∈ Rel_D(S)} X_{φ(f)}. Then
(1) the subset U_{B,φ} ⊆ X is non-empty and open;
(2) there exists a natural morphism of schemes r_{B,φ} : U_{B,φ} → Proj_D(S) along with a commutative diagram relating it to the corresponding affine data. The unnamed morphisms pointing to the left are natural projections, those pointing to the right are natural inclusion morphisms.

Proof. Let f ∈ Rel_D(S). Then the D-graded homomorphism φ induces homomorphisms between the localizations of S at f and the sections of B over X_{φ(f)}. Taking the associated morphisms of affine schemes gives rise to the commutative diagram; the square on the right-hand side in the Proposition commutes by the definition of U_{B,φ}. Next, by the definition of X_{φ(f)}, the multiplication maps

  φ(f)^m · : Γ(X_{φ(f)}, B_e) → Γ(X_{φ(f)}, B_{e + m·deg_D(f)})

are bijective for all natural numbers m. This way we obtain a homomorphism S_{(f)} → Γ(X_{φ(f)}, O_X). The composition yields a morphism of schemes

  X_{φ(f)} → Spec Γ(X_{φ(f)}, O_X) → Spec S_{(f)} = D_+(f).

These morphisms are compatible with intersections, therefore they give rise to a morphism r_{B,φ} : U_{B,φ} → Proj_D(S). Finally, the morphism in the diagram arises from the ring homomorphisms just described.

5. Divisorial schemes

The notion of a projective variety certainly has a number of desirable properties, but it does not suffice in a number of situations, especially when it comes to moduli theory. Therefore, relaxations of the concept are of interest. Traditionally one can consider quasi-projective varieties or projective schemes, both of which keep the requirement of the existence of an ample invertible sheaf. Here we recall a more recent variant due to Borelli [Bor63, BS03] which asks for a lighter version of this, while still maintaining most of the useful properties of projective schemes. At the heart of the definition lies the following generalization of Grauert's criterion for ample sheaves.

Let L_1, ..., L_m be invertible sheaves on a scheme X. For a multiindex d = (d_1, ..., d_m) ∈ N^m we write L^{⊗d} := L_1^{⊗d_1} ⊗ ... ⊗ L_m^{⊗d_m}. Proposition 5.1 gives several equivalent characterizations of such collections; the one used repeatedly below is condition (3): for every point x ∈ X there exist a Q-basis d^{(1)}, ..., d^{(m)} of N^m along with global sections f_i ∈ Γ(X, L^{⊗d^{(i)}}) such that the open subsets X_{f_i} are affine neighbourhoods of x. We refer the reader to [BS03, Proposition 1.1] for the full statement and a proof.

Definition 5.2 (Ample families and divisorial schemes). Let X be a qcqs (quasi-compact and quasi-separated) scheme and L_1, ..., L_m a collection of invertible sheaves on X. If the invertible sheaves L_1, ..., L_m satisfy the equivalent conditions of Proposition 5.1 then we call them an ample family or an ample system. A qcqs scheme is called divisorial if it admits an ample family of invertible sheaves.

Proposition 5.3. Let X be a divisorial scheme, L_1, ..., L_r an ample system on X, and let Y ⊆ X be a locally closed subscheme. Then the restrictions L_1|_Y, ..., L_r|_Y form an ample system on Y; in particular, Y is a divisorial scheme as well.

Proof. We need to prove that Y is a divisorial scheme, that is, that L_1|_Y, ..., L_r|_Y form an ample system. We will prove that the collection of open subsets Y_{s|_Y}, for d ∈ N^r and s ∈ Γ(X, L^{⊗d}), forms a base of the Zariski topology on Y. To this end, let V ⊆ Y be an open subset. Since the Zariski topology of Y is the subspace topology of the Zariski topology of X, there exists an open subset U ⊆ X such that V = U ∩ Y. As L_1, ..., L_r form an ample family, there exist an index set I, multiindices d_i ∈ N^r, and global sections s_i ∈ Γ(X, L^{⊗d_i}) such that U = ⋃_{i∈I} X_{s_i}, and consequently V = ⋃_{i∈I} (X_{s_i} ∩ Y). By Lemma 5.4 below, X_{s_i} ∩ Y = Y_{s_i|_Y}, therefore we are done.

Lemma 5.4. Let X be an arbitrary scheme, let L be an invertible sheaf on X, and let Y ⊆ X be a locally closed subscheme. For every s ∈ Γ(X, L) we have Y_{s|_Y} = X_s ∩ Y.

Proof. This is part of [Bor63, Proposition 2.3]. We point out that his argument (along with those for [Bor63, Proposition 2.1 and Proposition 2.2]) remains valid over an arbitrary scheme.
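A simple example of ours: on X = P^1_k the single invertible sheaf L = O(1) is an ample family. Indeed, the sections x_0, x_1 ∈ Γ(X, O(1)) give
\[
X_{x_0} = \{[x_0 : x_1] : x_0 \neq 0\} \cong \mathbb{A}^1, \qquad X_{x_1} \cong \mathbb{A}^1,
\]
affine neighbourhoods of every point, and more generally the open sets X_s for s ∈ Γ(X, O(d)), d ≥ 1, are the complements of finite sets of points, which form a base of the Zariski topology. By Proposition 5.3, every locally closed subscheme Y ⊆ P^1 is then divisorial as well, with O(1)|_Y as an ample system.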
Lemma 5.5. Let X be a qcqs scheme and let L_1, ..., L_r be an ample family. The collection of the affine open subsets of the form X_f, for relevant sections f, forms an open cover of X.

Proof. Let x ∈ X be an arbitrary point. By Proposition 5.1 (3) there exist a Q-basis d^{(1)}, ..., d^{(m)} of N^m along with global sections f_i ∈ Γ(X, L^{⊗d^{(i)}}) such that the open subsets X_{f_i} are affine neighbourhoods of x. Consider f := f_1 ⋯ f_m. Then f is relevant and X_f = X_{f_1} ∩ ... ∩ X_{f_m} is affine.

In analogy with the case of projective schemes arising as homogeneous spectra of singly graded rings, we will see that homogeneous spectra of ample families are divisorial schemes as well.

Proposition 5.7. Let D be a finitely generated abelian group and S a D-graded ring which is finitely generated as an S_0-algebra. Then Proj_D(S) is a divisorial scheme.

The following statement is a slight variation of [BS03, Theorem 4.4]. Since it does not follow directly from loc. cit., we give a proof. The main point of this exercise is to obtain a proof of the case when R is not required to be noetherian. We rely on Proposition 4.9 along with its notation.

Proposition 5.8. Let X be a scheme of finite type over a ring R (not necessarily noetherian), and let L_1, ..., L_r be a finite collection of invertible sheaves on X. If the invertible sheaves L_1, ..., L_r constitute an ample family then the following hold.
(1) The rational map X ⇢ Proj_{N^r}(S), where S := ⊕_{d∈N^r} Γ(X, L^{⊗d}), is everywhere defined and an open embedding. Here we mean the rational map r_{B,φ} of Proposition 4.9 associated to the tautological graded homomorphism.
(2) There exist a finite index set I, a finite set of degrees {d_i | i ∈ I}, a collection of global sections {f_i ∈ Γ(X, L^{⊗d_i}) | i ∈ I}, and an N^r-graded polynomial algebra A = R[T_i | i ∈ I] such that the rational map Φ : X ⇢ Proj_{N^r}(A) induced by the N^r-graded ring homomorphism coming from T_i ↦ f_i is everywhere defined and an embedding.

Proof. Part (1): Let x ∈ X be an arbitrary point. We will show that x ∈ U_{B,φ}, that is, that the rational map r_{B,φ} : X ⇢ Proj_{N^r}(S) is defined at x. By part (3) of Proposition 5.1, there exist a Q-basis d_1, ..., d_m of N^r and a collection of sections f_j ∈ Γ(X, L^{⊗d_j}) for 1 ≤ j ≤ m such that the subsets X_{f_j} for 1 ≤ j ≤ m are affine neighbourhoods of x. Consider the element f := f_1 ⋯ f_m. Since deg(f) is in the interior of the maximal-dimensional cone spanned by the Q-basis elements d_1, ..., d_m, the product f ∈ S is a relevant homogeneous element. As x ∈ X_f ⊆ U_{B,φ}, the rational map r_{B,φ} is indeed defined at x. Therefore, r_{B,φ} : X → Proj_{N^r}(S) is indeed a morphism.

Next, pick a relevant element f ∈ S having the property that X_f ⊆ X is an affine open subset. The canonical homomorphism from the localization S_f to the section algebra of L over X_f is an N^r-graded isomorphism, hence its restriction to degree 0 is an isomorphism as well. But then the induced morphism X_f → Spec S_{(f)} = D_+(f) is an isomorphism of schemes. Consequently, since L_1, ..., L_r is an ample family on X, the collection of such open subsets X_f forms an open cover of X by Lemma 5.5, and hence the morphism r_{B,φ} is an open immersion.

Part (2): This is again a minor modification of [BS03, Theorem 4.4] with an eye on the fact that the base ring R does not need to be noetherian. As far as the exposition goes, we borrow liberally from [KU19, Section 4]. Assume again that L_1, ..., L_r form an ample family. Then there exists a finite set of relevant global sections f_j with 1 ≤ j ≤ m such that the corresponding open subsets X_{f_j} form an affine cover of X. Consequently, we have X_{f_j} ≃ Spec A_j, or, equivalently, Γ(X_{f_j}, O_X) ≃ A_j as R-algebras, where the A_j's are finitely generated algebras over R. Let h_{j,1}, ..., h_{j,m_j} be a generating set of A_j over R; then for every 1 ≤ j ≤ m and every 1 ≤ i ≤ m_j there exists n_{j,i} ∈ N such that g_{j,i} := f_j^{n_{j,i}} h_{j,i} extends to a global section of the appropriate L^{⊗d}. Take A := R[T_{(j,i)}] with deg T_{(j,i)} := deg(g_{j,i}), so that the natural homomorphism A → S given by T_{(j,i)} ↦ g_{j,i} is homogeneous.
Observe that although the homomorphism A → S above might not be surjective itself, the induced homomorphisms of rings A_{(T_{(j,i)})} → S_{(g_{j,i})} all are, hence the corresponding morphisms of schemes X_{g_{j,i}} → D_+(T_{(j,i)}) are closed embeddings. As the X_{g_{j,i}} form an open cover of X, we obtain a locally closed embedding X ↪ Proj_D(A). If X is proper then this is a closed embedding.

Remark 5.9. According to [BS03, Theorem 4.4], the conclusion of Proposition 5.8 (1) conversely implies the ampleness of the family L_1, ..., L_r without any further condition, and that of (2) implies the ampleness of L_1, ..., L_r assuming that X is of finite type over a noetherian ring.

Theorem 5.10 (Embedding properties of divisorial schemes, [BS03, Corollary 4.7], [HS04]). Let X be a scheme of finite type over a not necessarily noetherian ring R. If X is divisorial, then
(1) X can be embedded into a simplicial toric prevariety over R with affine diagonal;
(2) X can be embedded into the homogeneous spectrum of a multigraded R-algebra of finite type.

Proof. For a divisorial scheme X of finite type over a not necessarily noetherian ring R, Proposition 5.8 implies the existence of a locally closed embedding of X into Proj_{N^r}(A) for an N^r-graded R-algebra A finitely generated over R. This yields (2). Given (2), [BS03, Proposition 3.4] implies the existence of an embedding of X into a simplicial toric prevariety over R; the fact that the latter has affine diagonal is shown in [BS03, Proposition 3.1].

Proposition 5.11. Let X be a simplicial toric prevariety over an algebraically closed field, and let {D_ρ | ρ ∈ Ω(S)(1)} be the set of torus-invariant prime divisors on X. Then there exist positive integers n_ρ such that the collection of line bundles O_X(n_ρ D_ρ) forms an ample system on X. In particular, a simplicial toric prevariety is divisorial.

Proof. Given that X is simplicial, for every torus-invariant prime Weil divisor D_ρ there exists a positive multiple n_ρ D_ρ which is Cartier, hence O_X(n_ρ D_ρ) is a line bundle. Let x ∈ X be an arbitrary point; then we can find a torus-invariant affine open subset U_σ which contains x. Since the n_ρ D_ρ are effective and therefore Cartier divisors, there exist global sections s_ρ ∈ Γ(X, O_X(n_ρ D_ρ)) such that supp(s_ρ) = D_ρ. Consider the global section s := ∏_{ρ ∉ σ(1)} s_ρ. Then we have X_s = U_σ, which is affine by choice. Therefore, the collection of line bundles O_X(n_ρ D_ρ) for ρ ∈ Ω(S)(1) forms an ample system.

Corollary 5.12. Let R be a ring (not necessarily noetherian), and let X := Proj_D(R[T_i | i ∈ I]) be a simplicial toric prevariety over R. Then X is a divisorial scheme.

Proof. Consider the simplicial toric prevariety Proj_D(Q[T_i | i ∈ I]) associated to the same D-grading. According to Proposition 5.11, Proj_D(Q[T_i | i ∈ I]) is a divisorial prevariety. Since the torus-invariant prime divisors D_ρ, the associated line bundles O(n_ρ D_ρ), and the global sections s_ρ are all defined over Z, the scheme Proj_D(Z[T_i | i ∈ I]) (coming from the same D-grading) is again divisorial. But then so is its base change X to Spec R.
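To see Proposition 5.11 at work in the smallest non-separated case (an example of ours): on the affine line with doubled origin, the torus-invariant prime divisors are the two origins D_1 and D_2, each already Cartier, and the corresponding sections s_1, s_2 satisfy
\[
X_{s_1} = X \setminus D_1 \cong \mathbb{A}^1, \qquad X_{s_2} = X \setminus D_2 \cong \mathbb{A}^1,
\]
so {O_X(D_1), O_X(D_2)} is an ample system: the doubled line is divisorial even though it is not separated, matching its realization as a multihomogeneous spectrum in Example 4.6 (4).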
6. Limits of tropicalizations

In this section, we define a notion of tropicalization for a locally closed subscheme of a toric prevariety, and study the topology of a limit of such tropicalizations. The section concludes with a proof of Theorem A from the introduction. For the duration of this section fix K to be a non-Archimedean valued field and write R for its valuation ring.

Definition 6.1. Let Y be a divisorial prevariety. We define the tropicalization Trop(Y, i) (resp. the non-negative tropicalization Trop^{≥0}(Y, i)) of Y with respect to a locally closed embedding i : Y → X into a toric prevariety X as the image of i^an(Y) (resp. i^•(Y)) under the tropicalization map X^an → X^{trop} (resp. the non-negative tropicalization map X^• → X^{trop,≥0}).

Usually there are many ways to embed a scheme Y into a toric prevariety, hence the associated tropicalization will naturally depend on the choice of the embedding i (in particular on the target space X). To get a more intrinsic object, one can instead consider inverse systems I of embeddings of Y into toric prevarieties and form their associated inverse limits

  lim_{← , i∈I} Trop(Y, i).

Let us elaborate on this: Suppose we are given two embeddings i' : Y ↪ X' and i : Y ↪ X into toric prevarieties. A toric morphism of locally closed embeddings (i' : Y ↪ X') → (i : Y ↪ X) is a toric morphism f : X' → X with f ∘ i' = i. Write trop_i for the composition trop_X ∘ i^an (as well as trop_i^{≥0} for trop_X^{≥0} ∘ i^•). Then, given a toric morphism between i' and i, by Proposition 3.1 (ii) and Proposition 3.2 (ii), we naturally have commutative diagrams relating trop_{i'} and trop_i when working over K or over R, respectively. Thus, for a system I of toric embeddings (consisting of toric morphisms), the tropicalization maps Y^an → Trop(Y, i) induce continuous maps

  π_I : Y^an → lim_{← , i∈I} Trop(Y, i)  and  π_I : Y^• → lim_{← , i∈I} Trop^{≥0}(Y, i),

respectively, and we are interested in criteria under which these maps become homeomorphisms. This is a natural extension to the non-separated case of the work done in [FGP14], where the authors study this question for toric embeddings, i.e., closed embeddings into (separated) toric varieties, and obtain sufficient conditions in terms of simple properties of the system of embeddings. We will show that, with some modifications, these conditions also work in our setup. Furthermore, we state the corresponding results for the Raynaud generic fiber and limits of non-negative tropicalizations, and finally apply both results to the case of divisorial schemes.

Proposition 6.2. Let Y be a scheme of finite type over a complete non-Archimedean field K or over its valuation ring R and let I be a system of locally closed embeddings i : Y ↪ X of Y into toric prevarieties that is closed under finite products and fulfills the property:

(+) For every point y ∈ Y, there is an embedding i ∈ I and a torus-invariant open affine subset U of X containing i(y) such that i|_{i^{-1}(U)} : i^{-1}(U) ↪ U is a closed embedding.

Then the map π_I is surjective.

Proof. Let x = (x_i)_{i∈I} be a point in the limit. We need to show that π_I^{-1}(x) is non-empty. By (+), there is an embedding i : Y → X and a torus-invariant open affine subset U of X such that x_i ∈ U^{trop} and i|_{i^{-1}(U)} : i^{-1}(U) → U is a closed embedding, so that it is in particular proper. Then trop_i^{-1}(x_i) ⊆ (i^an)^{-1}(U^an), and the composition trop_U ∘ i|_{i^{-1}(U)}^an is also proper (by [Ber90]). It follows that trop_i^{-1}(x_i) is a non-empty compact subset of Y^an. Now, given a finite number of embeddings i_1, ..., i_s ∈ I, we have by hypothesis that i × i_1 × ... × i_s ∈ I and

  trop_{i×i_1×...×i_s}^{-1}(x_{i×i_1×...×i_s}) ⊆ trop_i^{-1}(x_i) ∩ trop_{i_1}^{-1}(x_{i_1}) ∩ ... ∩ trop_{i_s}^{-1}(x_{i_s}),

so that this finite intersection is non-empty. Since trop_i^{-1}(x_i) is compact, it follows that its intersection with all the other trop_j^{-1}(x_j), which is π_I^{-1}(x), is also non-empty. This shows that π_I is surjective. The same proof works for Y over the valuation ring R and the Raynaud generic fiber Y^• mapping to the non-negative tropicalizations.
Proposition 6.3. Let Y be a scheme of finite type over a complete non-Archimedean field K or over its valuation ring R and let I be a system of locally closed embeddings of Y into toric prevarieties that satisfies:

(∗) There is a finite open affine cover Y = U_1 ∪ ... ∪ U_r such that for every index k and every regular function f ∈ O_Y(U_k) there is an embedding i : Y ↪ X in I for which U_k is the preimage of a torus-invariant open subset of X and f is the pullback of a monomial.

Then the map π_I is injective.

Proof. Let η and η' be distinct points in the analytification. We have to show that there exists a locally closed embedding i ∈ I such that trop(η, i) ≠ trop(η', i). First, consider the open cover Y = U_1 ∪ ... ∪ U_r of Y given by condition (∗). If η ∈ U_k^an and η' ∉ U_k^an, we choose a locally closed embedding i ∈ I with the property that U_k is the preimage of an open invariant subset U = U_{σ_1} ∪ ... ∪ U_{σ_s} of X = X(S), where the σ_j are taken as cones of the maximal fans ∆_{ii} which do not belong to the mixed fans ∆_{ij} in the system of fans S. Then the point trop(i^an(η)) belongs to U^{trop} and trop(i^an(η')) does not. Otherwise, if both i^an(η) and i^an(η') belong to a common U_{[σ,i]}^an, pick, using (∗), a regular function f with val_η(f) ≠ val_{η'}(f) that is the pullback of a monomial a^{-1}χ^u for some a ≠ 0; then the images of η and η' under the tropicalization map trop_i are monoid homomorphisms S_σ → ℝ̄ which map u to val_η(f) + val(a) and val_{η'}(f) + val(a), respectively, and hence are distinct elements of Trop(Y, i). We conclude that the map π_I is injective. The proof for the Raynaud generic fiber and the non-negative tropicalizations goes through in the same way.

Theorem 6.4. Let Y be a scheme of finite type over a complete non-Archimedean field K or over its valuation ring R and let I be a system of locally closed embeddings of Y into toric prevarieties that is closed under finite products and satisfies (+) and (∗). Then π_I is a homeomorphism.

Proof. By Propositions 6.2 and 6.3, the continuous map π_I is bijective, so it remains to show that it is a homeomorphism. Let Y = V_1 ∪ ... ∪ V_r be an open affine cover of Y satisfying (∗). Let f ∈ O_Y(V_k) be a regular function and i ∈ I a locally closed embedding such that V_k is the preimage of an invariant open subset U = U_{σ_1} ∪ ... ∪ U_{σ_s} of X and f is the pullback of a monomial aχ^s. Denote by V the preimage of U_{σ_1}^{trop} ∪ ... ∪ U_{σ_s}^{trop} ⊆ X^{trop} under the composition

  lim_{← , i∈I} Trop(Y, i) → Trop(Y, i) ⊆ X^{trop},

so that V is exactly the image of V_k^an under the map π_I : Y^an → lim_{← , i∈I} Trop(Y, i) (in particular, V is independent of the particular choice of the embedding i). We need to show that V_k^an is homeomorphic to V. The topology on V_k^an is the coarsest that makes the evaluation maps

  ev_f : V_k^an → ℝ_{≥0},  x ↦ |f|_x,

for all f ∈ O_Y(V_k) continuous. Consider a locally closed embedding i ∈ I such that f is the pullback of a monomial aχ^s on a torus-invariant open subset U = U_{σ_1} ∪ ... ∪ U_{σ_s}. Then the corresponding evaluation map on U^{trop} is continuous and so is the composition V → U^{trop} → ℝ̄. This composition is the pullback of ev_f along (π_I|_{V_k^an})^{-1}, and so this implies our claim. For the Raynaud generic fiber and its non-negative tropicalizations, the argument goes through in the same way.

Proof of Theorem A. Let Y be a divisorial scheme of finite type over a ring R. It is clear that embeddings into simplicial toric prevarieties are closed under taking products and, by Theorem 5.10 and Proposition 5.11 above, they also fulfill property (+). We will show that condition (∗) is fulfilled. Then Theorem A follows from Theorem 6.4 (taking R to be a non-Archimedean field for Part (i) or a valuation ring for Part (ii)).

Let i : Y ↪ X be an embedding into a simplicial toric prevariety (coming from Theorem 5.10); in particular, we can write X as the multihomogeneous spectrum Proj_D(S) of a polynomial algebra S = R[T_1, ..., T_n] graded by D = Z^r via a surjective group homomorphism Z^n → Z^r. Let X = U_1 ∪ ... ∪ U_m be the unique cover of X by maximal open affine subsets and set V_j := i^{-1}(U_j). It follows from the construction (see the proof of Proposition 5.8 (2)) that each V_j is affine as well. Let f ∈ O_Y(V_{j_0}) for a fixed j_0. We need to show that there is an embedding i' : Y ↪ X' into a simplicial toric prevariety X' so that V_{j_0} is the preimage of a torus-invariant open subset of X' and f is the pullback of a monomial. Since V_{j_0} ↪ U_{j_0} is a closed embedding of affine schemes, we can find an element g ∈ O_X(U_{j_0}) whose pullback is f. Let g̃ ∈ S be the homogeneous element given by g̃ = hg for a suitable monomial h. Consider the surjective homomorphism φ : S' := S[x] → S given by x ↦ g̃. We may endow S[x] with a D-grading such that deg(x) = deg(g̃), since g̃ is homogeneous. Set X' := Proj_D(S'). The induced map
2021-07-08T01:16:27.807Z
2021-07-07T00:00:00.000
{ "year": 2021, "sha1": "4ed08b4b79b848b91614b3d6903e31a2a3e4ced5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4ed08b4b79b848b91614b3d6903e31a2a3e4ced5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
59576243
pes2o/s2orc
v3-fos-license
The Growth-promoting Effect of Tetrabasic Zinc Chloride is Associated with Elevated Concentration of Growth Hormone and Ghrelin

An experiment was conducted to investigate the mechanism for the effect of tetrabasic zinc chloride (TBZC) in enhancing the growth performance of weanling piglets. Gut-brain peptides play an important role in the regulation of growth and appetite in animals. This study evaluated the effects of TBZC on blood concentrations of growth hormone (GH), ghrelin, insulin-like growth factor-I (IGF-I), cholecystokinin (CCK) and neuropeptide Y (NPY). Seventy-two weanling piglets (Landrace×Large White) with an initial body weight (BW) of 6.7±0.16 kg and aged 24±1 days were assigned to three dietary treatments: i) a control diet without TBZC supplementation, ii) the control diet supplemented with 2,000 mg Zn from TBZC/kg, and iii) the TBZC-supplemented diet pair-fed with respect to the control diet. Each treatment had six replications (pens) of four piglets. At the end of a 14-d experimental period, piglets were weighed, feed consumption was measured, and blood samples were collected for assays of GH, ghrelin, IGF-I, CCK and NPY concentrations. The inclusion of TBZC in the diet increased average daily gain (p<0.01), average daily feed intake (p<0.05), and feed conversion ratio (p<0.05). Pair-fed piglets had higher ADG and lower FCR (p<0.05) than Control piglets. Supplementation of the diet with TBZC increased (p<0.05) serum GH and plasma ghrelin levels in weanling piglets, but did not affect (p>0.05) serum IGF-I or plasma NPY and CCK concentrations. Pair-fed piglets had lower (p<0.05) serum GH levels than TBZC-supplemented piglets, but did not differ (p>0.05) from Control piglets. These data indicated that TBZC elevated the concentrations of ghrelin and GH. This observation may partly explain the beneficial effects of TBZC on the growth performance of weanling piglets.

INTRODUCTION

Feeding a high level of zinc (Zn) to weanling piglets improves growth performance (Hahn and Baker, 1993; Poulsen, 1995; Case and Carlson, 1999). Our previous experiment indicated that feeding a pharmacological level of tetrabasic zinc chloride (TBZC) improved the growth performance of weanling piglets (Zhang and Guo, 2007). However, the mechanism of this action remains unclear.

The growth hormone (GH)/insulin-like growth factor-I (IGF-I) axis plays a major role in the regulation of body growth in vertebrates (Rosenfeld and Roberts, 1999). The main hormones implicated in the GH/IGF axis are pituitary GH and liver-derived endocrine IGF-I. IGF-I, which mediates the growth-promoting effects of GH, is produced by hepatocytes in response to the binding of GH to the GH receptor (GH-R) (Yakar et al., 1999).

Circulating ghrelin, the endogenous ligand of the GH secretagogue receptor (GHS-R), is synthesized primarily in the stomach in mammals (Bhatti et al., 2006). Ghrelin acts on the GHS-R, increasing intracellular Ca2+ levels via inositol 1,4,5-trisphosphate (IP3) to stimulate GH release (Kojima and Kangawa, 2005). The addition of dietary Zn to the weanling piglet diet enhanced serum IGF-I levels (Carlson et al., 2004). Researchers have suggested that Zn enhances the growth performance of weanling piglets through a direct influence on the gastrointestinal tract (Li et al., 2006).
Cholecystokinin (CCK) and neuropeptide Y (NPY) have been implicated in the control of feed intake (FI) in a number of species. Gastrointestinal (GI) peptide hormones, most notably CCK, are important factors that control appetite and satiety (Huda et al., 2006). CCK, released from the proximal small intestine, functions as a short-term satiation signal by inducing satiety and decreasing meal size (Havel, 2001). Specific areas in the hypothalamus and the brainstem are important in coordinating GI peptide hormone signals. The arcuate nucleus (ARC) in the hypothalamus contains neurones expressing NPY (Huda et al., 2006). In mammals, NPY is one of the most powerful orexigenic agents and stimulates feeding behavior (Stanley and Leibowitz, 1985). Feeding a pharmacological level of TBZC increased the FI of weanling piglets (Zhang and Guo, 2007).

Based on this information, we hypothesized that TBZC stimulates the secretion of gastric ghrelin, which signals its effects to the hypothalamus and stimulates GH release, with IGF-I eventually mediating the growth-promoting effect. Simultaneously, we investigated circulating CCK and NPY levels to address whether the increase of FI by the addition of TBZC was related to alterations in CCK or NPY.

Pigs and diets

Seventy-two weanling piglets (Landrace×Large White) with an initial body weight (BW) of 6.7±0.16 kg and 24±1 days of age were allotted to pens on the basis of similar BW, ancestry, and gender. Each treatment had six replications (pens) of four piglets (half castrated males and half females). Piglets were housed on plastic slotted floors (1.3 m×1.2 m per pen) with self-feeders and automatic stainless-steel nipple waterers. Feed and water were available ad libitum. The temperature was maintained at 25-28°C with a cycle of 16 h light : 8 h dark. The basal diet (Table 1), containing approximately 129.7 mg Zn/kg, was formulated to meet or exceed the nutrient requirements recommended by NRC (1998). The three experimental diets were: 1) a control diet (Control) without supplemental TBZC, 2) the control diet supplemented with 2,000 mg Zn from TBZC/kg (TZn), and 3) the TBZC-supplemented diet pair-fed with respect to the control diet (Pair-fed). To separate the effects of FI from the effects of dietary TBZC, we used TBZC-supplemented piglets that were pair-fed to the control group of piglets, as described by Swamy et al. (2004). Briefly, the amount of feed consumed by the piglets fed the control diet was recorded daily. The pair-fed piglets received the same amount of feed that was consumed by the control group the previous day. Pair-fed piglets received the diet twice daily. The TBZC [Zn5(OH)8Cl2·H2O] used, containing 58% Zn, was provided by Xingjia Bio-engineering Co. Ltd., China, and replaced wheat bran in the diet. The experiment lasted 14 days.
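The pair-feeding rule lends itself to a simple daily bookkeeping routine. The following minimal Python sketch of ours (not from the study; the data structures are hypothetical) illustrates the scheme described above: the amount offered to a Pair-fed pen on day d equals the amount its matched Control pen consumed on day d-1, split into two meals.

def pair_fed_allowances(control_intake_kg):
    # control_intake_kg[d] = feed consumed (kg) by the Control pen on day d+1.
    # The matched Pair-fed pen is offered that amount on the following day,
    # divided into two equal meals (fed twice daily).
    return [
        {"day": d + 2, "per_meal_kg": intake / 2.0}
        for d, intake in enumerate(control_intake_kg[:-1])
    ]

# Example: daily Control intakes over the first four days (hypothetical values)
print(pair_fed_allowances([1.8, 2.0, 2.1, 2.3]))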
Sample collection

Faecal scores were evaluated daily and expressed as a percentage over the 2-week period. At the end of the experiment, piglets were weighed after overnight fasting, feed consumption was measured, and the average daily gain (ADG), average daily FI (ADFI) and feed conversion ratio (FCR) were calculated. After overnight fasting, one barrow from each pen was selected randomly for the collection of blood samples via the anterior vena cava. One blood sample was collected into EDTA-coated vacutainer tubes containing aprotinin and was immediately cooled on ice. Plasma was obtained by centrifugation at 2,500×g for 10 min at 4°C. A second portion of blood was collected into vacutainer tubes containing aprotinin and allowed to coagulate for 30 minutes at 4°C. Subsequently, serum was separated by centrifugation at 2,500×g for 10 min at 4°C. The samples were quickly frozen in liquid nitrogen and later used to determine GH, ghrelin, IGF-I, NPY and CCK concentrations.

Hormone assays

After extraction on reverse-phase C18 columns, plasma ghrelin was measured with a commercial radioimmunoassay (RIA) kit (Phoenix Pharmaceutical, Inc., Belmont, CA, USA). The RIA kit uses a polyclonal antibody that recognizes octanoylated and non-octanoylated ghrelin, with 125I-ghrelin as the tracer molecule; thus, this RIA kit detects total ghrelin. The inter- and intra-assay coefficients of variation were 7.6% and 5.0%, respectively. Assay sensitivity was 0.7 pg/ml. Serum IGF-I concentrations were measured by a commercially available human RIA kit (Diagnostic System Laboratories, Texas, USA). For separating IGF-I from its binding protein, an acid-ethanol precipitation technique was used (Daughaday et al., 1980). Samples were added directly to tubes containing high-affinity free IGF-I antibody and 125I-labeled antibody directed to a second epitope, incubated for 20 h at 4°C, washed, and counted. Assay standards were recombinant human IGF-I, 10 to 1,000 ng/ml. The sensitivity of the assay was 1.3 ng/ml. The intra- and inter-assay coefficients of variation were 3.5% and 7.9%, respectively. Serum GH was measured by RIA (Amersham Pharmacia Biotech, Little Chalfont, UK). The sensitivity of the GH assay was 0.35 ng/ml. The intra- and inter-assay coefficients of variation were 6.8% and 3.9%, respectively. For the determination of NPY levels in plasma, samples were extracted with HCl-ethanol (15:1,000, v/v) in a 1:2 ratio (plasma:HCl-ethanol) according to the method of Bauer-Dantoin et al. (1991).

Statistical analysis

Data were subjected to Levene's test for homogeneity of variances before the analysis. All p-values for the homogeneity of variances test were higher than 0.05. Data were analyzed by ANOVA using the General Linear Model (GLM) procedures of SAS (SAS Inst., Inc., Cary, NC, 1999). The pen was considered the experimental unit. The significance of mean differences between treatments was detected by Duncan's multiple-range test. Differences were considered significant at p<0.05. Scouring data were analyzed after arcsine transformation. Actual scouring data are listed in the table, but the SEM refers to the transformed data.
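For readers who want to reproduce this style of analysis outside of SAS, the following minimal Python sketch of ours mirrors the steps described above (Levene's test, one-way ANOVA on pen means, and the arcsine square-root transformation of the scouring percentages). The pen-level values are hypothetical placeholders, and Duncan's multiple-range test, which is not available in SciPy, would follow the ANOVA.

import numpy as np
from scipy import stats

# Hypothetical pen-level ADG means (kg/d), six pens per treatment
control = np.array([0.21, 0.23, 0.20, 0.22, 0.24, 0.21])
tzn     = np.array([0.29, 0.31, 0.30, 0.28, 0.32, 0.30])
pairfed = np.array([0.25, 0.27, 0.26, 0.24, 0.28, 0.26])

# Levene's test for homogeneity of variances (proceed if p > 0.05)
lev_stat, lev_p = stats.levene(control, tzn, pairfed)

# One-way ANOVA with the pen as the experimental unit
f_stat, anova_p = stats.f_oneway(control, tzn, pairfed)

# Arcsine square-root transform for scouring percentages (proportions)
scour_pct = np.array([12.5, 3.6, 5.4])  # hypothetical faecal scores (%)
scour_transformed = np.arcsin(np.sqrt(scour_pct / 100.0))

print(f"Levene p = {lev_p:.3f}, ANOVA p = {anova_p:.4f}")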
RESULTS

Growth performance and faecal scores

Inclusion of TBZC at 2,000 mg Zn/kg diet increased ADG (p<0.01) as well as ADFI and FCR (p<0.05) of weanling piglets compared with Control (Table 2). In a previous experiment, Zhang and Guo (2007) indicated that dietary supplementation with TBZC increased ADFI in weanling piglets; therefore, we included a pair-fed treatment to exclude the influence of TBZC supplementation on FI. As shown in Table 2, Pair-fed piglets had higher ADG and lower FCR (p<0.05) than Control piglets. However, their ADG was lower (p<0.05) than that of TZn piglets. Supplementation of Zn from TBZC reduced the faecal scores: TZn and Pair-fed piglets had lower (p<0.001) faecal scores than Control piglets.

GH, IGF-I, and ghrelin concentrations

The effect of treatment on GH, ghrelin, and IGF-I is shown in Table 3. Supplementing TBZC to the diet (TZn) increased (p<0.05) serum GH and plasma ghrelin levels in weanling piglets, but did not affect (p>0.05) serum IGF-I concentration. Pair-fed piglets had lower (p<0.05) serum GH levels than TZn piglets, but did not differ (p>0.05) from Control piglets. Pair-fed piglets had the same plasma ghrelin levels as TZn piglets, but higher (p<0.05) levels than Control piglets.

NPY and CCK concentrations

Supplementation of TBZC into the diet did not affect (p>0.05) plasma NPY and CCK concentrations (Table 3).

DISCUSSION

Adding TBZC to the diet enhanced the growth performance of weanling piglets during weeks 1 and 2. This confirms previous reports that supplementation of Zn at 1,500 to 3,000 mg/kg diet from TBZC increased the ADG of weanling piglets (Mavromichalis et al., 2001; Zhang and Guo, 2007).

(Table 2 footnotes: ** Faecal scores (%) were calculated as the percentage of days on which signs of scours were evident within the pen out of the total number of days (56 d). a, b, c Means on the same row lacking a common superscript letter are different (p<0.05). TBZC = Tetrabasic zinc chloride; ADG = Average daily gain; ADFI = Average daily feed intake; FCR = Feed conversion ratio; Control = Control diet; TZn = Supplemented with 2,000 mg Zn from TBZC/kg; Pair-fed = Pair-fed with respect to the control diet.)

The pair-fed group indicated that the enhanced growth response was not solely due to increased voluntary FI, but also to improved feed efficiency. The current experiment suggested that supplementation of TBZC at the level of 2,000 mg/kg to piglet diets resulted in lower faecal scores. This result is consistent with our previous reports that high levels of TBZC reduced the incidence and severity of diarrhea and improved faecal consistency after weaning (Zhang and Guo, 2007).

However, the underlying mechanism by which a high level of Zn enhances the growth performance of weanling piglets is still a matter of controversy. Hahn and Baker (1993) indicated that high levels of ZnO enhanced the growth performance of weanling piglets through a systemic effect, and that the Zn ion is a major causative factor. Surprisingly, Mavromichalis et al.
(2001) found that high (93%) or low (39%) bioavailability of ZnO did not substantially influence its growth-promoting efficacy. In addition, the injection of Zn around weaning did not have a positive effect on the growth performance of weanling piglets, although serum Zn concentrations tended to increase (Schell and Kornegay, 1994). Thus, this speculation about the mechanism responsible for the growth-promoting effects of high levels of Zn on weanling piglets was not reasonable. The GI tract (GIT) plays an important role as the site where nutrients and physiological signals are exchanged between the inside and the outside of the body, which influences digestion, absorption and metabolism, and thereby determines body growth, development and health. TBZC enters the GIT and may affect the secretion of GI hormones. This leads to the hypothesis that the addition of TBZC to weanling piglet diets results in higher gastric ghrelin, which in turn results in increased GH and, eventually, improved growth performance.

Ghrelin is synthesized and released primarily by endocrine X/A cells in the stomach (Kojima et al., 1999; Bhatti et al., 2006) and plays an important role in the control of GH secretion. Kojima et al. (1999) found that ghrelin specifically stimulated GH release but did not affect other pituitary hormones. Consistent with this notion, we found that the concentrations of circulating GH and ghrelin were increased by the addition of TBZC to the weanling piglet diet. The serum GH levels of Pair-fed piglets did not differ from those of Control piglets, although a trend toward elevation was observed. Therefore, this may indicate that the induction of pituitary GH secretion was due to the effect of TBZC on gastric ghrelin. However, many factors affect blood ghrelin concentration, and only total ghrelin was measured in the current experiment. Thus, the hypothesis that TBZC stimulated the secretion of gastric ghrelin, which signaled its effects to the hypothalamus and stimulated GH release, was not fully established. Further studies are required to determine active (acylated) ghrelin and to elucidate the molecular mechanism whereby TBZC regulates the secretion of gastric ghrelin.

The current experiment suggested that serum IGF-I concentrations were not affected by the addition of TBZC to the diet, which agrees with Li et al. (2006), who reported that inclusion of 3,000 mg Zn/kg from ZnO in the diet did not affect serum IGF-I concentrations in weanling piglets. In contrast, Carlson et al. (2004) reported that additional dietary ZnO in weanling piglet diets increased serum IGF-I. This discrepancy may be due to the different source of Zn, the age of the piglets, and the methodology of blood sampling. Sjögren et al. (1999) indicated that liver-derived IGF-I is not required for postnatal body growth. Several studies have shown that GH has IGF-I-independent activities (Clark et al., 1994; Guler et al., 1988). The current experiment may suggest that the growth-promoting effect of TBZC was the result of a direct action of GH and that it was not mediated by IGF-I. Previous reports indicated that liver-derived IGF-I exerts negative feedback regulation of GH secretion by suppressing GH-releasing hormone receptor expression in the hypothalamus (Wallenius et al., 2001). The lack of effect of TBZC supplementation on plasma IGF-I may be beneficial for the independent activities of GH.
Peripheral signals regulating appetite originate primarily from the GI tract. Ghrelin and CCK are the two most important peripheral signals. These peripheral signals modulate the release of appetite-related neuropeptides in the brain and, consequently, feed intake. The effect of TBZC on circulating CCK and NPY concentrations was tested here for the first time. There was no effect on circulating CCK and NPY levels in the current experiment, suggesting that the increase of feed intake by TBZC was not related to the CCK and NPY pathways. Ghrelin contributes to the acute regulation of appetite and satiety and has been shown to induce hunger and increase FI (Huda et al., 2006). Ghrelin exerts its feeding activity by stimulating NPY/AgRP neurons in the hypothalamus to promote the production and secretion of NPY and AgRP peptides (Kojima and Kangawa, 2005). In the current experiment, only the blood NPY level was determined, whereas changes in the concentration of NPY in specific brain areas in response to TBZC were not studied. However, Tschöp et al. (2004) and Wang et al. (2008) indicated that ghrelin seems to stimulate energy metabolism and increase BW more rapidly and more potently than it affects feed intake. Wu et al. (2008) indicated that ghrelin infusion did not significantly influence feed intake. Further studies are required to verify whether the increase of FI by TBZC was mediated by ghrelin and to elucidate its mechanism.

In conclusion, dietary supplementation with a high level of TBZC increased blood ghrelin and GH concentrations. This novel finding may partly explain the beneficial effects of TBZC on the growth performance of weanling piglets.

(Table 3 footnotes: a, b Means on the same row lacking a common superscript letter are different (p<0.05). GH = Growth hormone; IGF-I = Insulin-like growth factor-I; CCK = Cholecystokinin; NPY = Neuropeptide Y; TBZC = Tetrabasic zinc chloride; Control = Control diet; TZn = Supplemented with 2,000 mg Zn from TBZC/kg; Pair-fed = Pair-fed with respect to the control diet.)

Table 2. Growth performance and faecal scores of weanling piglets fed diets with or without supplemental TBZC.* * Each value represents the mean of six pens of four piglets.

Table 3. GH, ghrelin, IGF-I, CCK and NPY levels in blood of weanling piglets fed diets with or without supplemental TBZC.* * Each value represents the mean of six pens of one piglet.
2018-12-29T11:11:56.644Z
2008-09-04T00:00:00.000
{ "year": 2008, "sha1": "1b01d9ccda24ea06dab1e6861c0634ba12fa6b3c", "oa_license": "CCBY", "oa_url": "https://www.animbiosci.org/upload/pdf/21-205.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1b01d9ccda24ea06dab1e6861c0634ba12fa6b3c", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
266409448
pes2o/s2orc
v3-fos-license
Prediction of Chemotherapy Efficacy in Patients with Colorectal Cancer Ovarian Metastases: A Preliminary Study Using Contrast-Enhanced Computed-Tomography-Based Radiomics

Ovarian metastasis (OM) from colorectal cancer (CRC) is infrequent and has a poor prognosis. The purpose of this study is to investigate the value of a contrast-enhanced CT-based radiomics model in predicting the outcomes of ovarian metastasis from colorectal cancer after systemic chemotherapy. A total of 52 ovarian metastatic CRC patients who received first-line systemic chemotherapy were retrospectively included in this study and were categorized into chemo-benefit (C+) and no-chemo-benefit (C−) groups, using the Response Evaluation Criteria in Solid Tumors (RECIST v1.1) as the standard. A total of 1743 radiomics features were extracted from baseline CT; three methods were adopted during feature selection, and five prediction models were constructed. Receiver operating characteristic (ROC) analysis, calibration analysis, and decision curve analysis (DCA) were used to evaluate the diagnostic performance and clinical utility of each model. Among the machine-learning-based radiomics models, the SVM model showed the best performance on the validation dataset, with AUC, accuracy, sensitivity, and specificity of 0.903 (95% CI, 0.788-0.967), 88.5%, 95.7%, and 82.8%, respectively. All radiomics models exhibited good calibration, and the DCA demonstrated that the SVM model had a higher net benefit than the other models across the majority of the range of threshold probabilities. Our findings showed that contrast-enhanced CT-based radiomics models have high discriminating power in predicting the outcome of colorectal cancer ovarian metastasis patients receiving chemotherapy.

Introduction

Colorectal cancer (CRC) is the third leading cause of cancer-related morbidity and mortality worldwide [1]. In women, ovarian metastasis (OM) is associated with high mortality rates, and the median overall survival of patients with OM is 23 months. Furthermore, the 5- and 10-year survival rates are 17% and 8%, respectively [2]. Ovarian metastasis commonly presents as a cystic solid mass, similar to primary ovarian cancer (POC). Hence, ovarian metastasis is occasionally misdiagnosed as POC if the colorectal lesion is subtle. Previous studies have assessed classification methods using computed tomography (CT) or magnetic resonance imaging to help distinguish OM [3,4]. Nevertheless, further clinical validation is required. The survival outcomes of patients improve with systemic chemotherapy followed by oophorectomy, which is the preferred option for ovarian metastasis from colorectal cancer [5]. However, the ovary is less sensitive to systemic chemotherapy than primary lesions and lesions at other metastatic sites (e.g., the liver and lung). Previous studies have shown that <20% of patients respond well to chemotherapy [6,7]. Hence, it is valuable to anticipate the effectiveness of chemotherapy before treatment, to help clinicians develop individualized treatment plans and to prevent both the unnecessary toxicity caused by chemotherapy and the loss of the possible benefits of radical surgery due to disease progression.
Radiomics has promising potential for converting medical images into mineable data and extracting noninvasive quantitative characteristics. Significant advancements have been made in lung, breast, and gastric cancers, demonstrating the value of radiomics in oncological applications [8-10]. However, due to the low incidence and poor prognosis of ovarian metastasis, no studies have been conducted to evaluate and predict its chemotherapeutic response. The primary objective of the present study was to develop a radiomics model utilizing pretreatment CT image features to predict the efficacy of chemotherapy in cases of ovarian metastatic colorectal cancer. Internal validation of the model was then performed, with the aim of identifying a noninvasive and stable predictive imaging method.

Patients and Study Design

Ethical approval was obtained for this retrospective analysis (2020/1296), and the informed consent requirement was waived. We retrospectively included 68 patients with histologically confirmed ovarian metastases from colorectal cancer from July 2010 to December 2022 at our institution. Inclusion criteria were patients (1) pathologically diagnosed with ovarian metastasis; (2) having received at least two cycles of systemic chemotherapy; (3) with measurable lesions in the ovaries based on the Response Evaluation Criteria in Solid Tumors 1.1; and (4) having complete and available clinical, imaging, and pathological data. The exclusion criteria were as follows: patients with a duration of >1 month from baseline CT scan to chemotherapy, those with incomplete clinical or imaging information, and those with poor-quality CT scan images. In total, 16 patients were excluded due to incomplete clinical data or poor imaging quality, and 52 patients were finally included (Figure 1). Two cycles of chemotherapy were administered to the patients in this cohort, and the time interval between baseline and follow-up CT scans was two months. As ovarian metastasis responds poorly to chemotherapy and progresses rapidly, only a few patients achieved partial remission (PR). In this study, stable disease (SD) was defined as disease control, and patients who achieved SD were classified under the chemotherapy-benefit group. Two experienced radiologists evaluated the patients' baseline and post-chemotherapy CT scan images using the Response Evaluation Criteria in Solid Tumors (RECIST v1.1). The patients were then divided into the chemo-benefit group (C+, SD + PR) and the no-chemo-benefit group (C−, PD). Disagreements were resolved via a consensus decision or consultation with a third reviewer.
Clinical data including age, menopausal status, initial tumor location and histological type, gene mutation status, dimensions and laterality of ovarian metastases, ascites, and other metastatic sites were collected. In this study, metachronous OM was defined as a time interval of >6 months from primary tumor diagnosis to the discovery of an ovarian metastasis. The initial tumor originated in the right colon (cecum to hepatic flexure), left colon (splenic flexure to sigmoid colon), or rectum.

Image Processing and Radiomics Features Extraction

Patients underwent baseline CT scans within 1-3 weeks prior to chemotherapy. During the scan, patients were requested to suspend their respiration to prevent breathing artifacts. Whole-abdomen CT was performed on patients positioned supine with a slice thickness range of 2-5 mm (Revolution, GE Healthcare, Milwaukee, WI, USA; Brilliance 64, Philips, Amsterdam, The Netherlands; SOMATOM, Siemens, Erlangen, Germany). Acquisition and reconstruction parameters were: tube current 150-200 mA, tube voltage 120 kV, pitch 0.8, and matrix size 512 × 512. Section thickness was set at 5 mm and reconstruction section thickness at 1.5 mm. Two experienced diagnostic radiologists performed the segmentation of tumor lesions using ITK-SNAP (version 3.6) software. As the portal-phase lesion was well differentiated from the adjacent tissue, the largest level of the lesion was selected for segmentation. Segmentation was then performed along the border of the tumor, avoiding areas such as blood vessels and calcifications. If the two physicians disagreed on a segmentation, they discussed it and reached an agreement.

The CT images were initially resampled to a target voxel size of 1 mm × 1 mm × 1 mm. Subsequently, radiomics features were extracted automatically from the manually labeled regions of interest (ROIs) using PyRadiomics (version 3.0.1), according to the latest recommendations of the Image Biomarker Standardization Initiative [11]. In total, 1,743 radiomics features, including 14 shape features, 342 first-order statistics features, and 1,387 texture features, were extracted from each CT scan with the bin size fixed to 32 [12]. An intraclass correlation coefficient (ICC) analysis was conducted to assess the interobserver reliability of the extracted radiomics features. Radiomics features exhibiting an ICC ≥ 0.75 were considered stable and were included in further analysis.
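For readers who want to reproduce a comparable pipeline, the extraction settings described above map naturally onto the PyRadiomics API. The following is a minimal sketch under our own assumptions (placeholder file paths, all feature classes and image types enabled); it is not the authors' released code:

```python
from radiomics import featureextractor  # PyRadiomics

# Settings mirroring the text: isotropic 1 mm resampling and a fixed
# discretization of 32 bins; shape, first-order, and texture classes.
settings = {
    "resampledPixelSpacing": [1.0, 1.0, 1.0],
    "binCount": 32,
    "interpolator": "sitkBSpline",
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.enableAllFeatures()    # shape + first-order + texture classes
extractor.enableAllImageTypes()  # original plus filtered images (e.g., wavelet)

# image.nrrd / mask.nrrd are placeholder paths to the CT volume and the
# radiologist-drawn ROI exported from ITK-SNAP.
features = extractor.execute("image.nrrd", "mask.nrrd")
numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
```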
A three-step strategy was used to further select the radiomics features, decreasing model complexity and preventing overfitting. Univariate analysis was performed first, and radiomics features with a significant difference between the C+ and C− groups (Mann-Whitney U test, p < 0.05) were kept. The redundant features were then removed via Pearson correlation coefficient analysis: if two radiomics features were highly correlated (|r| > 0.95), the feature with the greater p value in the Mann-Whitney U test was excluded. Finally, the most critical radiomics features were selected using the least absolute shrinkage and selection operator (LASSO). The penalty parameter was established via 10-fold cross-validation using the "minimum mean-square error" criterion.
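The three-step selection strategy can likewise be sketched in a few lines. The snippet below is an illustrative reimplementation, not the study's code; the variable names and the use of scikit-learn's LassoCV for the 10-fold "minimum mean-square error" lambda are assumptions consistent with the description above:

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV

def select_features(X: pd.DataFrame, y: np.ndarray) -> list[str]:
    # Step 1: keep features that differ between C+ (y == 1) and C- (y == 0).
    pvals = {c: mannwhitneyu(X.loc[y == 1, c], X.loc[y == 0, c]).pvalue
             for c in X.columns}
    kept = [c for c, p in pvals.items() if p < 0.05]

    # Step 2: drop the higher-p member of any pair with |Pearson r| > 0.95.
    corr = X[kept].corr().abs()
    drop = set()
    for i, a in enumerate(kept):
        for b in kept[i + 1:]:
            if corr.loc[a, b] > 0.95:
                drop.add(a if pvals[a] > pvals[b] else b)
    kept = [c for c in kept if c not in drop]

    # Step 3: LASSO with 10-fold CV; lambda at minimum mean CV error,
    # keeping only features with nonzero coefficients.
    lasso = LassoCV(cv=10, random_state=0).fit(X[kept].values, y)
    return [c for c, w in zip(kept, lasso.coef_) if w != 0]
```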
Model Construction and Evaluation

Prior to model development, the values of the identified radiomics features were normalized using z-score normalization. Radiomics models were constructed using five well-known machine learning classifiers that have been widely used in medical imaging analysis: logistic regression (LR), naïve Bayes (NBB), random forest (RF), linear discriminant analysis (LDA), and support vector machine (SVM). Due to the limited sample size in this study, the predictive models were constructed and validated via leave-one-out cross-validation. To mitigate the issue of class imbalance, a balanced weight strategy was implemented, adjusting the weights of the chemo-benefit and no-chemo-benefit classes in inverse proportion to their respective prevalences.

Model performance was evaluated via receiver operating characteristic (ROC) analysis with respect to the area under the ROC curve (AUC). Furthermore, in accordance with the maximal Youden index criterion, the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were computed at the optimal threshold. The construction and evaluation of the radiomics models were performed with the InferScholar platform (InferVision ver3.5).

Calibration and Decision Curve Analysis

The calibration of the predictive models was examined via 1,000 bootstrap resamples, and the consistency between the actual observed rate and the predicted probability was evaluated using the Hosmer-Lemeshow test [13]. To compare the clinical utility of the predictive models, decision curve analysis (DCA) was additionally conducted by calculating the net benefit over a range of threshold probabilities [14].

Statistical Analysis

Regarding the clinical characteristics, frequencies and percentages are used to express descriptive data, and mean ± standard deviation to represent parametric variables. Categorical variables were analyzed using Pearson's chi-square test or Fisher's exact test. Continuous variables were compared using the Mann-Whitney U test or Student's t-test. All statistical analyses were performed using the Statistical Package for the Social Sciences software (version 27.0). A p value of <0.05 was considered statistically significant.

Patients' Characteristics

In total, 52 patients were included in this study. Table 1 shows the clinicopathologic characteristics of all patients. Changes in the size of each patient's ovarian metastatic tumors were determined according to the RECIST criteria after chemotherapy. In total, 25 patients who achieved SD or PR were classified in the C+ group, while 27 patients with PD were included in the C− group. The median age at ovarian metastasis diagnosis was 46 (range: 25-77) years. In total, 55.8% (n = 29) of patients were premenopausal. The mean carcinoembryonic antigen (CEA) and cancer antigen 125 (CA-125) levels upon OM diagnosis were 116.1 (normal range: 0-5) ng/mL and 87.2 (normal range: 0-25) U/mL, respectively.

Selection of Radiomics Features

After ICC analysis, univariate analysis, and Pearson correlation analysis, 162 radiomics features were retained for LASSO regression analysis. Finally, five radiomics features with nonzero coefficients were selected for model construction under the optimal tuning parameter lambda (Figure 2). Figure 3 shows the heatmap of the selected radiomics features with standardized values.
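A hedged sketch of the modeling step described above follows. It assumes scikit-learn equivalents of the five classifiers, performs the z-scoring inside each leave-one-out fold via StandardScaler, and scores each model by AUC on the held-out predictions; GaussianNB has no class_weight option, so uniform priors stand in for the balanced-weight strategy here, and the InferScholar platform actually used by the authors may differ in detail:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Balanced class weights as in the text; uniform priors approximate this
# for the naive Bayes classifier, which lacks a class_weight argument.
models = {
    "LR": LogisticRegression(class_weight="balanced", max_iter=5000),
    "NBB": GaussianNB(priors=[0.5, 0.5]),
    "RF": RandomForestClassifier(class_weight="balanced", random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(class_weight="balanced", probability=True, random_state=0),
}

def loocv_auc(X: np.ndarray, y: np.ndarray) -> dict[str, float]:
    """Leave-one-out probabilities -> one AUC per classifier."""
    out = {}
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)  # z-score inside each fold
        prob = cross_val_predict(pipe, X, y, cv=LeaveOneOut(),
                                 method="predict_proba")[:, 1]
        out[name] = roc_auc_score(y, prob)
    return out
```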
Model Performance

Among the five machine-learning-based radiomics models, the SVM model achieved the best AUC of 0.903 (95% CI: 0.788-0.967) on the validation dataset. In the training dataset, no significant differences were found among the radiomics models (all p values > 0.05), whereas on the validation dataset the AUC of the SVM model was significantly higher than that of the NBB model (p = …) and the LDA model (p = 0.045). Among these machine-learning-classifier-based radiomics models, only the RF model had a tendency toward overfitting, as its AUC was considerably higher on the training dataset than on the validation dataset (p = 0.026). In addition, the accuracy, sensitivity, specificity, PPV, and NPV of these models at the optimal cutoff point in the validation dataset are shown in Table 2.

Clinical Utilities

On the validation dataset, the clinical utility of the five radiomics models was assessed through calibration analysis and DCA. All models had good calibration, as the nonsignificant Hosmer-Lemeshow statistics (p values) of the NBB model, LDA model, LR model, RF model, and SVM model were 0.472, 0.588, 0.098, 0.110, and 0.125 on the validation dataset, respectively (Figure 5). In addition, DCA showed that all radiomics models outperformed both the treat-all and treat-none approaches. The SVM model outperformed the alternative models in terms of net benefit for the majority of threshold probabilities (Figure 6).
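The decision curves referenced above reduce to the standard net-benefit formula of Vickers and Elkin [14]: at each threshold probability pt, net benefit = TP/n − FP/n × pt/(1 − pt). A minimal sketch of that computation, with the treat-all reference curve included (treat-none is identically zero):

```python
import numpy as np

def net_benefit(y_true: np.ndarray, prob: np.ndarray,
                thresholds: np.ndarray) -> np.ndarray:
    """Decision-curve net benefit: TP/n - FP/n * pt/(1 - pt).

    thresholds should exclude 1.0 to avoid division by zero.
    """
    n = len(y_true)
    nb = []
    for pt in thresholds:
        pred = prob >= pt
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        nb.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.asarray(nb)

def treat_all(y_true: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Reference curve for treating every patient."""
    prev = np.mean(y_true == 1)
    return prev - (1 - prev) * thresholds / (1.0 - thresholds)
```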
Discussion

CRC is one of the cancers with the highest incidence and mortality rates in the 21st century, with the second (9.4%) and third (9.5%) highest incidence and mortality rates among all malignancies in female patients [15]. Approximately 3-5% of female patients with CRC develop ovarian metastases, and this group of patients commonly has a poor prognosis, with a median survival of 13-36 months based on previous studies [5,16,17]. Some patients with ovarian metastases present with nonspecific clinical signs such as abdominal distension, abdominal pain, and anemia [18]. Ovarian metastasis is morphologically similar to POC, and 70% of patients with OM have high CA125 levels. Therefore, ovarian metastasis with an insidious primary tumor is occasionally misdiagnosed as POC [19,20]. There is a lack of large-sample prospective trials on the preferred treatment strategy due to the low incidence and poor prognosis of ovarian metastasis. In this context, this study was conducted to improve the management of colorectal cancer ovarian metastasis by evaluating OM-related imaging features with a combined analysis of clinicopathological features and treatment strategies.

The main treatment options for ovarian metastasis include oophorectomy, cytoreductive surgery, hyperthermic intraperitoneal chemotherapy, and systemic therapy. Although metastatic colorectal cancer is predominantly treated with systemic therapy, previous reports have revealed that ovarian metastases exhibit a notably lower sensitivity to systemic chemotherapy in comparison to metastases in other organs, with an objective response rate of <20% [6,21]. Some retrospective studies have shown that in patients with ovarian metastasis, the benefit of oophorectomy is significant, with a prolonged median ovary-specific survival (date of ovarian metastasis diagnosis to death, 20.8 months vs. 10.9 months) and progression-free survival (15.6 months vs.
6.1 months) compared with those who did not undergo oophorectomy [7]. Other studies reported a significant improvement in survival in patients with complete cytoreduction surgery (5-year survival rate of 47%; median survival of 48 months) [22]. The achievement of complete cytoreduction was considered an independent predictor of improved prognosis [23,24]. Some studies recommend prophylactic bilateral ovariectomy because, if one ovary proves to be metastatic, there is a substantial probability that the other ovary will also be invaded and may have developed microscopic metastases [25,26]. However, current studies are small-scale retrospective studies, and there are no large-scale prospective studies confirming the prognostic benefit of oophorectomy in patients with colorectal cancer. By contrast, in women who are premenopausal or have reproductive needs, oophorectomy may have a deleterious influence on hormonal and psychological health [27]. Furthermore, surgery may lead to organ damage or secondary surgical adhesions. Therefore, an in-depth assessment of the risks and benefits associated with the treatment modality is required.

Thus far, the mechanism of resistance to chemotherapeutic drugs in ovarian metastasis is not clear. With the development of sequencing technology, tumor-related genes are being increasingly identified, and RAS, BRAF, and PIK3CA mutations are being detected in colorectal cancer. In individuals with chemotherapy-resistant colorectal cancer, RAS mutations increased the risk of ovarian metastases (HR = 3.12) [28]. Previous studies have shown that combined treatment with bevacizumab and fluorouracil, irinotecan, leucovorin, and oxaliplatin improves the prognosis of metastatic colorectal cancer [29,30]. Cetuximab, as a first-line treatment, reduced the risk of progression for patients with metastatic colorectal cancer harboring wild-type KRAS [31]. In our study cohort, 18 (34.6%) patients had RAS gene mutations, 1 (1.9%) had a BRAF gene mutation, and 1 had a PIK3CA gene mutation. No statistically significant distinction was observed between the C+ and C− groups with regard to the administration of targeted drug therapy or chemotherapy regimen. This could be attributed to the fact that genetic testing was not conducted for all patients in the study, and genetic status was not monitored as the patients' diseases advanced. Thus far, there are no reliable evaluation tools or predictors for identifying patients who benefit from chemotherapy. Systemic chemotherapy is the preferred regimen for patients who are medically unfit or unwilling to undergo surgery.
Radiomics aims to extract an extensive array of quantitative features from clinical images using data-characterization algorithms that can uncover disease characteristics. Radiomics can preoperatively predict tumor genotype, microsatellite stability status, and prognosis [32-34]. Only a few studies have investigated the efficacy of systemic chemotherapy for patients with ovarian metastases from colorectal cancer, and conventional imaging has an extremely limited capacity for such prediction. Therefore, the current study primarily aimed to establish an accurate model for predicting the chemotherapy response of ovarian metastasis from colorectal cancer utilizing CT-based radiomics and machine learning. Five conventional machine learning models were employed in order to assess and validate the accuracy of each predictive model. The SVM model had a significantly higher AUC on the validation set (0.903, 95% CI: 0.788-0.967), with a sensitivity and specificity of 95.7% and 82.8%, respectively. The remaining four models had an AUC of >0.7, which indicated good predictive value and potential for further application. Therefore, our CT-based radiomics model could be beneficial as an early predictor of the efficacy of chemotherapy against ovarian metastases, identifying patients who are primarily drug-resistant and who do not respond well to chemotherapy, thereby allowing for a more personalized treatment plan such as chemotherapy combined with surgery, interventional therapy, or radiotherapy. In the no-chemotherapy-benefit group, it is important to reduce the toxicity caused by chemotherapeutic agents and select radical treatment before disease progression. In the chemotherapy-benefit group, chemotherapy combined with oophorectomy may prevent trauma associated with cytoreductive surgery, such as vascular, organ, or nerve damage.

Several limitations exist in the present study. First, it was retrospective in nature, and confounding variables such as chemotherapy regimen, cycle time, and postimaging modalities may have introduced bias. Second, the study's sample size was small, and an imbalance in sample size between groups could have impacted the predictive model's performance. Therefore, prospective studies with larger sample sizes should be performed to verify the study findings. Third, data were obtained from only one medical institution, which might limit the generalizability of the model. Further independent validation sets are required to confirm our findings. Fourth, deep learning was not used to build the model in this study, and we did not apply joint modeling of pathological features. By integrating deep learning or pathological features with CT-based radiomics, the accuracy of predicting the effectiveness of systemic chemotherapy for ovarian metastatic colorectal cancer could be further enhanced, reducing the error rate of the prediction model and increasing its reliability.
Conclusions

Five prediction models for evaluating systemic chemotherapy in patients with ovarian metastasis from colorectal cancer were established using machine learning algorithms. Among them, the SVM model had the best predictive ability, with strong discrimination on both the training and validation sets and high sensitivity, specificity, and accuracy. We propose the use of a baseline CT-based radiomics model for distinguishing potential responders from nonresponders to systemic chemotherapy for ovarian metastases in colorectal cancer. Radiomics profiling can be used as an adjunct to clinical treatment decision making to help oncologists predict chemotherapy response, leading to timely and effective individualized treatment planning for potential nonresponders. We hope that the study results can provide a basis for large-scale cohort studies. Nevertheless, prospective studies should be performed to improve the individualized prediction of responders in patients with ovarian metastatic colorectal cancer.

Figure 2. Radiomics feature selection via LASSO regression analysis. (A) Utilization of 10-fold cross-validation with the minimal mean squared error criterion to choose the optimal tuning parameter lambda. (B) The coefficient profile plot of five nonzero coefficients against the optimal log(lambda) sequence.

Figure 4. ROC analysis of the predictive models on the training dataset (A) and the validation dataset (B).

Figure 5. Calibration analysis of the LR model (A), NBB model (B), LDA model (C), SVM model (D), and RF model (E) on the validation dataset.

Figure 6. Analysis of the predictive models' decision curves on the validation dataset.

Table 1. Clinicopathological characteristics of all the enrolled patients.

Table 2. Detailed performance of the NBB model, LDA model, LR model, RF model, and SVM model on the validation dataset.
The description of Haematococcus privus sp. nov. (Chlorophyceae, Chlamydomonadales) from North America

An enormous body of research is focused on finding ways to commercialize carotenoids produced by the unicellular green alga, Haematococcus, often without the benefit of a sound phylogenetic assessment. Evidence of cryptic diversity in the genus means that comparing results of pigment studies may be confounded by the absence of a phylogenetic framework. Moreover, previous work has identified unnamed strains that are likely candidates for species status. We reconstructed the phylogeny of an expanded sampling of Haematococcus isolates utilizing data from nuclear ribosomal markers (18S rRNA gene, 26S rRNA gene, internal transcribed spacer [ITS]-1, 5.8S rRNA gene, and ITS-2) and the rbcL gene. In addition, we gathered morphological, ultrastructural and pigment data from key isolates of Haematococcus. Our expanded data and taxon sampling support the concept of a new species, H. privus, found exclusively in North America. Despite overlap in numerous morphological traits, results indicate that ratios of protoplast length to width and akinete diameter may be useful for discriminating Haematococcus lineages. High growth rate and robust astaxanthin yield indicate that H. rubicundus (SAG 34-1c) is worthy of additional scrutiny as a pigment source. With the description of H. privus, the evidence supports the existence of at least five species-level lineages in the genus. Our phylogenetic assessment provides the tools to frame future pigment investigations of Haematococcus in an updated evolutionary context. In addition, our investigation highlighted open questions regarding polyploidy and sexuality in Haematococcus which demonstrate that much remains to be discovered about this green flagellate.

INTRODUCTION

Recent molecular phylogenetic evidence led to major taxonomic and systematic revision that stripped the green algal genus, Haematococcus Flotow, of all but one species (H. lacustris Flotow) (Buchheim et al. 2013). In addition, a number of investigations revealed multiple lineages among taxa originally ascribed to that single species, H. lacustris (Girod-Chantrans) Rostafinski (Buchheim et al. 2013, Allewaert et al. 2015, Chekanov et al. 2020). Buchheim et al. (2013) originally identified five lineages that were referred to as Pluvialis A-E. Haematococcus pluvialis is now regarded as a synonym of H. lacustris (Nakada and Ota 2016), but at the time of our publication the consensus was that H. pluvialis was the legitimate species name, hence the reference to "Pluvialis" isolates in Buchheim et al. (2013). Members from two of these lineages (… and Pluvialis E) are now recognized as new species, H. rubicundus Allewaert et Vanormelingen and H. rubens Allewaert et Vanormelingen, respectively (Allewaert et al. 2015). In addition, Allewaert et al. (2015) affirmed the distinctiveness of the Pluvialis D (SAG 44.96) lineage but persuasively argued that SAG 34-1f should also be accorded status as a distinct lineage. At present, neither of these two lineages has been described as a new species. Mazumdar et al. (2018) revealed yet a fourth new species, H. alpinus Mazumdar et Gopalakrishnan, that was isolated from a high elevation habitat in New Zealand. Chekanov et al. (2020) did not introduce a new species, but their data revealed the existence of a distinct clade of polar isolates of H. lacustris. Allewaert et al. (2015), who focused exclusively on European isolates, noted that allies of Pluvialis B (sensu Buchheim et al. 2013) were not encountered in their collections or surveys. Neither Mazumdar et al. (2018) nor Chekanov et al. (2020) reported any strains that could be ascribed to the Pluvialis B lineage in their surveys of alpine New Zealand and the White Sea polar region, respectively. Lastly, work that focused on collecting and identifying Haematococcus isolates in South America (González et al. 2009, Gómez et al. 2016) has similarly failed to yield isolates that can be attributed to the "Pluvialis B" lineage in Haematococcus. In short, Haematococcus is now comprised of four species coupled with evidence of the existence of three additional lineages. Although Allewaert et al. (2015) presented evidence of distinct morphological and physiological traits, they also noted that much variability overlapped across lineages. Chelebieva et al. (2018) recorded developmental and biochemical variability among three isolates of H. lacustris (IMBR-1, IMBR-2, and IMBR-3) despite near molecular phylogenetic identity among the isolates. Moreover, all of the isolates and species produce an akinete stage that accumulates astaxanthin. Thus, from a practical standpoint, much of the morphological and physiological variation across the genus can be regarded as cryptic.
As noted above, just a handful of scientists have focused on expanding our understanding of Haematococcus diversity. In contrast, the overwhelming majority of effort and resources directed towards the study of Haematococcus (more than 100 citations since 2021; Google Scholar search on May 19, 2022) is being expended to optimize induction, extraction, and commercialization of valuable carotenoids. Unfortunately, it appears that at least some of this applied research is being done in the absence of an understanding of the phylogenetic diversity that was revealed in the last decade. Review articles on pigment analysis and optimization (e.g., Ahirwar et al. 2021, Oslan et al. 2021, Kim et al. 2022, Mota et al. 2022) summarize studies of H. lacustris (as H. pluvialis in most cases) exclusively, either ignoring the recent revelations regarding Haematococcus diversity (Buchheim et al. 2013, Allewaert et al. 2015, Mazumdar et al. 2018, Chekanov et al. 2020) or accepting suspect identifications reported in the myriad publications that serve as the basis for the review articles. For example, a number of pigment investigations relied solely on analyses of the 18S rRNA gene for species identification (e.g., Wang et al. 2019, Guo et al. 2021, Yu et al. 2021, Karuppan et al. 2022). Our research has shown that the highly-conserved 18S rRNA gene sequences fail to unambiguously resolve most strains that have been identified as distinct by other markers (e.g., internal transcribed spacer [ITS]-2). Lastly, some publications do not present any phylogenetic evidence validating their application of the name, H. lacustris or H. pluvialis, to the strain(s) used in the investigation (e.g., Ashokkumar et al. 2021, Du et al. 2021, Fang et al. 2022). A cursory survey of the literature indicates that at least seven strains (GXU-A23, H2, HP5, IBCE H-17, LUGU, NBU489, and ZY-18) used in a number of investigations have not been phylogenetically validated as H. lacustris. While this nomenclatural issue does not impugn the experimental design of an investigation, it does confound our ability to make comparative assessments of the experimental results among research groups and over time.

Our intent with this publication is to provide a substantive update to an assessment of phylogenetic and taxonomic diversity in Haematococcus that can inform both basic and applied research programs. To that end, the focus of this investigation is a formal description, as Haematococcus privus sp. nov., of the "Pluvialis B" organisms (sensu Buchheim et al. 2013) that have been collected exclusively from North America. Although the results from molecular phylogenetic analysis provide the bulk of support for describing a new species of Haematococcus, some morphological traits (e.g., akinete diameter) and pigment data (e.g., astaxanthin yield) may help to set this lineage apart from other species of Haematococcus. We also offer additional insight into the importance of several intriguing observations regarding the basic biology of Haematococcus that would be relevant for both basic and applied research.
MATERIALS AND METHODS

All Haematococcus taxa included in various analyses undertaken in this investigation are presented in Supplementary Table S1. Isolates targeted for microscopy are identified, and GenBank accession numbers are listed for taxa included in the six different molecular phylogenetic analyses (18S rRNA, 26S rRNA, rbcL, ITS-1 rRNA, 5.8S rRNA, ITS-2 rRNA) (Supplementary Table S1).

Light microscopy

Cells for light microscopy (differential interference contrast [DIC] and phase contrast [PC] optics) were grown in glass tube culture (5 mL) or flask culture (50 mL) using liquid Volvox medium (McCracken et al. 1980). Cells were either viewed as living specimens or fixed in 2% (v : v) glutaraldehyde in growth medium contained in 1.5 mL Eppendorf tubes. All cells were photographed at 40× (DIC or PC) using an AmScope MU1000 camera and software (AmScope x64, 4.11.1786420201020; Amscope, Irvine, CA, USA) attached to an Olympus BH2 chassis (Olympus America, Center Valley, PA, USA) equipped with a halogen source (12 V, 100 W, 3200K color temperature). Recorded images were measured using the tools that accompany the AmScope x64 software application. Eight different cell dimensions or attributes (Supplementary Table S2) were recorded for 43 randomly selected motile (flagellated) cells for three H. privus isolates (HP136, HP137, and HP138), two H. lacustris isolates (NIES-144 and SAG 49.94), one isolate of H. rubicundus (SAG 34-1c) and one isolate of H. rubens (SAG 34-1h). Using data collected by Allewaert et al. (2015) as a guide, ratios of cell length to width and protoplast length to width were calculated for motile cells from all isolates in the comparison. Motile cell length was measured from the cell wall apex at the midpoint between the emerging flagella to the posterior-most point on the outer cell wall boundary. Motile cell diameter was measured outer cell wall boundary to outer cell wall boundary at the broadest point and normal to the line of cell length. Protoplast length of motile cells was measured from the cell apex where flagella emerge from the protoplast to the posterior-most point of the protoplast. Protoplast diameter of motile cells was measured protoplast boundary to protoplast boundary at the broadest point and normal to the line of protoplast length. Cell diameter (wall to wall) was recorded for 43 randomly selected akinetes for the same set of Haematococcus isolates.

Statistical analysis

In advance of statistical analysis of measurements data, the shape of each distribution was evaluated using the Shapiro-Wilk test (Shapiro-Wilk test calculator, Statistics Kingdom).
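Although the calculations reported here were run with an online calculator and spreadsheet tools, the same screen-then-test workflow is easy to express in code. The sketch below is our own illustration with placeholder arrays; it mirrors the sequence used in this study (results section): a Shapiro-Wilk normality screen, a paired Wilcoxon signed-rank test of protoplast length versus width within an isolate, and a Mann-Whitney U test between isolates.

```python
from scipy import stats

def compare_morphometrics(protoplast_length, protoplast_width,
                          other_isolate_length, alpha=0.05):
    """Normality screen, then the nonparametric tests described in the text."""
    # Shapiro-Wilk: if either trait departs from normality, stay nonparametric.
    normal = (stats.shapiro(protoplast_length).pvalue > alpha and
              stats.shapiro(protoplast_width).pvalue > alpha)

    # Within-isolate: paired length vs. width (Wilcoxon signed-rank test).
    within = stats.wilcoxon(protoplast_length, protoplast_width)

    # Between-isolate: unpaired length vs. length (Mann-Whitney U test).
    between = stats.mannwhitneyu(protoplast_length, other_isolate_length)

    return {"normal": normal,
            "within_p": within.pvalue,
            "between_p": between.pvalue}
```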
Transmission electron microscopy

Motile cells of the HP136 isolate were harvested by centrifugation, fixed, embedded, and thin-sectioned for transmission electron microscopy (TEM) analysis using the protocol of Pegg et al. (2015). All images were digitally recorded using a Hitachi H7000 transmission electron microscope operated at an accelerating voltage of 75 kV (Hitachi High Technologies America, Schaumburg, IL, USA).

Molecular phylogeny

Extraction of nucleic acid, amplification of target DNA, and DNA sequencing were conducted as described previously (Buchheim et al. 2001, 2005, 2010, 2013). All sequences were assembled and edited using Sequencher vs 4.9 (Gene Codes, Ann Arbor, MI, USA). Edited sequences were imported into alignments where indels and substitutions were checked for accuracy. All sets of sequences except rbcL, ITS-2, and 5.8S rRNA were initially aligned using MUSCLE (Edgar 2004) as implemented in Mesquite 3.61 (Maddison and Maddison 2019). The ITS-2 data were initially aligned with the aid of secondary structure models (see below). The rbcL and 5.8S rRNA data were aligned manually. Manual alignment adjustments were completed using Mesquite. All data sets were analyzed using neighbor-joining (NJ) and maximum likelihood (ML) as implemented in PAUP* (Swofford 2003) and Bayesian inference (BI) as implemented in MrBayes 3.2.7 (Ronquist et al. 2012). Models of nucleotide substitution were selected by Akaike best fit analysis as implemented in PAUP* (Swofford 2003). Bootstrap analysis (Felsenstein 1985) was used to identify relative support for nodes of the trees generated by ML and NJ analyses. Bootstrap proportions and posterior probabilities were mapped to the corresponding nodes for each tree. All within- and between-group p-distances were calculated using MEGA7 (Kumar et al. 2016).
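The p-distances mentioned above were computed in MEGA7, but the quantity itself is simple: the proportion of aligned, unambiguous sites at which two sequences differ. A minimal Biopython sketch follows; the file name is a placeholder, and the gap and ambiguity handling may differ from MEGA7's defaults.

```python
from itertools import combinations
from Bio import AlignIO

def p_distance(seq_a: str, seq_b: str) -> float:
    """Proportion of differing sites, ignoring gap/ambiguous positions."""
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a in "ACGT" and b in "ACGT"]
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs) if pairs else float("nan")

# its2_aligned.fasta is a placeholder for an aligned marker data set.
aln = AlignIO.read("its2_aligned.fasta", "fasta")
dists = {(r1.id, r2.id): p_distance(str(r1.seq), str(r2.seq))
         for r1, r2 in combinations(aln, 2)}
```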
18S rRNA. The aligned 18S rRNA data set was comprised of 1,727 sites. A total of 83 taxa representing a broad sampling of chlamydomonadalean taxa were included in the analysis of 18S rRNA data. A list of the 16 Haematococcus and Ettlia Komárek taxa included in the analyses of the 18S data and their provenances are presented in Supplementary Table S1. All trees were rooted with data from sphaeroplealean taxa (i.e., part of a sister group to the Chlamydomonadales).

26S rRNA. The aligned 26S rRNA data set was comprised of 2,037 sites. A total of 20 taxa comprising select members of the Chlorogonia clade (sensu Nakada et al. 2008) were included in the analyses. A list of the 15 Haematococcus and Ettlia taxa included in the analyses of the 26S data and their provenances are presented in Supplementary Table S1. All trees were rooted with data from Chlamydomonas applanata Pringsheim (Polytominia sensu Nakada et al. 2008).

rbcL. The aligned rbcL data set was comprised of 1,125 sites for all but three sequences. Despite using primer pairs that generally yielded a product comprising more than 1 kb of the rbcL gene, our amplification protocols produced much smaller polymerase chain reaction (PCR) products for H. privus isolates (<700 bases using ChlorRbclF and ChlorRbclR) (Kim et al. 2015) and for Ettlia carotinosa Komárek (<500 bases; rbcL1 and rbcL4) (Nozaki et al. 1995). Partial overlapping sequences from two strains of H. privus (HP137 and HP138) were combined into a single composite taxon for analysis of the rbcL data. A total of 26 taxa comprising select members of the Chlorogonia clade (sensu Nakada et al. 2008) was included in all analyses. A list of the 17 Haematococcus and Ettlia taxa included in the analyses of the rbcL data and their provenances are presented in Supplementary Table S1. All trees were rooted with data from C. applanata (Polytominia sensu Nakada et al. 2008).

ITS-1. The aligned ITS-1 data set was comprised of 449 sites. A total of 34 sequences representing principally Haematococcus and Ettlia taxa were included in all ITS-1 analyses. A list of the 31 Haematococcus and Ettlia taxa included in the analyses of the ITS-1 data and their provenances are presented in Supplementary Table S1. The trees were rooted with data from C. applanata (Polytominia sensu Nakada et al. 2008).

5.8S rRNA. The aligned 5.8S rRNA data set was comprised of 155 sites. A total of 33 sequences representing principally Haematococcus and Ettlia taxa were included in all 5.8S rRNA analyses. A list of the 30 Haematococcus and Ettlia taxa included in the analyses of the 5.8S rRNA data and their provenances are presented in Supplementary Table S1. All trees were rooted with data from C. applanata (Polytominia sensu Nakada et al. 2008).

ITS-2. The aligned ITS-2 data set was comprised of 255 sites. Raw primary sequence data containing the ITS2 gene were annotated by comparison with other fully-annotated ITS2 sequences from existing alignments (Buchheim et al. 2013, Pegg et al. 2015). Secondary structure models for a number of ITS2 gene sequences from Haematococcus isolates (Buchheim et al. 2013, Pegg et al. 2015) were used as templates for homology modeling of secondary structure as implemented in the ITS2 Database V (Ankenbrand et al. 2015). Secondary structures for new ITS2 sequences from isolates of Haematococcus that cluster with existing species were modeled using corresponding templates. Homology modeling identified the template for Haematococcus sp. SAG 34-1f as producing the highest helix transfer percentages for the ITS2 from new Haematococcus (H. privus) isolates that form a monophyletic group based on preliminary analyses. Homology modeling identified the templates for Haematococcus sp. SAG 34-1f and H. rubicundus SAG 34-1m as producing the highest helix transfer percentages for the ITS2 from H. alpinus. Unfortunately, some short sequences plus the presence of ambiguous base calls reflecting either sequencing error or intragenomic variability (Alanagreh et al. 2017) in a number of ITS-2 sequences confounded our ability to obtain reliable sequence-structure alignments. Consequently, we used a secondary-structure-guided ITS-2 alignment in a final round of manual sequence alignment. All trees were rooted with data from C. applanata (Polytominia sensu Nakada et al. 2008).
Induction of carotenoid accumulation

To examine carotenoid content, select strains of Haematococcus (H. lacustris [NIES-144], H. rubicundus [SAG 34-1c], H. rubens [SAG 34-1h], H. privus [HP136], and H. privus [HP137]) were first grown in Volvox medium to bring all cells to the motile stage. Cell densities were measured using a Countess III automated cell counter (Invitrogen, Waltham, MA, USA) with disposable counting cells. Carotenoid induction experiments were initiated with three replicates for each strain where a total of 9,180 cells per mL, or approximately 45,900 motile cells, were inoculated into 5 mL of Volvox medium (minus glycerophosphate) and grown in high light (714 μmol photons m−2 s−1) using a MAXSISUN MF1000 Grow Light, 100 Watt LED panel on a 12 h : 12 h LD cycle at 23°C for 14 days.

Carotenoid analysis

Carotenoids were extracted from each replicate sample at the end of 14 days. Cell densities for each replicate were measured using a Countess III automated cell counter (Invitrogen) with disposable counting cells. Pelleted cells for each replicate were resuspended in a 0.9% NaCl solution (500 µL) with absolute ethanol (200 µL). Cells were subjected to three rounds of grinding with 0.1 g of 2 mm diameter zirconium beads for 180 s at 4,000 Hz using a Beadbug homogenizer (Benchmark Scientific, Sayreville, NJ, USA). A total of 500 µL of a 1 : 1 (v : v) solution of hexane : methyl tertiary-butyl ether (MTBE) was added to the extract and subjected to one round of homogenization (180 s at 4,000 Hz). The solvent layer was recovered after centrifugation (12,000 rpm for 3 min) in a 9 mL screw cap tube following each round of homogenization. A fresh 500 µL of the 1 : 1 solution of hexane : MTBE was added to the homogenate tube and the process repeated two more times. The solvent layer from the three rounds of extraction was dried under a stream of nitrogen gas and stored at −80°C under nitrogen gas. All raw extracts were prepared for analysis by mild saponification with 1,000 µL of 0.02 M NaOH in methanol for four hours in the dark in a 9 mL glass tube capped under nitrogen. The saponified extract was then re-extracted by adding 1,000 µL of saturated NaCl solution, followed by 2,000 µL of ultrapure water and then 2,000 µL of hexane : MTBE (1 : 1) and mixing by vortex at each step. This solution was then centrifuged for 5 min at 2,500 rpm using an Eppendorf 5417C benchtop centrifuge (Eppendorf Manufacturing Corp., Enfield, CT, USA). The solvent layer was recovered to 2 mL screw cap tubes, dried under nitrogen and stored at −80°C under nitrogen.

High-performance liquid chromatography (HPLC) was used to identify and quantify the carotenoid content in the samples. The dried and saponified extracts for each replicate sample were resuspended in 500 µL of methanol : acetonitrile (1 : 1) by heating to 50°C for up to 2 h and vortexing intermittently to dissolve any remaining pigment. A total of 50 µL of each sample was then injected into an Agilent 1200 series HPLC (Agilent, Santa Clara, CA, USA) and separated by reverse-phase chromatography on a C30 column (5.0 µm, 4.6 mm × 250 mm, YMC, CT99S05-2546WT) heated to 30°C. The mobile phase was pumped at a constant rate of 1.2 mL min−1 and consisted of a gradient of acetonitrile : methanol : dichloromethane (44 : 44 : 12) (v : v : v) from 0 to 11 min, ramping up to acetonitrile : methanol : dichloromethane (35 : 35 : 30) from 11 to 21 min, followed by isocratic conditions through 30 min, and then a return to initial conditions in minutes 30-33. The samples were monitored with a UV-Vis diode array detector at 445 and 480 nm. Astaxanthin was identified and quantified by comparison to an authentic external standard (gift of DSM Inc.).
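Quantification against the authentic external standard follows ordinary external-standard calibration: peak areas of standard dilutions define a calibration line, a sample's 480 nm peak area is converted to on-column mass, and the result is scaled to the whole extract and the number of cells harvested. The sketch below uses our own parameter names; the authors' exact calibration routine is not described.

```python
import numpy as np

def astaxanthin_yield(peak_area: float, std_areas: np.ndarray,
                      std_amounts_ug: np.ndarray, inj_fraction: float,
                      cells: float) -> float:
    """Micrograms of astaxanthin per 1e5 cells via external-standard calibration.

    peak_area      : 480 nm peak area for the sample injection
    std_areas      : peak areas of the authentic standard dilutions
    std_amounts_ug : on-column micrograms of those dilutions
    inj_fraction   : injected fraction of the total extract (e.g., 50/500 uL)
    cells          : number of cells in the extracted pellet
    """
    slope, intercept = np.polyfit(std_areas, std_amounts_ug, 1)
    ug_on_column = slope * peak_area + intercept
    ug_total = ug_on_column / inj_fraction
    return ug_total / (cells / 1e5)
```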
RESULTS

Light and electron microscopy

Results from morphometric analysis of a suite of traits are presented in Supplementary Table S2. In addition to data for Haematococcus sp. (Pluvialis B lineage), comparison data from two strains of H. lacustris, one strain of H. rubens, one strain of H. rubicundus and the published data for H. rubens and H. rubicundus are included in Supplementary Table S2.

Motile cell shape for all three isolates of H. privus studied varied from ovate (Fig. 1A-E) in younger cells to nearly spherical in older cells or sporangia (Fig. 1F). Motile cell length ranged from 19.2 to 42.5 µm and motile cell width ranged from 15.8 to 36.9 µm. All three strains produced at least some motile cells bearing a wall papillum (Fig. 1D) but the presence of a wall papillum varied between strains. The cell wall of motile cells is thin (Figs 1A-D, 2D, 2F, 3A, B & E) and is separated from the protoplast by periplasmic spaces generally bounded by thin, cytoplasmic strands (see below). Save for the cytoplasmic strands, light microscopy does not reveal evidence of substance in the periplasmic space, but thin sections suggest the presence of a quasi-fibrous or granular material (Figs 2D, F & 3A-E). Protoplast length in motile cells ranged from 19.1 to 31.8 µm and protoplast width ranged from 11.4 to 28.0 µm. Protoplast shape in motile cells varied from shallowly lobate (Fig. 1A) to ovate / obtuse (Fig. 1B & D) to elliptic (Fig. 1C, E & F). Most cells that were observed bore a plasma papillum (i.e., acuminate end) at the cell apex where the flagella are inserted and the flagellar collars meet the protoplast (Fig. 1B & D). The chloroplast generally did not extend into the plasma papillum. Cells lacking a wall papillum were generally associated with larger and more spherical protoplasts (e.g., Fig. 1C). Cytoplasmic strands traversing the periplasmic space were generally present (Figs 1A, B, D, 2D & 3B), often branched near the cell wall boundary (Fig. 1A, arrows). When cytoplasmic strands could not be detected or were few in number (Fig. 1C & E), the corresponding protoplasts were generally larger and manifest a much smaller periplasmic space. The chloroplast is generally cup-shaped, surrounds a central nucleus (Fig. 2E) and bears lobes that appear to be defined, at least in part, by the presence of the large, aqueous vacuoles (Figs …). Thin sections of pyrenoids revealed multiple starch plates surrounding the pyrenoid matrix (Fig. 2A-C). Tubular invaginations of the pyrenoid matrix (Fig. 2A-C) could be seen to traverse sutures of adjacent starch plates and connect with thylakoids in the chloroplast (Fig. 2B & C). A stigma was not always detected, but when visible, generally appeared as an elongate, acuminate shape (Fig. 1E, arrow). Thin sections revealed a stigma comprised of a single or double layer of osmiophilic globules (Fig. 2F). Motile cells generally contained multiple, large, noncontractile vacuoles (Figs 1A, 1C-E & 2C-F) as well as multiple, small, putative contractile vacuoles (Fig. 1D, arrow). Light microscopy demonstrated that the latter were randomly scattered along the periphery of the protoplast and never observed at the apex of the cell. A flagellar collar was observed at both the light and electron microscope levels (Figs 1B, D & 3A-E). Flagellar collar length is variable with a maximum of ca. 5 µm. Light (Fig. 1B) and electron microscopy (Fig. 3B) suggest that the flagellar collar flares near the cell wall boundary. Thin sections (Fig. 3A-E) indicate a dense, fibrillar nature that is lacking in any regular striations lining the inner wall adjacent to the flagellar axoneme. The flagellar collar is broadly connected with the thin cell wall (Fig. 3B & E) but appears to end before reaching the cell membrane at the base of the flagellum (Fig. 3A & B). Mean flagellar length varied from 17.0 to 20.9 µm (Supplementary Table S2) with a range of 14.9 to 23.5 µm. The ratio of cell length to flagellar length ranged from 1.1 to 2.3. A single section of the flagellar apparatus was observed showing one basal body in transverse section and the second basal body in longitudinal section (Fig. 3A). Putative flagellar root microtubules adjacent to the basal bodies are present (arrows in Fig. 3A & D). Considerable electron dense material is present in the space directly adjacent to the basal bodies (Fig. 3A), but a serious reconstruction of the flagellar apparatus will require evaluation of additional planes of section that is beyond the scope of this investigation.

Mean cell lengths were determined and were always greater than mean cell width for all comparisons (Supplementary Table S2 & Fig. S1A). Mean protoplast length was also always greater than mean protoplast width (Supplementary Table S2 & Fig. S1B). The protoplast length to width ratio varied across all taxa surveyed (Supplementary Table S2) with H. rubens (SAG 34-1h) exhibiting a much higher ratio than all other taxa (Supplementary Fig. S2). Results from Wilcoxon Signed-Ranks tests comparing protoplast length and protoplast width confirmed that the difference in protoplast length and width within each taxon or isolate is statistically significant for all taxa or isolates (data not shown). Results from Mann-Whitney U tests indicated that the mean protoplast lengths for each of the three isolates of Haematococcus sp. (Pluvialis B) are not statistically significantly different from one another (Supplementary Fig. S3A). All of the between-isolate comparisons of mean protoplast length (Supplementary Fig. S3A) involving Haematococcus sp. (Pluvialis B, the putative new species, H. privus) were statistically significant except for comparisons with H. rubens (SAG 34-1h). In addition, comparisons of protoplast length in H. rubicundus (SAG 34-1c) with the two H. lacustris isolates failed to detect significant differences. Results from Mann-Whitney U tests indicated that the mean protoplast widths for each of the three isolates of H. privus are not statistically significantly different from one another (Supplementary Fig. S3B). None of the between-isolate comparisons of mean protoplast width (Supplementary Fig. S3B) involving H. privus were statistically significant except for comparisons with H. rubens. In fact, only the comparisons of protoplast width involving H. rubens (SAG 34-1h) were statistically significantly different (Supplementary Fig. S3B). Results from Mann-Whitney U tests indicated that the ratios of mean protoplast length to mean protoplast width in comparison of the three strains of H. privus (HP136, HP137, and HP138) were not statistically significant (Supplementary Fig. S4). The same test demonstrated that all but one of the between-isolate comparisons of mean protoplast length to width ratios were statistically significantly different (Supplementary Fig. S4). The one exception was the comparison of ratios for SAG 34-1c (H. rubicundus) and NIES-144 (H. lacustris) that did not differ significantly (Supplementary Fig. S4).

The green palmella stage is spherical and has acquired a thick cell wall boundary lacking in any appreciable periplasmic space (Fig. 1G & H). Pyrenoids are present at this stage (Fig. 1G), but the cell thickness may obscure them (Fig. 1H). The wall of the motile stage within which palmella cells arise is occasionally present surrounding the palmella cell (Fig. 1H). Bright red pigment accumulation generally proceeds from the center where it presumably surrounds the nucleus, which is obscured by the dense cytoplasm (Fig. 1H). Measurements of palmella diameter for HP137 revealed a mean of 19.8 µm with a standard error of the mean of 0.78 µm. No additional comparative data on palmella stage cell diameters and morphometric analyses were prepared for this investigation.

Cells of the akinete stage (often referred to as the aplanospore stage in many recent publications) were generally spherical, varied in diameter (Supplementary Table S2 & Fig. S5), retained the thick cell wall and had accumulated red pigment that fills the entire protoplast (Fig. 1I). Most definitions of the term 'aplanospore' emphasize that the wall of the non-motile stage is not derived from the parent cell. Although this describes the nature of the thick wall that develops in the green, non-motile, palmella stages of Haematococcus (see Fig. 1H), it is our opinion that the usage of the term in phycology was intended to describe the production of numerous, non-motile cells in a sporangium. As a consequence, many phycology textbooks refer to the red, thick-walled stage of Haematococcus as an 'akinete.' Most phycology texts define the term 'akinete' as a vegetatively-derived, dormant, or resistant stage that bears a wall derived from the parental cell. Thus, the application of the term 'akinete' for the thick-walled red stage of Haematococcus is appropriate since one can argue that the thick-walled red stage is the product of development from a thick-walled green (palmella or aplanospore) stage. Pyrenoids are not typically visible with the light microscope in akinetes (Fig. 1I). No akinetes were examined by electron microscopy as part of this investigation.

Results from the Shapiro-Wilk test for akinete diameters indicated that most measurement distributions should not be treated as normal. Consequently, all but ANOVA comparisons were completed using non-parametric tests. Mean akinete diameter (Supplementary Table S2 & Fig. S5) varied from 19.55 µm (SAG 49.94) to nearly 32 µm (HP136). Results from Mann-Whitney U tests indicated that the mean akinete diameters for HP136 and HP138 were not statistically significantly different from one another (Supplementary Fig. S6). However, akinete diameters for HP137 vs. HP138 and HP136 vs. HP137 were statistically significantly different (Supplementary Fig. S6). All of the between-isolate comparisons of akinete diameter (Supplementary Fig. S6) involving isolates of the putative new species were statistically significant except for one comparison with H. rubens (SAG 34-1h vs. HP137). All of the remaining akinete diameter comparisons were statistically significantly different except for the SAG 34-1c vs. SAG 49.94 pairing (Supplementary Fig. S6). Box and whisker plots (Supplementary Fig. S7) are included to illustrate the presence of numerous large outlier akinetes for most strains. If normal distributions are assumed for akinete diameters, single factor ANOVA fails to discriminate the three isolates (HP136, HP137, and HP138) of the putative new species (Supplementary Fig. S8A). Single factor ANOVA comparisons of the three isolates with each of the remaining taxa (Supplementary Figs S8B, C & S9) indicate that akinete diameters in H. privus are significantly different from akinete diameters for all other isolates / taxa.
1H).Bright red pigment accumulation generally proceeds from the center where it presumably surrounds the nucleus which is obscured by the dense cytoplasm (Fig. 1H).Measurements of palmella diameter for HP137 revealed a mean of 19.8 µm with a standard error of the mean of 0.78 µm.No additional comparative data on palmella stage cell diameters and morphometric analyses were prepared for this investigation. Cells of the akinete stage (often referred to as the aplanospore stage in many recent publications) were generally spherical, varied in diameter (Supplementary Table S2 & Fig. S5), retained the thick cell wall and had accumulated red pigment that fills the entire protoplast (Fig. 1I).Most definitions of the term 'aplanospore' emphasize that the wall of the non-motile stage is not derived from the parent cell.Although this describes the nature of the thick wall that develops in the green, non-motile, palmella stages of Haematococcus (see Fig. 1H), it is our opinion Fig. 4. Phylogeny of Haematococcus-based 18S rRNA data from a taxonomically broad sampling of chlamydomonadalean genera.The maxi- mum likelihood (ML) tree was reconstructed using 10 heuristic searches with random taxon addition and the tree bisection-reconnection branchswapping algorithm as implemented in PAUP*4.0a169(Swofford 2003) (Model: GTR + I + G; nst = 6; rclass = abcdef; rmatrix = 1.0075713 2.6530925 1.3737438 0.60045607 3.9176836; basefreq = 0.22460816 0.2293864 0.28391317; rates = gamma shape = 0.44499062; pinv = 0.54795489).Data from members of the proposed new species, H. privus, are highlighted in boldface.The ML tree is presented with bootstrap proportions (>69) from ML analyses (100 replicates), bootstrap proportions (>69) from neighbor-joining analyses (100 replicates) and posterior probabilities (>0.94) from Bayesian inference are mapped to the corresponding branches.The tree is rooted with data from Acutodesmus and Ankistrodesmus.Scale bar represents: number of substitutions per site. clade, but without robust support (Fig. 5).All H. rubicundus isolates formed a monophyletic group with robust support (Fig. 5).An unnamed Haematococcus isolate from Norway (SAG 34-1f ) was resolved with robust support as the sister group to H. rubicundus (Fig. 5).The two isolates of H. privus included in the analysis of 26S rRNA data formed a monophyletic group with robust support.The 26S rRNA data identified the H. privus and H. rubicundus clades as sister taxa, but the alliance lacks robust support (Fig. 5). rbcL phylogeny Results from phylogenetic analysis of rbcL data for a focused set of taxa (principally Chlorogonia taxa) resolved a monophyletic Haematococcus with robust support for ML and BI analyses (Fig. 6).Brachiomonas, Rusalka, and E. carotinosa formed a robust grade with the latter resolved as the sister group to Haematococcus (Fig. 6).A monophyletic Chlorogonium clade was resolved as sister group to the above alliance, but without robust support (Fig. 6).Data from the sole strain of H. alpinus was robustly resolved as the sister group to all remaining isolates of Haematococcus.All H. lacustris isolates and one H. rubens isolate (BE05_11) formed a monophyletic carotinosa clade with the highest support.Rusalka was resolved, with strong support as the sister group to the Haematococcus + Ettlia alliance (Fig. 4).Chlorogonium, Brachiomonas, Chlamydomonas gleophila, and Chlamydomonas perpusilla were allied with Rusalka, Haematococcus, and Ettlia as the Chlorogonia clade (sensu Nakada et al. 
2008).Only the three Haematococcus sp.(H.privus) isolates included in the analysis formed a monophyletic group with robust support within the Haematococcus and Ettlia alliance (Fig. 4). 26S rRNA phylogeny Results from phylogenetic analysis of 26S rRNA data for a focused set of taxa (principally Chlorogonia taxa) resolved a monophyletic Haematococcus with robust support (Fig. 5).Chlorogonium, Brachiomonas, and E. carotinosa form a robust grade with the latter resolved as the sister group to Haematococcus (Fig. 5).All H. lacustris isolates form a monophyletic group with robust support (Fig. 5).The one strain of H. rubens (SAG 34-1h) included in this analysis was resolved with strong support as the sister group to H. lacustris (Fig. 5).An unnamed Haematococcus isolate from South Africa (SAG 44.96) was resolved as the sister group to the H. rubens and H. lacustris Fig. 5. Phylogeny of Haematococcus based on 26S rRNA data from a taxonomically constrained sampling of species and isolates.The maximum likelihood (ML) tree was reconstructed using 10 heuristic searches with random taxon addition and the tree bisection-reconnection branchswapping algorithm as implemented in PAUP*4.0a169(Swofford 2003) (Model: TrN + I + G; nst = 6; rclass = abaaca; rmatrix = 1 2.3892541 1 1 7.6413502; basefreq = 0.25978574 0.21583794 0.30357923; rates = gamma shape = 0.64254561 pinv = 0.60988237).Data from members of the proposed new species, H. privus, are highlighted in boldface.The ML tree is presented with bootstrap proportions (>69) from ML analyses (100 replicates), bootstrap proportions (>69) from neighbor-joining analyses (100 replicates) and posterior probabilities (>0.94) from Bayesian inference are mapped to the corresponding branches.The tree is rooted with data from Chlamydomonas applanate.Scale bar represents: number of substitutions per site. formed a monophyletic group with robust support (Fig. 7).An unnamed isolate from Norway (SAG 34-1f ) was resolved, without robust support, as the sister group to the H. rubicundus clade.All isolates of the putative new species, H. privus, were robustly resolved as a monophyletic group (Fig. 7).The H. privus alliance was resolved as sister group to the H. rubicundus and Haematococcus sp.(SAG 34-1f ) alliance but without robust support (Fig. 7).The H. alpinus isolate was resolved as the sister taxon to the unnamed Haematococcus isolate from South Africa (SAG 44.96) but without robust support.The H. alpinus and Haematococcus sp.(SAG 44.96) alliance was identified as the sister group to the H. privus + Haematococcus sp.+ H. rubicundus alliance but without robust support (Fig. 7). 5.8S rRNA phylogeny Results from phylogenetic analysis of 5.8S rRNA data for a focused set of taxa (principally Haematococcus taxa) resolved a monophyletic Haematococcus with robust (p ≥ 0.95) support from only the BI analysis (Fig. 8).E. carotinosa was resolved as the sister group to Haematococcus group with robust support (Fig. 6).The H. rubens isolate and two H. lacustris isolates (UTEX 2205 and CCAP 34/7) formed a robust sub-clade within the broader H. lacustris alliance (Fig. 6).The composite taxon for H. privus was robustly resolved as part of a clade that includes all strains of H. rubicundus (Fig. 6). ITS-1 phylogeny Results from phylogenetic analysis of ITS-1 rRNA data for a focused set of taxa (principally Haematococcus taxa) resolved a monophyletic Haematococcus with robust support (Fig. 7).E. carotinosa was robustly resolved as the sister group to Haematococcus (Fig. 
7). All isolates of H. lacustris were robustly resolved as a monophyletic group (Fig. 7). The data resolve two robust sub-clades of H. lacustris (Fig. 7). One of the H. lacustris sub-clades was exclusively comprised of taxa from the White Sea region (Fig. 7). The data resolved the two isolates of H. rubens included in the analysis (SAG 34-1h and BE05_11) as the sister group to H. lacustris, but only the BI analysis exhibited robust statistical support. All isolates of H. rubicundus formed a monophyletic group with robust support (Fig. 7). An unnamed isolate from Norway (SAG 34-1f) was resolved, without robust support, as the sister group to the H. rubicundus clade. All isolates of the putative new species, H. privus, were robustly resolved as a monophyletic group (Fig. 7). The H. privus alliance was resolved as sister group to the H. rubicundus and Haematococcus sp. (SAG 34-1f) alliance but without robust support (Fig. 7). The H. alpinus isolate was resolved as the sister taxon to the unnamed Haematococcus isolate from South Africa (SAG 44.96) but without robust support. The H. alpinus and Haematococcus sp. (SAG 44.96) alliance was identified as the sister group to the H. privus + Haematococcus sp. + H. rubicundus alliance but without robust support (Fig. 7).

Fig. 6. Phylogeny of Haematococcus based on partial rbcL data from a taxonomically constrained sampling of species and isolates. The maximum likelihood (ML) tree was reconstructed using 10 heuristic searches with random taxon addition and the tree bisection-reconnection branch-swapping algorithm as implemented in PAUP* 4.0a169 (Swofford 2003) (Model: GTR + G; nst = 6; rclass = abcdef; rmatrix = 0.52965762 2.0634922 5.0120517 0.4799857 8.3114761; basefreq = 0.27672136 0.17217936 0.21749192; rates = gamma shape = 0.16518312). Data from a composite taxon derived from partial sequences from two strains representing the proposed new species, H. privus, are highlighted in boldface. The ML tree is presented with bootstrap proportions (>69) from ML analyses (100 replicates), bootstrap proportions (>69) from neighbor-joining analyses (100 replicates) and posterior probabilities (>0.94) from Bayesian inference mapped to the corresponding branches. The tree is rooted with data from Chlamydomonas applanata. Scale bar represents: number of substitutions per site.

5.8S rRNA phylogeny
Results from phylogenetic analysis of 5.8S rRNA data for a focused set of taxa (principally Haematococcus taxa) resolved a monophyletic Haematococcus with robust (p ≥ 0.95) support from only the BI analysis (Fig. 8). E. carotinosa was resolved as the sister group to Haematococcus, but only the BI data are robust (Fig. 8). All isolates of H. lacustris and H. rubens were resolved as a monophyletic group but only the NJ and BI results evidenced strong support (Fig. 8). All isolates of H. rubicundus, the two unnamed Haematococcus isolates and the one H. alpinus isolate share a node with the H. lacustris and H. rubens clade but only the BI results were robust (Fig. 8). All isolates of H. privus formed a robust, monophyletic group; the H. privus clade was the sister group to the remainder of Haematococcus species and isolates but only the BI results indicate high statistical probability (p ≥ 0.95) (Fig. 8).

ITS-2 phylogeny
Results from phylogenetic analysis of ITS-2 rRNA data for a focused set of taxa (principally Haematococcus taxa) resolved a monophyletic Haematococcus with robust support from only the NJ and BI analyses (Fig. 9). E. carotinosa was robustly resolved as the sister group to Haematococcus (Fig. 9). All isolates of H. lacustris were resolved as a monophyletic group (Fig. 9) but the alliance lacks robust support. The data resolved two sub-clades of H. lacustris (Fig. 9). One of the H. lacustris sub-clades was exclusively comprised of taxa from the White Sea region (Fig. 9) but support for the sub-clade was robust only for the NJ analysis. All isolates of H. rubicundus were allied in a clade with robust support (Fig. 9). The unnamed Haematococcus isolate from Norway (SAG 34-1f) was identified as the sister group to H. rubicundus but support was not robust (Fig. 9). The H. alpinus isolate and the unnamed Haematococcus isolate from South Africa (SAG 44.96) were resolved as sister taxa but the alliance was not robust (Fig. 9); this clade was identified as the sister group to the H. rubicundus + Haematococcus sp. (SAG 34-1f) alliance but the support was not robust (Fig. 9). All isolates of the putative new species, H. privus, were robustly resolved as a monophyletic group (Fig. 9). The H. privus alliance was resolved, without robust support, as the sister group to the H. alpinus + Haematococcus sp. (SAG 44.96) + Haematococcus sp. (SAG 34-1f) + H. rubicundus clade (Fig. 9). The two isolates of H. rubens (SAG 34-1h and BE05_11) were robustly resolved as a monophyletic group (Fig. 9), which was, in turn, resolved as the sister to the H. privus + H. alpinus + Haematococcus sp. (SAG 44.96) + Haematococcus sp. (SAG 34-1f) + H. rubicundus clade, but without robust support (Fig. 9).

Fig. 7.
Phylogeny of Haematococcus based on internal transcribed spacer-1 data from a taxonomically constrained sampling of species and isolates. The maximum likelihood (ML) tree was reconstructed using 10 heuristic searches with random taxon addition and the tree bisection-reconnection branch-swapping algorithm as implemented in PAUP* 4.0a169 (Swofford 2003) (Model: TVM + G; nst = 6; rclass = abcdbe; rmatrix = 1.2383809 4.4338084 2.6430715 0.37224268 4.4338084; basefreq = 0.2040875 0.28227467 0.25382161; rates = gamma shape = 1.0941179). Data from members of the proposed new species, H. privus, are highlighted in boldface. The ML tree is presented with bootstrap proportions (>69) from ML analyses (100 replicates), bootstrap proportions (>69) from neighbor-joining analyses (100 replicates) and posterior probabilities (>0.94) from Bayesian inference mapped to the corresponding branches. Taxa with duplicate sequence data that were excluded from the analysis are listed to the right of the tree and paired, by connecting lines, with the included strain. The tree is rooted with data from Chlamydomonas applanata. Scale bar represents: number of substitutions per site.

Fig. 8. Phylogeny of Haematococcus based on 5.8S rRNA data from a taxonomically constrained sampling of species and isolates. The maximum likelihood (ML) tree was reconstructed using 10 heuristic searches with random taxon addition and the tree bisection-reconnection branch-swapping algorithm as implemented in PAUP* 4.0a169 (Swofford 2003) (Model: K80; tratio = 10.119339; basefreq = equal). Data from members of the proposed new species, H. privus, are highlighted in boldface. The ML tree is presented with bootstrap proportions (>69) from ML analyses (100 replicates), bootstrap proportions (>69) from neighbor-joining analyses (100 replicates) and posterior probabilities (>0.94) from Bayesian inference mapped to the corresponding branches. The tree is rooted with data from Chlamydomonas applanata. Scale bar represents: number of substitutions per site.

Carotenoid content
Results of the pigment induction experiments failed to
Although the mean for total carotenoid yield was highest in H. rubicundus (SAG 34-1c) at 14.32 µg per 10^5 cells, this difference was not statistically significant from the other strains (F(4,10) = 0.371, p = 0.824) (Fig. 10B). The total carotenoid yield did not vary greatly between the remaining strains, with a mean of ca. 8 µg per 10^5 cells (Fig. 10B). Astaxanthin was the most abundant carotenoid type in all strains, ranging from 62.7 to 67.9% of total carotenoid content. The mean for total astaxanthin yield was highest in E. carotinosa at 9.58 µg per 10^5 cells, but was not statistically significantly greater than that of the other strains (F(4,10) = 0.393, p = 0.809) (Fig. 10C). The mean total astaxanthin yield did not vary greatly between the remaining strains, with a mean of ca. 6 µg per 10^5 cells (Fig. 10C).
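The F statistics quoted above can be made concrete with a minimal single-factor ANOVA sketch. The yield values below are hypothetical placeholders; only the design (five strains with three replicates each, giving the reported degrees of freedom of 4 and 10) mirrors the text, and scipy stands in for the Excel ANOVA tools named in the Materials and Methods.

```python
# Minimal single-factor ANOVA sketch for per-cell pigment yields.
# All yield values are hypothetical; the 5-strain x 3-replicate layout
# reproduces the F(4,10) degrees of freedom reported for Fig. 10B & C.
from scipy import stats

yields_by_strain = {                  # ug per 10^5 cells (hypothetical)
    "HP137":     [8.1, 7.6, 8.4],
    "HP138":     [7.9, 8.6, 8.0],
    "SAG 34-1c": [14.9, 13.2, 14.8],  # H. rubicundus, highest mean
    "strain_4":  [8.3, 7.7, 8.2],
    "strain_5":  [7.5, 8.8, 8.1],
}

f_stat, p_value = stats.f_oneway(*yields_by_strain.values())
# df_between = 5 - 1 = 4; df_within = 15 - 5 = 10
print(f"F(4,10) = {f_stat:.3f}, p = {p_value:.3f}")
```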
S7) under the indicated culturing conditions. These measurement data, for the motile cells in particular, will require further testing that takes into account possible influence of development or time of day.

Light microscopy
The basic features of the motile stage (Fig. 1A-E) and akinete stage (Fig. 1H & I) of H. privus (sp. nov.) vary little from the other recognized species of Haematococcus. Similarly, the palmella stage (Fig. 1G) does not appear to be distinct, but no comparative measurement data were recorded for this stage. However, simple morphometric analyses of the motile stage indicate that the ratio of

Fig. 9. Phylogeny of Haematococcus based on a manually-adjusted, secondary structure-guided alignment of internal transcribed spacer-2 data from a taxonomically constrained sampling of species and isolates. The maximum likelihood (ML) tree was reconstructed using 10 heuristic searches with random taxon addition and the tree bisection-reconnection branch-swapping algorithm as implemented in PAUP* 4.0a169 (Swofford 2003) (Model: SYM + G; nst = 6; rclass = abcdef; rmatrix = 3.1124081 5.8032147 3.5500665 0.68624069 10.534802; gamma shape = 1.0425506). Data from members of the proposed new species, H. privus, are highlighted in boldface. The ML tree is presented with bootstrap proportions (>69) from ML analyses (100 replicates), bootstrap proportions (>69) from neighbor-joining (NJ) analyses (100 replicates) and posterior probabilities (>0.94) from Bayesian inference mapped to the corresponding branches. The tree is rooted with data from Chlamydomonas applanata. Scale bar represents: number of substitutions per site.

Transmission electron microscopy
Ultrastructural features of the H. privus motile cell are generally similar to observations recorded in previous investigations of Haematococcus. Specifically, pyrenoid ultrastructure in H. privus appears to be similar to that in H. alpinus (Mazumdar et al. 2018), H. lacustris (Santos and Mesquita 1984, Damiani et al. 2006, Pegg et al. 2015, unpublished observations) and E. carotinosa (Pegg et al. 2015). However, our TEM data provide new insights into the unusual flagellar collar in Haematococcus, a structure that was first noted by Hazen (1899) and Herrick (1899). Although similar structures have been detailed in Chlamydomonas (Ringo 1967, Harris 2009), the flagellar collar in Haematococcus is much longer than that in Chlamydomonas simply because the distance between protoplast and cell wall differs greatly between the two species. The first ultrastructural investigation of the flagellar collar (or flagellar tube) was reported by Bowen (1964), who noted that these structures were observed in Haematococcus and Balticola Droop, both of which have substantive periplasmic spaces between the cell wall and protoplast. However, the flagellar collars of Haematococcus were much longer even than those observed in Balticola (Bowen 1964). A comparison of light microscopic data (unpublished observations) indicates that the flagellar collar of H. privus appears to be similar to that in other species or lineages of Haematococcus. However, the length of these unusual features seems likely to vary as motile cells enlarge during maturation towards non-motile phases. At least some sporangia of H. privus (like other species of Haematococcus) remain motile during zoosporogenesis where one of the daughter cells may retain the parental flagella (Pocock 1960). The development and fate of the flagellar collar during motile cell maturation and sporogenesis requires further investigation.

Molecular phylogeny
Results from the phylogenetic analyses presented here support the view that Haematococcus is comprised of at least five distinct, species-level lineages (Allewaert et al.
2015, Mazumdar et al. 2018). All of the ribosomal markers (Figs 4-9) robustly identify the H. privus isolates as comprising a lineage that is distinct among all other Haematococcus isolates including representatives of the four currently-recognized species, H. alpinus, H. lacustris, H. rubens, and H. rubicundus. Of particular note are the results from 18S rRNA data analysis. As noted in the Introduction, the 18S rRNA data have shown little variation among isolates of Haematococcus (Buchheim et al. 2013, Klochkova et al. 2013, Pegg et al. 2015, Kim et al. 2021) and were largely abandoned by us in favor of collecting data from the ribosomal spacers. The results presented here still indicate that the 18S rRNA gene exhibits little variation among all Haematococcus isolates plus E. carotinosa (overall mean p-distance = 0.002). Thus, it is noteworthy that phylogenetic analysis of 18S rRNA provides robust support (albeit from just two informative sites) only for the H. privus alliance (Fig. 4). Our results also confirm that E. carotinosa is the closest ally of Haematococcus, but deserves to retain its taxonomic status as part of a distinct genus. In addition to the five lineages currently comprised of named Haematococcus species (H. alpinus, H. lacustris, H. privus [sp. nov.], H. rubens, and H. rubicundus), our data also affirm that Haematococcus sp. (SAG 34-1f) is distinct from the named lineages and is likely destined to be described as a new species. Another unnamed strain (SAG 44.96) is phenetically distinct from other lineages, but ITS-1 (Fig. 7) and ITS-2 (Fig. 9) data indicate a possible sister status with H. alpinus.

While the ribosomal data identified H. privus as distinct, the rbcL data placed the composite taxon representing H. privus as an ally in the H. rubicundus clade (Fig. 6). Unfortunately, this result must be regarded with some caution since it is based on a composite sequence that is also relatively short (see below). On the other hand, the 26S rRNA data (Fig. 5), the ITS-1 data (Fig. 7), and the ITS-2 data (Fig. 9) identify H. privus and H. rubicundus as sister taxa. While support for this alliance of H. privus and H. rubicundus is weak, none of the ribosomal analyses offers a robust assessment identifying any other lineage as the sister group to H. privus. Thus, the rbcL tree may not be in conflict with the ribosomal data.
rbcL anomalies
Our results with rbcL require special mention in that our amplification efforts frequently failed to produce a workable product despite using a variety of primers designed for chlorophycean flagellates in general (Nozaki et al. 1995) or specifically for Haematococcus (Kim et al. 2015). For example, to date, we have been unable to produce an rbcL amplification product for SAG 34-1f (Haematococcus sp.) or SAG 44.96 (Haematococcus sp.). Although these failures may be a consequence of our somewhat limited experience with rbcL, two lines of evidence indicate that the chloroplast genome of Haematococcus is unusual. Genomic studies recently identified Haematococcus as possessing what may be the largest chloroplast genome on record at more than a megabase (Bauman et al. 2018, Smith 2018). In addition, our experience with rbcL amplification suggests that the chloroplast genomes of E. carotinosa and H. privus likely bear some interesting features given that the PCR product size was anomalous (see Materials and Methods).
Furthermore, the nuclear ribosomal data identify SAG 34-1f and SAG 44.96 as comprising distinct branches (Allewaert et al. 2015, present investigation). Resolving the problems associated with obtaining homologous rbcL reads is certainly needed to help with the possible phylogenetic conflict between plastid and nuclear ribosomal data. Moreover, it will likely be necessary to sample many more markers (nuclear, plastid, and mitochondrial) to address the potential conflict between datasets. Obtaining chloroplast genome data from other lineages has huge potential to aid in understanding the origins of the large genome in H. lacustris (and perhaps other lineages) and likely will provide additional clarity on phylogeny of the genus.

Allewaert et al. (2017) found considerable variability in carotenogenesis among numerous strains of H. lacustris and H. rubicundus. The data we presented here are based on a much smaller set of strains, and our induction protocol (starting with exponential stage motile cells rather than stationary stage palmella; high light and reduced phosphate rather than just reduced phosphate) and analysis protocol (HPLC rather than spectrophotometry) differed from that of Allewaert et al. (2017). Allewaert et al. (2017) noted that astaxanthin production could be relatively high in H. rubicundus (e.g., CZD1_08), but higher rates for astaxanthin productivity were reported in strains of H. lacustris (e.g., BE02_09). The remaining strains in our analysis did not show unambiguous differences in total carotenoid and astaxanthin yields (Fig. 10B & C). On the other hand, cell counts indicate that the strains we studied exhibited different rates of cell division under our induction conditions (Fig. 10A). Again, H. rubicundus (SAG 34-1c) showed the highest average growth rate of all strains in our analysis. This result appears to be consistent with observations by Allewaert et al. (2017), where more strains of H. rubicundus than H. lacustris showed relatively high growth rates. Researchers focused on optimizing astaxanthin production for commercialization generally must first enhance growth of Haematococcus and then alter conditions that favor carotenogenesis (e.g., Rizzo et al. 2022). Thus, a strain that has both high growth rates and high astaxanthin productivity is desirable. Despite high values for variance, our carotenoid yields suggest that H. rubicundus (SAG 34-1c) is worthy of additional study as a possible source for astaxanthin and other valuable pigments (Fig. 10B & C).

Biogeography
The scope of sampling for Haematococcus diversity remains rather narrow. For example, Africa and Asia remain largely unsampled with respect to available cultures. We know from the work of Pocock (1960) that southern Africa had a myriad of collecting sites for H. lacustris. Unfortunately, we are not aware of any extant material from the Pocock surveys. Europe, North America, and South America have the most sampling records at the moment. The limited sampling that has been done indicates that H. lacustris and H. rubicundus are distributed on a global scale. Of those two, H. lacustris is the most frequently sampled lineage and our data indicate that it also exhibits the greatest amount of molecular diversity among the recognized species where taxon sampling includes at least six lineages. Maximum within-group p-distance for ITS-1 data was 0.043, 0.000, and 0.011 for H. lacustris, H. rubicundus, and H. privus, respectively. Maximum within-group p-distance for ITS-2 data was 0.026, 0.005, and 0.010 for H. lacustris, H. rubicundus, and H. privus, respectively. In addition, the data from both internal transcribed spacers (Figs 7 & 9) indicate that a number of White Sea polar isolates (Chekanov et al. 2020) form a sub-clade within H. lacustris. Although the support for the White Sea clade is weakest with the ITS-2 data (Fig. 9), the ITS-1 data resolve this alliance with robust support (Fig. 7). This lineage may be worthy of elevation to formal status as a variety of H. lacustris.
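The within-group p-distance comparisons reported above can be sketched as follows; uncorrected p-distance is simply the proportion of differing sites between two aligned sequences. The short sequences below are hypothetical stand-ins for the study's ITS alignments.

```python
# Maximum within-group uncorrected p-distance, as reported for the
# ITS-1 and ITS-2 alignments. Toy aligned sequences; gaps are ignored.
from itertools import combinations

def p_distance(seq1: str, seq2: str) -> float:
    """Proportion of differing sites, skipping alignment gaps."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    return sum(a != b for a, b in pairs) / len(pairs) if pairs else 0.0

def max_within_group(aligned: list[str]) -> float:
    """Largest pairwise p-distance among members of one lineage."""
    return max((p_distance(a, b) for a, b in combinations(aligned, 2)),
               default=0.0)

group = ["ACGTTGCA-CGT", "ACGTTGCAACGT", "ACGATGCAACGT"]  # hypothetical
print(f"{max_within_group(group):.3f}")  # ~0.091 for these toy data
```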
The principal finding from this investigation is that a new species of Haematococcus has been discovered and described. The new species, H. privus, is currently known only from Oklahoma and Wisconsin (USA). This brings to five the number of recognized species of Haematococcus that have been characterized at the molecular level. These new observations regarding Haematococcus diversity will aid in the interpretation of any differences in carotenoid production and yield that may have a phylogenetic basis.

Development
Reproduction in all strains of H. privus was characterized by asexual means only. When routinely (every 3 weeks) sub-cultured into fresh medium, the motile stage remains the dominant form. As the culture ages, motility is lost and cells transition to green palmella and eventually red akinetes. No sexual reproductive stages (i.e., microzooids sensu Hazen 1899) were ever detected. A number of early researchers reported gametogenesis, syngamy, zygote formation, and zygote germination in Haematococcus (Peebles 1909, Schulze 1927). Droop (1956) concluded that sexual reproduction in Haematococcus lacustris is heterothallic, which may account for the rarity of sexuality in the laboratory. Nonetheless, Droop (1956) and other researchers concluded that Haematococcus is rarely if ever sexual (Hazen 1899, Herrick 1899, Elliott 1934, Pocock 1960). Most recently, Triki et al. (1997) and Bai et al. (2016) reported gametogenesis in Haematococcus but neither found evidence of syngamy. It seems likely that Haematococcus has retained at least some of the genetic machinery for gametogenesis, but the resistant akinete stage removes or reduces the need to produce zygotes as a means to survive extreme conditions. If, as the evidence suggests, Haematococcus reproduction is largely or exclusively asexual, then one may ask whether the Haematococcus lineage is affected by Muller's ratchet (Muller 1932). A time-calibrated phylogeny indicated that the lineage that includes Haematococcus and the lineage that contains the sister genus, Rusalka (see Fig. 4), diverged no more recently than 45 million years ago (Munakata et al. 2016). Thus, it seems perfectly legitimate to question why this relatively ancient lineage is resistant to Muller's ratchet.

The issue of sexuality in Haematococcus is obscured, though, as a consequence of several studies that reported doubling (or more) of DNA content during what is ostensibly vegetative development. Lee and Ding (1994) reported that cells of H. lacustris UTEX 16 underwent fusion at a point in the cell cycle where the zoospores ceased dividing and that this corresponded to a doubling of DNA content in most cells. Since no micrographs illustrating cell fusion were presented, it appears that cell fusion was inferred from the increase in DNA content. This conclusion is further supported by a passage in the Results section "…Fusion of cells (i.e., doubling in DNA content) was observed between 65 and 113 h of incubation…" (Lee and Ding 1994). The only detailed report of motile cell (macrozoid) fusion or syngamy (Peebles 1909) was rejected by most other investigators, who noted that Haematococcus is prone to produce motile cells that are the product of incomplete cytokinesis and that fusion was only associated with gametes or microzooids (Pocock 1960). Even if this phenomenon is unlikely to be the product of fusion or syngamy, the DNA doubling requires explanation. Reinicke et al. (2018) studied development in H. lacustris and concluded that this alga exhibits a hybrid form of cell cycle that is intermediate between the Chlamydomonas type and the Scenedesmus type. Of particular note is the observation that the H. lacustris cell cycle is characterized by a transient polyploid phase where genome duplication may be as much as or even more than 16C (Reinicke et al. 2018). Following the polyploid phase, the cells of H. lacustris proceeded to mitosis and a transient polynuclear phase in the palmella or aplanospore stage and finally cytokinetic production of individual motile cells (Reinicke et al. 2018). Reinicke et al. (2018) suggested that the polyploid phase may be a large factor in genome expansion in Haematococcus. Polyploidy has also been proposed as a mechanism that organisms might exploit to escape Muller's ratchet through gene conversion (Maciver 2016). The most famous case of organisms escaping Muller's ratchet, the Bdelloid rotifers, may also be using gene conversion due to polyploidy to eliminate deleterious mutations (Flot et al. 2013). Thus, polyploidy in Haematococcus may explain how it escapes Muller's ratchet while simultaneously offering an explanation for the relatively high level of intragenomic variation in the ITS-2 ribosomal spacer in numerous lineages of Haematococcus (Alanagreh et al. 2017).

overlapping with H. lacustris, H. rubens, and H. rubicundus. On average, protoplasts of motile cells had a greater length : width ratio and greater akinete diameter than H. lacustris, H. rubens, and H. rubicundus. The protoplast apex was more often acuminate, forming a plasma papillum. No sexual stages were observed.
Etymology. privus (Latin for rare, uncommon, and distinctive) for its limited distribution.
Additional material. Strains HP137 and HP138.
Distribution. Specimens were found in the United States.

Fig. 2. Transmission electron microscope images for Haematococcus privus. (A) Flagellate stage with pyrenoid in chloroplast. (B) Flagellate stage with pyrenoid bearing thylakoids traversing the starch plate through multiple sutures and entering the pyrenoid matrix. (C) Flagellate stage with pyrenoid bearing thylakoids traversing the starch plate through a single suture and entering the pyrenoid matrix. V, vacuole. (D) Flagellate stage illustrating cell boundary with cell wall (CW), cytoplasmic strand (CS), and prominent vacuole (V). (E) Flagellate stage showing entire cell with multiple vacuoles (V) and nucleus (N). (F) Flagellate stage illustrating cell boundary with cell wall (CW), starch grains (SG), and osmiophilic granules of the stigma. Scale bars represent: A-D & F, 800 nm; E, 2 µm.

Fig. 3. Transmission electron microscope images for Haematococcus privus. (A) Flagellar apparatus with numerous mitochondria (Mt), flagellum (F), basal body, flagellar collar (FC), and cell wall (CW). Flagellar root microtubules are highlighted with arrowheads. (B) Longitudinal section through flagellum (F) projecting through the flagellar collar (FC) and cell wall (CW). Oblique section of a cytoplasmic strand (CS) is visible. BB, basal body. (C) Cross section through flagellum (F) and flagellar collar (FC). (D) Portion of cell wall (CW) is visible. Cytoplasmic microtubules are highlighted with arrows. Cross section through flagellum (F) and flagellar collar (FC). (E) Section showing both flagella, one in cross section and the other in oblique section. Flagella (F) and flagellar collars (FC) are present. Scale bars represent: A, 400 nm; B & E, 800 nm; C & D, 200 nm.

2017) to test for normality. Since a few sets of data failed the Shapiro-Wilk test, all comparisons were completed using non-parametric tests using online tools for the Wilcoxon Signed-Ranks test (Wilcoxon Signed-Ranks test calculator, Statistics Kingdom 2017) and the Mann-Whitney U test (Mann-Whitney U test calculator, Statistics Kingdom 2017). Parametric statistical analyses (single factor analysis of variance [ANOVA]) were conducted using the tools in Excel. Standard error of the mean was calculated manually in Microsoft Excel for 365 (v.2210 Build 16.0.15726.20188; Microsoft Corp., Redmond, WA, USA). All graphs were generated in Excel. Alpha was set to 0.05 for all relevant analyses.
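The decision logic just described (normality screening followed by non-parametric comparisons) can be sketched with scipy standing in for the Statistics Kingdom calculators and Excel tools named above; all measurement values are hypothetical.

```python
# Sketch of the reported workflow: Shapiro-Wilk normality check, then a
# non-parametric comparison when normality fails. Values are hypothetical.
from scipy import stats

ALPHA = 0.05
lengths_a = [21.3, 19.8, 22.5, 20.1, 21.9, 20.6]   # hypothetical strain A
lengths_b = [18.2, 19.1, 17.6, 18.9, 18.4, 19.5]   # hypothetical strain B

normal = all(stats.shapiro(x).pvalue > ALPHA for x in (lengths_a, lengths_b))

if normal:
    stat, p = stats.ttest_ind(lengths_a, lengths_b)       # parametric route
else:
    # Independent samples use Mann-Whitney U; paired measurements would
    # use stats.wilcoxon (the Wilcoxon signed-ranks test) instead.
    stat, p = stats.mannwhitneyu(lengths_a, lengths_b)

print(f"statistic = {stat:.3f}, p = {p:.4f} (alpha = {ALPHA})")
```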
2023-03-30T15:03:25.626Z
2023-03-15T00:00:00.000
{ "year": 2023, "sha1": "c7c0327a3b46a1c55d8d22c210d69584c04c0fef", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4490/algae.2023.38.3.9", "oa_status": "CLOSED", "pdf_src": "ScienceParsePlus", "pdf_hash": "1f6fe1aeb72f47b99747d399a8ee3281c2925a0e", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [] }
252757266
pes2o/s2orc
v3-fos-license
Innovating tuberculosis prevention to achieve universal health coverage in the Philippines

Summary
To contribute to tuberculosis (TB) elimination, TB preventive treatment (TPT) should integrate innovative approaches including tele-contact investigation (TCI), mathematical modelling, and participatory governance. Aligning with the World Health Organisation's primary health care framework, supply is provided by the provincial health system, demand is cultivated by the community, while governance is represented by the governor, who oversees the health leadership structure, local policies, and allocation of resources. A healthy dynamic between these three components is required to achieve universal health coverage (UHC). Because of their potential to integrate health systems and engage communities, primary health care principles underpin an effective approach to TB prevention. First, the provincial health system should connect with the community through TCI to transform the status quo of passive service delivery. Second, community participation should strengthen the linkage between the health system and governance, which ensures that community action plans are aligned with provincial TPT targets. Third, governance should leverage mathematical modelling to allocate resources to those with greatest need. Central to this is a reliable TB information system that should validate a robust mathematical model to measure cost-effectiveness of the intervention. Collectively, this holistic approach to TB prevention could provide a proof-of-concept that investing in primary health care is the key to UHC.

Background and context
The Philippines accounts for 11% of global tuberculosis (TB) cases, with an estimated 591,000 new TB cases emerging every year. Based on the 2021 Global Tuberculosis Report, four Filipinos die of TB every hour. 1 Over the years, efforts have focused on TB treatment, testing, and diagnosis, but prevention has been neglected. From 2018 through 2020, TB preventive treatment (TPT) coverage declined from 11% to 2%. 2 When community transmission of COVID-19 was reported in March 2020, quarantine protocols were enforced. Health facilities redirected their efforts to outbreak response, resulting in limited services for nonemergency care, including TB case finding activities, causing a 20% reduction in TB case notification. 3 To recover from COVID-19's impact on TB, current efforts need to be optimised and new tools introduced to strengthen the country's TB elimination efforts, which include treating latent TB infection (LTBI). 4 Studies have demonstrated that TPT can prevent progression to active TB disease. 5 Thus, the national government expanded its TPT eligibility not only to childhood contacts who are under-five and people living with HIV, but also to all household contacts regardless of age. 6 The TB Innovations and Health Systems Strengthening Project, funded by USAID, partners with the Department of Health (DOH) to introduce integrated, innovative, and viable organisational systems and approaches to achieve national TB program targets, including TPT coverage. Prior to the COVID-19 pandemic, TPT was not routinely offered in facilities. The project therefore developed a Field Implementation Guide on contact investigation and TPT. It was later disseminated by the DOH through a policy document (i.e., Department Circular 2021-0512). However, its implementation on a nationwide scale appears to be slow, owing in part to the country's decentralised governance structure.
7 Local health systems have not yet been fully integrated to respond to UHC challenges, one of which is TPT coverage. 8 The Field Implementation Guide positions contact investigation as a strategy to identify household contacts of index TB cases that are eligible for TPT. 9 However, the COVID-19 pandemic revealed that contact tracing has been the weakest link of the government's pandemic response. 10,11 Community health workers were not adequately mobilised because of the strict lockdown measures that the national government enforced in 2020, which negatively impacted contact tracing of TB cases as well. 10 In addition to health facilities focusing their efforts on the COVID-19 response, there was also a notable decline in the number of clients going to hospitals due to travel restrictions and lack of sense of urgency to seek care. 11

TPT innovation
To respond to COVID-19-related challenges, tele-contact investigation (TCI) was piloted in a tertiary hospital in April 2020 to find people with LTBI amid the pandemic. It comprised a modified strategy of contact tracing in which a health facility staff member proactively screened household members of bacteriologically confirmed, drug-susceptible index TB cases via mobile phone, instead of through conventional home visits. Using a standardised TCI script, the facility's assigned personnel obtained consent from the index case, initiated tele-consultation of all household members by enquiring about the presence of TB symptoms, elicited the presence of TB risk factors (HIV, diabetes, smoking, malnourished, immunosuppressive condition), and discussed the risks of TB exposure and benefits of TPT. Screening of contacts was then performed to rule out active TB disease. This included symptom screening for under-five contacts, with chest X-ray for other household members. Staff performed tuberculin skin testing for those without TB risk factors, and bacteriologic diagnostics for those presumptive of TB. The team then coordinated referral to a facility physician for those requiring further clinical evaluation and offered eligible clients TPT with a specific treatment regimen based on patient preference: daily isoniazid for 6 months (6H), daily isoniazid-rifampicin for 3 months (3HR), or weekly isoniazid-rifapentine for 3 months (3HP). 6 Of the 333 household contacts that completed screening between April 2020 and July 2022, 210 (63%) were eligible for TPT and 89% of those initiated TPT, demonstrating potential for replication at the provincial level.
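To make the screening flow concrete, here is a minimal sketch of the TCI triage logic. The field names and branching rules are illustrative simplifications introduced for this sketch; the authoritative screening and eligibility criteria are those of the NTP guidelines and the Field Implementation Guide.

```python
# Minimal sketch of the tele-contact investigation (TCI) triage flow.
# Field names and rules are illustrative, not the official algorithm.
from dataclasses import dataclass, field

RISK_FACTORS = {"HIV", "diabetes", "smoking", "malnourished", "immunosuppressed"}

@dataclass
class HouseholdContact:
    contact_id: str
    age_years: int
    tb_symptoms: bool                       # elicited during tele-consultation
    risk_factors: set = field(default_factory=set)

def next_step(contact: HouseholdContact) -> str:
    """Suggest the next step of the TB prevention cascade for one contact."""
    if contact.tb_symptoms:
        # Presumptive TB: rule out active disease before any TPT offer.
        return "bacteriologic diagnostics + physician referral"
    if contact.age_years < 5:
        # Under-five contacts were screened by symptoms alone.
        return "if symptom screen negative, offer TPT (6H, 3HR, or 3HP)"
    if contact.risk_factors & RISK_FACTORS:
        return "chest X-ray + physician referral"
    # No symptoms and no risk factors: tuberculin skin testing first.
    return "chest X-ray + tuberculin skin test; if active TB excluded, offer TPT"

print(next_step(HouseholdContact("C-001", age_years=34, tb_symptoms=False)))
```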
Provider-initiated contact tracing facilitated the identification of both latent and active TB in the community as an effective tool in the armamentarium to control infectious diseases. 12 Using contact investigation as an entry point for TPT implementation, 9 TCI could provide critical household-level data on different stages of TB disease progression, including subclinical TB. 13 Interestingly, TCI's added feature of extracting information on TB risk factors (e.g., HIV, diabetes mellitus, and malnutrition) enhances the opportunity for its use as an integrated ("universal") screening tool. However, the approach to scaling up TCI to increase TPT coverage should be aligned with the Philippine Health Sector Reform (Republic Act 11223), otherwise known as the UHC Law. 14 With TB positioned as a tracer program for UHC implementation in the country, TCI should be used as a universal screening tool to integrate health services within provinces, with resulting TPT indicators providing a snapshot of the healthcare access of a given locality. Therefore, the TB prevention cascade should be prioritised because of its potential for community impact, integration with health services, and government response to improve TPT coverage. 15,16 As such, reforms aimed at improving health system performance should translate to increased TPT uptake, especially in provinces with high TB burden.

The prolonged latency period of TB is epidemiologically significant, as evidenced by the global LTBI burden, with nearly a quarter of the world population infected. 17 Mathematical modelling data in the Asia Pacific suggest that LTBI is one of the key drivers of TB transmission. Therefore, TPT is a crucial tool to help empty the LTBI reservoir, which will greatly contribute to TB elimination goals. 18 Additionally, with the current lack of an effective TB vaccine, TPT is the primary modality available to prevent progression to active TB disease. Modelling studies in South-East Asia show that increasing TPT coverage could reduce annual TB incidence and mortality rates. 19 Interestingly, the timely publication of The Global Plan to End TB 2023−2030 has provided high-TB burden countries like the Philippines a clear roadmap and guide for high-level decision-making on TB elimination strategies with cost implications. Considering the current political climate of the country, a practical approach in subnational settings is to start with the least resource-intensive intervention to obtain political buy-in. Since the prevention cascade constitutes less than 5% of the total resources needed to operationalise the Global TB Plan, 16 TPT strategies should be creatively packaged to sensitise provincial stakeholders towards contributing to a TB-Free Philippines.

Global TB targets (including TPT coverage) could be achieved if TB elimination strategies are "provided within the context of progress towards UHC." 1 To improve TPT uptake in the community, individual-based health services like TCI should be seamlessly integrated into the province's primary care provider network. 14 Utilising the World Bank's "accountability triangle," TPT provision could be likened to a service delivery chain with three sets of actors, namely, household contacts of TB index cases as the clients, the provincial health system as the provider, and elected government officials as the policymaker. 21 Although contact investigation has historically been one of the strategies of the National TB Control Program (NTP), 6 accountability measures were not established, causing the fragmentation of services. This resulted in a lack of routine TPT services in facilities. To mitigate its effects on TPT coverage, the provincial health system's direct response should be to implement TCI within its network, and to hold government leaders accountable to the TB-affected communities through improved information systems. 15 Involvement of communities as partners in this paradigm underscores the importance of the primary healthcare approach towards UHC. 22 Juxtaposing this approach with the World Health Organisation's (WHO) primary healthcare framework, an innovative approach to TPT should combine TCI with community participation and mathematical modelling (see Figure 1).
23 While the DOH has consistently advocated for people-centred care to support UHC, the provincial health system continues to use tiered governance and treats communities as recipients instead of partners. This three-pronged innovative approach to TPT will support a paradigm shift wherein communities are empowered to decide about their own health, and participatory governance becomes instrumental in establishing a platform for community participation. 24 The approach would allow community members to demand responsive and culturally sensitive programs, and in the process, create a dynamic relationship between the service provider, community, and local governance. Instead of a hierarchical governance approach, communities should demand high-quality health services that are inclusive and transformative. 15 Partnership with a local civil society organisation (CSO) and linkage with the provincial health office should be established to ensure community voices are translated into community action plans. Purposive involvement of communities in government-led initiatives has sustained initial gains in health. 24,25 Locally driven solutions enforced by leadership and policies have been demonstrated to increase health utilisation indicators. 25,26 Thus, participatory governance should ensure community involvement across provinces to improve access to TB preventive services.

Based on the Western Pacific Regional Framework to End TB in 2021−2030, lack of accountability and weak coordination mechanisms are two of the overarching challenges in TB control and elimination (the other two being inadequate financing and lack of planning in emergency situations). 27 With the landmark passage of the UHC law in 2019, the provincial government is required to be more accountable for the care of their constituents and management of their health systems. 8,15 To manifest UHC reform, the provincial governor should own the problem by recognising the interconnectedness of the causes and effects of TB, contact relevant stakeholders to address social determinants of health, collaborate with TB-affected communities, and co-create solutions with the people to improve access to TB diagnosis, care, and prevention. 25−28 Guided by the WHO Framework for TB Programming (2021), elected government leaders should ensure that the provincial health system targets people not accessing the health system and guarantees universal access especially among the marginalised who are often overlooked in national health programs. 23 Considering the complex nature of health systems, government leaders cannot do this alone and would need the active participation of communities. 21 New relationships between provincial leaders and CSOs should be built, such that linkage between the provincial health system and affected communities is strengthened, and platforms for trust-building dialogue between the governor and communities are established. 16,27 The need for human resource capacity in data collection and management, for example, should be addressed by CSO involvement.

Figure 1. Integrating tele-contact investigation, community participation, and mathematical modelling to achieve universal health coverage through the primary health care approach. Using TB as a tracer program for UHC implementation, TPT coverage should be employed as a proxy indicator for UHC. 23
TB indicators at the community level should be crosschecked with the barriers to early TB diagnosis, whereby community organisers contextualise cultural, socio-economic, and health system factors and translate them to quantitative data. 15,28 Information generated through community-led monitoring activities should be integrated into mathematical models for TB programming. 20,29 In this way, the provincial health system would be able to manifest its patient-centred care approach in TPT implementation.

Communities can hold policymakers accountable for public services through community voice, and the most powerful means is through better information. 20,24 However, establishing this relationship of accountability is complex because of problems in monitoring health services, which are both "discretionary and transaction-intensive." 20 Recording and reporting TB data has been a challenge because, in addition to keeping statistics, health personnel are required to provide immunisations, conduct screening and testing, monitor compliance, make decisions about diagnosis and treatment of specific patients, etc. Nevertheless, recent advances in technology could potentially tackle the "transaction-intensive" challenges through a mobile digital application, which could help health personnel in automating the processes of patient data encoding, linking with index cases, symptom screening of household contacts, tagging for the presence of risk factors, and intuitively identifying the next step of the TB prevention cascade, all in one go. It should simplify data segregation, collation, and evaluation of TPT indicators, which would otherwise take weeks to complete if performed manually. The uptake of this tool should be extended to partner CSOs who could provide context to TB barriers, streamline the feedback mechanism, and create a platform for community participation to unfold. However, alleviating the "discretionary-intensive" space is more complex; nonetheless, intelligent solutions such as computer-aided TB detection in X-rays, connectivity of molecular diagnostic tools, and video-observed treatment monitoring could help by decreasing turnaround time for optimal diagnosis and management. As the Philippines is a hub for digital health innovation, 30 these new technologies should accelerate progress toward the country's End TB goals. Yet, greater government accountability should be explored for optimised monitoring of TPT implementation in subnational settings. By strengthening health information systems using digital technologies, relevant TPT indicators such as contact investigation coverage and TPT coverage rate could readily reflect health system performance.
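As a sketch of how such a line list could surface the indicators just mentioned, the snippet below computes contact investigation coverage and a TPT coverage rate from hypothetical records; the record layout and indicator definitions are simplified assumptions rather than official NTP definitions.

```python
# Toy line list for computing TPT cascade indicators. Record fields and
# indicator definitions are simplified assumptions for illustration.
index_cases = [
    {"id": "TB-001", "contacts_identified": 4, "contacts_screened": 4,
     "tpt_eligible": 3, "tpt_initiated": 3},
    {"id": "TB-002", "contacts_identified": 5, "contacts_screened": 2,
     "tpt_eligible": 2, "tpt_initiated": 1},
]

def ratio(numerator: str, denominator: str) -> float:
    num = sum(case[numerator] for case in index_cases)
    den = sum(case[denominator] for case in index_cases)
    return num / den if den else 0.0

print(f"Contact investigation coverage: {ratio('contacts_screened', 'contacts_identified'):.0%}")
print(f"TPT coverage rate: {ratio('tpt_initiated', 'tpt_eligible'):.0%}")
```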
While TPT coverage is the proposed indicator for UHC progress in the province, use of mathematical modelling could provide a more comprehensive analysis of how several variables affect the entire health system. 20 In 2016, Villasin et al 31 analysed TB transmission dynamics in the Philippines based on an earlier model developed by Trauer et al 18 for high-TB burden settings in the Asia Pacific. Using localised data, parameters such as partial immunity (either acquired from vaccination or developed from previous TB disease), treatment success rate, and treatment duration were identified to significantly affect projected TB incidence rates from 2013 to 2023. Another modelling study in 2018 utilised an optimal control framework to assess NTP strategies with minimum intervention implementation costs for 2016−2035. Results demonstrated that while distancing is the most effective single control strategy and case holding the least, latent case finding is more effective than active case finding, though the two are complementary. 32 Moreover, the combination of strategies, such as distancing, active case finding, and latent case finding, instead of case holding, provides the most optimal TB control with minimum implementation costs. 32 Therefore, TCI and TPT provision, together with the new tools for active case finding, should be advocated to provincial leaders using this evidence.

During the COVID-19 pandemic, mathematical modelling has been used by different countries worldwide, including the Philippines, to aid governments in their policy- and decision-making functions through analysis of viral reproduction rates and transmission patterns. 33 But the manner by which mathematical models are closely integrated in decision-making processes is not as established in low- and middle-income countries as in high-income nations. Due to the diversity of subnational contexts, mathematical models developed from national data should be validated by local data using a user-friendly tool to allow coordination between local administrators and high-level officials. 34 Based on experience from India, the use of COVID-19 simulators should be done in a collaborative manner early on, so that epidemiologists or data scientists who are involved in the analysis would correctly interpret complex parameters, establish rapport with provincial stakeholders, and provide follow-up training and guidance as needed. 35 To apply lessons learnt from COVID-19 on TB control in the Philippines, 36 a TB modelling resource should be established, taking into consideration the two Philippine modelling studies done by Villasin et al 31 and Soyoung et al 32 as well as the TIME Impact model done by Houben et al 36 to assist TB policy development. Consistent with the UHC Law, development of a collaborative network should involve the provincial health system, taking into account the optimisation of the decentralised governance structure through synergy of decision space, capacity, and accountability. 7 It is in the DOH's purview to maintain the health managerial standards of provincial leaders (e.g., strategic planning, priority setting, evidence-informed policy making) and their institutional capacities (e.g., multistakeholder processes, private sector engagement), but the Philippine Health Insurance Corporation (PhilHealth) should play a role in the accountability space through financing. 38 Considering that financing has the greatest influence on quality of care, the most effective use of performance-based incentives is when TCI is linked to the active and latent case finding activities of the province. Therefore, the use of PhilHealth's Konsulta package as a performance-based incentive for TCI and TPT provision should be explored. Quality measurement should be closely linked with accountability and action; otherwise, it can burden the health system. 15 The community should hold the governor accountable to reach TPT coverage, who in turn holds the provincial health system accountable for the quality of TCI and TPT provided, incentivising success while regulating failure. 15,37 In measuring complex systems that warrant policy change, 39 mathematical modelling should be employed to better understand the role of several parameters on the projected TB incidence. Because of its ability to illustrate the cost-effectiveness of proposed interventions, modelling should be used as a tool for political support at different levels of governance.
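To illustrate the kind of projection such a modelling resource could produce, the toy susceptible-latent-infectious model below adds an explicit TPT rate that drains the latent pool. All parameters and starting values are hypothetical; this is a sketch in the spirit of the cited studies, not a reimplementation of them.

```python
# Toy S-L-I model: raising the effective TPT rate empties the latent
# reservoir and lowers projected incidence. All numbers are hypothetical.
from scipy.integrate import solve_ivp

BETA = 2e-5          # transmission coefficient (per person-year, hypothetical)
PROGRESSION = 0.03   # latent-to-active progression rate (per year, hypothetical)
RECOVERY = 0.7       # removal rate of active cases (per year, hypothetical)

def sli(t, y, tpt_rate):
    s, l, i = y
    new_infections = BETA * s * i
    return [-new_infections,
            new_infections - (PROGRESSION + tpt_rate) * l,
            PROGRESSION * l - RECOVERY * i]

def incidence_at_10y(tpt_rate: float) -> float:
    """Active cases arising per year at the 10-year horizon (toy population)."""
    sol = solve_ivp(sli, (0, 10), [70_000, 25_000, 300],
                    args=(tpt_rate,), dense_output=True)
    _, latent, _ = sol.sol(10.0)
    return PROGRESSION * latent

for rate in (0.00, 0.05, 0.15):     # increasing effective TPT coverage
    print(f"TPT rate {rate:.2f}/yr -> projected incidence ~{incidence_at_10y(rate):.0f}/yr")
```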
Additionally, modelling data on the cost of inaction linked with disability-adjusted life-years are additional tools to engage provincial governors, moving them from being a passive supporter of TCI and TPT implementation to becoming a TB champion eventually. 20

Future directions
Among the four countries that were modelled in the Global TB Plan 2023-2030, the situation in Indonesia closely resembles that of the Philippines in the context of a fragmented private sector. Improvement of TPT coverage should create an institutional memory for the health system to respond to emerging challenges. Key strategies to curtail the spread of TB should include case finding (with emphasis on early diagnosis of TB), and promotion of a public-private mix to enhance mandatory notification of TB coupled with provision of TPT to eligible household contacts. 20 While TB vaccine development is ongoing, research and development should focus on subclinical TB, which has not been considered in the previous mathematical models. 17 Localised data, subnational contexts, and key health system changes should challenge existing TB models used for national TB programming. Further refinement of TB transmission dynamics should provide evidence on additional strategies to accelerate the country's race to end TB.

Tackling TB challenges in the country requires a whole-of-government and whole-of-society approach. The experience during COVID-19 has caused major setbacks in TB; yet, it created opportunities for governments to strive for better quality health systems. 40 The political will of key government leaders to embody the primary health care approach provides the opportunity to manifest universal actions on governing for quality, redesigning service delivery, transforming the health workforce, and igniting people's demand. 15 It is through these channels that the local government units, collectively, would be able to achieve universal health coverage, one province at a time.

Contributors
J.S.C. wrote the drafts of the manuscript, developed the conceptual framework (Figure 1), reviewed local and regional data, conducted extensive review of literature, and edited the final version. K.E.P. structured the outline, reviewed and provided detailed comments and text inputs on various drafts of the manuscript. S.S.T. provided technical guidance during the pilot phase of tele-contact investigation, reviewed local and national data, and contributed insight on the use of mathematical modelling. L.L.S. extensively reviewed the manuscript, analysed local and regional data, and validated proposed innovations based on national TB programming.

Declaration of interests
J.S.C., K.E.P., S.S.T., and L.L.S. are employed by FHI 360 (Family Health International). J.S.C. is a founding member of the Philippine Society of Public Health Physicians. We declare no other competing interests.
2022-10-08T15:01:20.169Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "75e3c927b3813a2b4a049cbaaaa6b778f088356c", "oa_license": "CCBYNCND", "oa_url": "http://www.thelancet.com/article/S2666606522002243/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "627821203cd302271a07dd9a4f4e170bf9111b7f", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
205906748
pes2o/s2orc
v3-fos-license
A research agenda for curing chronic hepatitis B virus infection

a curative therapy can be developed to supplement the protective effect of the vaccine. (3,5,6) This document introduces the challenge and presents a roadmap for the discovery of a cure. Development of a cure for any viral infection requires a sufficiently deep understanding of the virus life cycle and its interaction with the host to identify vulnerabilities that can be exploited to eradicate the cccDNA from infected cells. The key word here is "eradicate." In general, the targetable steps in the HBV life cycle include entry, uncoating, delivery of the viral DNA genome to the nucleus, establishment and maintenance of the covalently closed circular DNA (cccDNA) transcriptional template in the nucleus of the cell, followed by transcription, translation, replication, viral and subviral particle assembly, transport and release, and the recycling of cytoplasmic capsid particles to the nucleus to amplify the intracellular pool of cccDNA. Eradication also requires engagement of the host immune response to kill infected cells, to prevent viral spread from any residual infected cells and to counteract any evasive strategies deployed by the virus to defeat the host response. Despite its discovery 50 years ago, most steps in the HBV life cycle and the nature of its interaction with its host are only partially understood because the experimental systems required for such experiments have not been available. (7) Furthermore, despite the ability of currently available direct-acting antiviral drugs to suppress HBV DNA replication, they are rarely curative because they do not prevent the establishment or maintenance of the long-lived HBV cccDNA transcriptional template: the stable nuclear form of the viral genome, which must be eliminated or permanently silenced to achieve a durable HBV cure. (8) Luckily, experimental systems that permit detailed analysis of cccDNA biogenesis, homeostasis, and decay, and all other steps in the viral life cycle were recently developed. (9) Thus, we are now on the threshold of a period of exploration that, if focused on eliminating the cccDNA transcriptional template of the virus, could lead to a cure of chronic HBV infection, once and for all. We encourage the scientific community to focus on research leading to discovery of a cure for chronic HBV infection based on these principles, as summarized here and highlighted in Fig. 1: The surest way to cure HBV is to eliminate or permanently silence its cccDNA. The most important impediment to this achievement is our limited understanding of the fundamental molecular mechanisms that control cccDNA biogenesis, homeostasis, and decay. Understanding these mysteries is now within reach, thanks to recent technological advances that enable definition of these mechanisms. Vulnerabilities in the cccDNA "life cycle" that are discovered in the course of these studies can be exploited to develop small molecule and other molecular strategies to eradicate or permanently silence the cccDNA. Because these studies will explore the unknown, the outcome, like all great adventures, cannot be predicted. Thus, we suggest that in addition to approaches that directly target cccDNA, independent approaches that target other vulnerabilities in the viral life cycle and either indirectly repress HBV cccDNA or safely establish a curative antiviral immune response be pursued in parallel.
Such projects could include genetic approaches to cccDNA mutagenesis, epigenetic modification, or other strategies that can suppress cccDNA transcription (e.g., HBV-targeted antisense and small interfering RNA, HBV X protein inhibition, etc.) or to prevent its recycling (e.g., capsid inhibitors). Of course, a vigorous basic and translational research effort to better define the nature of the immune response to HBV in chronically infected patients is also essential. Ideally, the antibody response would be examined at the single B-cell level to reveal the extent to which neutralizing antibodies are produced by chronically infected patients and to determine whether patients who are "cured" of HBV by direct-acting antivirals will require active immunization to prevent intrahepatic viral spread from any infected cells that remain after treatment and to protect them from future exposure. Similarly, the functional and phenotypic characteristics of the HBV-specific T-cell response must be studied before, during, and after curative treatment to determine the extent to which it contributes to the durability or the failure of a given therapeutic strategy. In addition, a wide variety of immune-based strategies that can induce T cell-mediated elimination of HBV-infected cells should be explored, including therapeutic immunization, targeted delivery of antiviral effector molecules to infected cells, blockade of T cell inhibitory checkpoint pathways, and targeted delivery of HBV-specific effector T cells to the HBV-infected liver by T cell receptor-based or chimeric antigen receptor (CAR)-T cell technology. These studies should be iterative, where results from the clinical work guide the laboratory work and vice versa.

FIG. 1. HBV life cycle, emphasizing opportunities to suppress viral cccDNA and restore immune control. Host and viral functions that could be exploited for therapeutic purposes are illustrated, beginning with binding of the virus to the sodium-taurocholate cotransporting polypeptide receptor on hepatocytes (1), followed by translocation of the nucleocapsid to the nucleus and formation of cccDNA (2) and synthesis steps (3,4), leading to either egress (6) of newly formed virions or recycling of cccDNA-containing nucleocapsids to the nucleus (7). Opportunities for cccDNA suppression and immune control are categorized as either acting upon the viral gene products (white boxed text) or acting upon host innate and adaptive immune systems (green boxed text), noting that in many cases these different pathways overlap. Humoral responses are also indicated (Abs). Orange, red, and brown circles indicate small, medium, and large hepatitis B surface proteins, respectively; yellow triangle (core protein), blue circle (pol), "X" (x protein), red semicircles, cccDNA and black line, HBV 3.2 kb and subgenomic HBV RNA; 22-nm in diameter spherical and rod-shaped subviral envelope particles and infectious 42-nm virions are also illustrated. The examples of virus life cycle steps and immune modulators are representative and not comprehensive. Toll-like receptor 3 is shown because it is present in hepatocytes, but other toll-like receptors may also be exploited therapeutically. Abbreviations: Abs, antibodies; CTLA4, cytotoxic T lymphocyte antigen 4; IFNAR, type 1 interferon receptor; NTCP, sodium-taurocholate cotransporting polypeptide; PD1, programmed death 1; TLR3, Toll-like receptor 3.
In this way, the immunobiology, number of infected cells, and other clinical parameters of chronic hepatitis B as a function of medical intervention can be followed. We also note that HCC can be a consequence of chronic hepatitis B. Therefore, to comprehensively address the problems associated with chronic viral hepatitis B, an improved understanding of the molecular basis of HCC to guide early detection and treatment is vital. Clinical collaborative networks should also be reinforced and expanded to allow for evaluation of new strategies for the early detection of HCC and of new therapeutics for HCC and HBV. It is also important to note that any intervention that directly or indirectly activates the cytotoxic T-cell response to HBV could kill all the infected hepatocytes. This would be good if only a few hepatocytes in a given patient are infected and the functional hepatic reserve in that patient is robust. On the other hand, it could be fatal, inducing an acute-on-chronic liver disease event, if many hepatocytes are infected and hepatic reserve is tenuous. Thus, if the cytotoxic T-cell response is activated by any therapeutic intervention, it must be in a "Goldilocks zone," where it kills just enough hepatocytes at just the right rate to clear the infection without either triggering acute hepatic insufficiency or worsening the underlying chronic liver disease. It is imperative, therefore, to do these studies if we hope to predict how infected patients will respond immunologically to curative therapy before treatment begins, keeping in mind that the physician's first responsibility is primum non nocere, "first, do no harm." A recent review article by Revill et al. specified broad goals for HBV research and has since led to the establishment of an international coalition of scientists, clinicians, and stakeholders committed to the elimination of HBV (International Coalition to Eliminate HBV; http://ice-hbv.org/). (10) Our intention is to support and build upon their effort by adding detail to create a roadmap for policy makers from government and other funding institutions and for planning long-term research. A cure for hepatitis B is also likely to greatly reduce morbidity and mortality associated with hepatitis delta virus infection, end-stage liver disease, and HCC, although it is appreciated that these clinical problems deserve a specific research agenda of their own. It is clearly important to explore multiple viral gene products and life cycle steps for intervention opportunities. To date, of all of the candidate approaches considered, elimination of HBV cccDNA is most likely to produce a durable cure of chronic HBV infection after a finite course of antiviral therapy. The extent to which this can be achieved with drugs, biologicals, genetic manipulations, immunomodulation, etc. is the major question to be answered. While transcriptional silencing of cccDNA may be easier to achieve than physical cccDNA elimination, it would probably require lifelong treatment to produce lifelong effects unless it triggers some unpredictable durable downstream effect like immune-mediated destruction or noncytolytic elimination of cccDNA from the infected cells. Thus, a vigorous, comprehensive, adequately funded research effort involving multiple, complementary approaches must be undertaken, with the results being shared in the public domain as quickly as possible.
Fortunately, experimental systems that permit detailed analysis of the cccDNA and other steps in the viral life cycle are now available to the scientific community to meet these challenges. In our opinion, a concerted discovery effort that is both encouraged and enabled by governmental and nongovernmental funding agencies can make a huge difference in the lives of hundreds of millions of people worldwide. Let us not let this chance to do so much good slip away.
Carry-free Addition in Resistive RAM Array: n-bit Addition in 22 Memory Cycles

The movement of data between processing and memory units, often referred to as the 'von Neumann bottleneck', is the main reason for the degraded performance of contemporary computing systems. In an effort to overcome this bottleneck, methods to 'compute' at the location of data are being pursued in many emerging memories, including Resistive RAM (ReRAM). Although many prior works have pursued addition in memory, the latency of n-bit addition has not been judiciously optimized, resulting in O(n) or at best O(log(n)). Computing with three states can enable carry-free addition and result in a latency which is independent of operand width (O(1)). In this work, we propose a method to perform carry-free addition completely in memory (a storage array, a processing array, and their peripheral circuitry). The proposed technique incurs a latency of 22 memory cycles, which outperforms other in-memory binary adders for n ≥ 32. This speed is achieved at the cost of increased peripheral hardware.

I. INTRODUCTION

The movement of data between processing and memory units is the major cause of the degraded performance of contemporary computing systems, often referred to as the 'von Neumann bottleneck' or 'memory wall'. 'Computation energy' is dominated by 'data movement energy' since the energy for memory access grows exponentially along the memory hierarchy (from cache to off-chip DRAM). There has been an ongoing effort (for 15-20 years) to combat the memory wall by bringing the processor and memory unit closer to each other. For example, 3D stacking of DRAM dies over a logic die (enabled by Through-Silicon-Via technology) was used to reduce the energy and latency of data movement between processor and memory, in what was called near-memory computing [1]. Going a step further, efforts are being made to move computing not just near memory, but to the memory itself, i.e., the memory array. Resistive RAMs (ReRAMs) are two-terminal devices (usually a Metal-Insulator-Metal structure) capable of storing data as resistance. When subjected to voltage stress, the device's resistance can be switched reversibly between a Low Resistance State (LRS) and a High Resistance State (HRS). The change of resistance is due to the formation or rupture of a conductive filament in the insulator, depending on the direction of the current flow through the structure. The word 'memristor' is also used by researchers to denote such a device, because it is essentially a resistor with memory. The words 'memristor' and 'Resistive RAM' are used interchangeably in this work. However, it must be noted that the word memristor can refer to a broader class of devices which have the capability to change their resistance in response to voltage/current stress (e.g., Phase Change Memory (PCM), Spin Transfer Torque-Magnetic RAM (STT-MRAM)). Although ReRAM (the memristor) was initially explored as a non-volatile memory technology, it was later discovered that certain Boolean logic operations (IMPLY, NOR, NAND [2]) can be implemented in the memory array. Boolean gates were implemented by modifying the structure of the memory array, by modifying the peripheral circuitry, or by a combination of these. Arithmetic operations like addition/subtraction were implemented as a sequence of Boolean NAND/NOR/IMPLY operations.
It was found that if arithmetic operations can be implemented in memory, the 'memory wall' could be overcome, since the costly data transfer (both energy- and latency-wise) between processor and memory units is eliminated. This paradigm shift in the way data is processed heralded a new era in computing: 'in-memory computing', 'processing-in-memory', or 'compute-in-memory'. A survey of research motivated by this paradigm can be found in [3], [4]. It must be noted that the term 'in-memory computing' may also refer to cognitive tasks like machine learning and pattern recognition performed in memory. In this work, our focus is 'in-memory arithmetic'. Although there has been a plethora of works on in-memory arithmetic, it is evident that the latency of such in-memory adders has not been carefully studied and optimized. As a result, they require hundreds of cycles to perform 32-bit addition in memory [5]. This exorbitant latency (O(n) for adding two n-bit numbers) can be attributed to the rippling of carry and to the weak logic primitives used (IMPLY/NOR). To overcome this latency hurdle, two paths were pursued: stronger logic primitives and parallel-prefix configurations. The majority logic primitive proved to be stronger than the NAND/NOR/IMPLY primitives, making in-memory addition fast [6]-[8]. To avoid rippling of carry, parallel-prefix adders were investigated. Parallel-prefix adders could reduce the latency of in-memory addition to O(log(n)), approximately 8·log2(n) and 4·log2(n) cycles, using OR/AND primitives [9] and MAJORITY/NOT [5], respectively. The conventional approach to addition, in both CMOS technology and in emerging non-volatile memories, is binary: operands are represented in binary format and processed in binary to get the sum in binary. In this work, we take a different approach to tackle the exorbitant latency of in-memory addition. It has long been known that computing with three states enables 'carry-free addition', in which two operands with a wordlength of n can be added in constant time, i.e., independent of n in O(1), whereas binary adders can perform the same only in O(log(n)) steps if reasonable hardware resources are used. In [10], the usefulness of multi-level cell memristive devices for ternary computing was shown for the first time. In [11], [12], carry-free adders were implemented in a hybrid manner: ReRAMs were used for storing ternary values, but the values were processed in conventional CMOS after analog-to-digital conversion (ADC). The resulting digits after CMOS processing had to go through digital-to-analog conversion (DAC) before being stored in ReRAMs. These ADC and DAC steps cost energy and latency, making carry-free addition less efficient. In this work, we propose a methodology to implement carry-free addition completely in non-volatile memory (a ReRAM array where ternary data are stored, a processing array, and the peripheral circuitry of the arrays). The rest of the paper is organized as follows. In Section II, we introduce the Stored-Transfer Number Representation (STNR), which lays the foundation for carry-free addition. The methodology to implement carry-free addition in memory is presented in Section III. Section IV elaborates the detailed circuit-level implementation of carry-free addition and the verification of the circuit's functionality by simulation. We compare the proposed addition with the state of the art in Section V, followed by the conclusion (Section VI).
II. CARRY-FREE ADDITION

The idea of redundant number representations goes back to the 1950s and 1960s [13], [14], when electronic circuit integration technology was not as strongly developed as today and one was forced to save transistors as much as possible for the realization of fast arithmetic circuits, e.g., adders. By using redundancy in one digit, either by introducing a -1 in addition to the binary values 1 and 0, called balanced ternary representation (digit d_i ∈ {-1, 0, 1}), or by introducing a 2 in addition to 0 and 1 in the Stored-Transfer Number Representation (digit d_i ∈ {0, 1, 2}), one could avoid time-consuming carry chains while adding two operands. The idea of using the ternary digits 0, 1, and 2 for carry-free addition was first published in 1959 by [13]. Later, a full adder solution for Signed Digit (SD), a type of redundant number representation, was published by Jaberipur et al. in 2006 [15]. However, without loss of generality, we focus here on a method proposed by us to perform carry-free addition using STNR. Consider two numbers X, Y whose digits X_i, Y_i ∈ {0, 1, 2}, as depicted in Fig. 1. They can be added in a carry-free manner in three stages. In each stage, we calculate a sum digit Z_i and a transfer digit T_{i+1} according to the truth tables depicted for the corresponding stage. The transfer digit is denoted T_{i+1} because, for a particular digit position i, the transfer digit is similar to the 'carry' propagated to the next digit. But unlike the carry propagated in conventional adders, the transfer digit T_{i+1} can be computed without waiting for the lower digits, i.e., for 8-digit addition, T_5 depends solely on X_4, Y_4 and NOT on X_{3,2,1,0}, Y_{3,2,1,0}. This attribute of T_{i+1} implies that, for n-digit addition, all n T_{i+1} values can be computed in parallel. Z1 and T1 are calculated in parallel from X, Y in stage 1 following the rule Z1_i = (X_i + Y_i) mod 2, T1_{i+1} = (X_i + Y_i) div 2. In stage 2, Z2 and T2 are calculated in parallel from Z1, T1 following the same rule, Z2_i = (Z1_i + T1_i) mod 2, T2_{i+1} = (Z1_i + T1_i) div 2. The sum S is computed by adding Z2 and T2 in stage 3; at this point each digit sum is at most 2, so no further transfer digit is generated and the result digits remain in {0, 1, 2}.

III. CARRY-FREE ADDITION IN MEMORY

In this section, we give an overview of the steps to implement carry-free addition in a memory array. The architecture of the proposed in-memory computing system is depicted in Fig. 2. To perform addition, the ternary data stored in the memory array is transferred to the 'processing array', which performs addition by processing the data in binary format. As depicted in Fig. 2, addition is performed as a sequence of READ and WRITE operations. Reading involves sensing the stored resistance state, and, in ReRAM technology, sensing ternary data (or differentiating between more than 2 states) is a challenge due to the low resistance margin and random variations in the resistive states [16]. Therefore, the ternary data is converted to binary data and processed, and the final sum is written again to the memory array as ternary data. Both the memory array and the processing array are 1T-1R array configurations. We preferred a 1T-1R configuration over a 1S-1R since ternary data (three states) can be stored reliably by varying the gate voltage of the transistor during the SET process [17]. Moreover, due to the absence of sneak paths, both writing and reading can be performed energy-efficiently in a 1T-1R array [6]. As depicted in Fig. 2, two ternary numbers can be added in 11 steps. We shall illustrate the 11 steps by considering two ternary numbers, X = (2120100) and Y = (1222110), and elaborate each step (Fig. 3).
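Before walking through the in-memory flow, the three-stage rule can be modeled in a few lines of software. The following C sketch is our own illustration, not part of the paper's hardware; it applies the stage rule Z_i = (X_i + Y_i) mod 2, T_{i+1} = (X_i + Y_i) div 2 stated above and reproduces the worked example used in this section.

#include <stdio.h>

/* One carry-free stage: split a digit sum s into a sum digit z (0 or 1)
 * and a transfer digit t (0, 1, or 2) for the next position:
 * z = s mod 2, t = s div 2. */
static void stage(int s, int *z, int *t) {
    *z = s & 1;
    *t = s >> 1;
}

/* Adds two STNR numbers (digits in {0,1,2}, radix 2, x[0] = LSB) into
 * n+2 result digits in {0,1,2}. Within each stage all positions are
 * independent, so in hardware they run in parallel (O(1) depth).
 * Fixed small buffers keep the sketch simple (n <= 14 here). */
static void carry_free_add(const int *x, const int *y, int n, int *sum) {
    int z1[16] = {0}, t1[16] = {0}, z2[16] = {0}, t2[16] = {0};
    for (int i = 0; i < n; i++)        /* stage 1 */
        stage(x[i] + y[i], &z1[i], &t1[i + 1]);
    for (int i = 0; i <= n; i++)       /* stage 2 */
        stage(z1[i] + t1[i], &z2[i], &t2[i + 1]);
    for (int i = 0; i <= n + 1; i++)   /* stage 3: sums <= 2, no transfer */
        sum[i] = z2[i] + t2[i];
}

int main(void) {
    /* The paper's example: X = (2120100), Y = (1222110), LSB first. */
    int x[7] = {0, 0, 1, 0, 2, 1, 2};
    int y[7] = {0, 1, 1, 2, 2, 2, 1};
    int s[9];
    carry_free_add(x, y, 7, s);
    for (int i = 8; i >= 0; i--) printf("%d", s[i]); /* prints 021111010 */
    printf("\n");
    return 0;
}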
We have deliberately chosen X and Y such that all six combinations of trits which occur during addition ((X_i, Y_i) = (0,0), (0,1), (1,1), (0,2), (1,2), (2,2)) are taken into account. In the memory array, the numbers to be added are present as ternary numbers, i.e., X_i, Y_i ∈ {0, 1, 2}, and in a Resistive RAM, ternary data is encoded as resistance. Trits 0, 1, and 2 are stored as three distinct resistances. First, the ternary data is read out of the memory array by a ternary sense amplifier which can distinguish between the three states. The resistances corresponding to trits 0, 1, and 2 are converted to bits (MSB, LSB) = (0,0), (0,1), and (1,1), respectively (Step 1). Then, X and Y are written to the processing array in Steps 2 and 3. Since each digit of (X, Y) is two bits, the MSB of X is written first and the LSB of X is written in the row below. In this manner, the numbers to be added, X and Y, are available in binary format (rows 2-5 of Fig. 3). To calculate the sum digit Z_i and transfer T_{i+1}, we employ a READ & COUNT Circuit (RCC). The RCC is basically an array of 3-bit counters, one for each column of the processing array. When a row is activated, the RCC reads the bit in that row and, if it is '1', it produces a negative pulse (if '0', no pulse is produced). If rows 2-5 are read consecutively in 4 cycles, C2 C1 C0 will hold the number of '1's present in the column (see Fig. 3). A special case occurs when both inputs are '2' (refer to the truth table for stage 1). In this case, T_{i+1} must be '2' and Z_i must be '0'. Since we want to process purely in binary, T_{i+1} must be '11'. To accomplish this, the 3-bit counter is designed to count 000 → 001 → 010 → 011 → 110. Consequently, after cycle 9, T1 will be available at (C2 C1) and Z1 will be available at C0. In cycle 10, Z1 is written to the processing array, followed by T1 in cycles 11 and 12. Note that T1 is not written exactly below Z1, but left-shifted by one position, because T1 is the transfer digit. Again, rows 3-5 are read consecutively in 3 cycles to produce Z2 and T2 at (C1 C0) of the RCC. Z2 and T2 are written into the array in cycles 16 and 17, followed by read and count at rows 4 and 5. At the end of 19 cycles, the sum S will be available at (C1 C0). Writing S to the ternary array requires 3 cycles because each trit is stored as a distinct resistance. Hence, to write (021111010) to the ternary array, the 0 digits are written in cycle 20, the 1 digits in cycle 21, and the 2 digit in cycle 22.

IV. CIRCUIT-LEVEL IMPLEMENTATION

ReRAM is an emerging technology, and devices with diverse properties are being reported, i.e., the LRS, HRS, resistance window, and threshold voltage at which the device switches vary from device to device. To have a realistic investigation, we considered the Resistive RAM devices manufactured at IHP. The 1T-1R cell is constituted by an NMOS transistor manufactured in IHP's 130 nm CMOS technology, whose drain is connected in series to the RRAM (TiN/Hf(1-x)Al(x)O(y)/Ti/TiN). The cells have a mean HRS of 133.3 kΩ. During the SET process (HRS → LRS transition), the device can be programmed to an LRS of 11 kΩ or 7.5 kΩ with a gate voltage of 1.2 V or 1.6 V, respectively. Hence, the three resistance states are 133 kΩ, 11 kΩ, and 7.5 kΩ, and they are used for representing the trits 0, 1, and 2, respectively. Fig. 4 shows our solution to read a trit from the 1T-1R array. As stated, the resistances 133 kΩ, 11 kΩ, and 7.5 kΩ have to be read as (MSB, LSB) = (0,0), (0,1), and (1,1), respectively.
Differentiating between 11 kΩ and 7.5 kΩ needs a robust Sense Amplifier (SA). Furthermore, ReRAMs are prone to spatial (device-to-device) and temporal (cycle-to-cycle) variations [16], [18], [19]. Hence, we considered a SA proposed for STT-MRAM, a technology which has a narrow margin between its states [18]. The SA proposed in [18] is used to differentiate between two states (binary sensing), and we augmented it with extra circuitry to differentiate between three states (Fig. 4).

A. Ternary Sense Amplifier

By activating the wordline WL, a certain cell is selected. I_READ is injected into the selected MLC cell and transforms the resistance stored inside into an equivalent voltage V_BL. The signal V_BL and the enable signal EN are the inputs to the ternary sense amplifier. As depicted in Fig. 4, the time-based sensing consists of a voltage-to-time converter followed by a time-domain comparator (D flip-flop). Voltage-to-time conversion is achieved by a current-starved inverter (consisting of the transistors M1-M5) which converts the bitline voltage V_BL into a proportional time delay. Therefore, the EN signal reaches the input of the D flip-flops (I_FF) at different times, according to V_BL. Transistor M6 followed by the buffer is used to shape the EN signal so that it has a steep slope when it reaches I_FF. Since the edge-triggered flip-flops have EN_delay1 and EN_delay2 as their clocks, the outputs MSB and LSB of the sense amplifier can sense the ternary state as a binary vector (MSB, LSB). In time-based sensing [18], V_BL controls the arrival of the EN signal at I_FF, i.e., a higher resistance results in a larger value of V_BL, and I_FF goes low earlier. In this manner, the EN signal is delayed (in proportion to V_BL) and made available at I_FF. I_FF is fed to two flip-flops, which produce the output signals MSB and LSB. Note that the flip-flops are edge-triggered by different clock signals, EN_delay1 and EN_delay2 (EN_delay1 and EN_delay2 are set to different time steps t_delay1 and t_delay1 + t_delay2 by inverter delay chains). As depicted in Fig. 4, for 133 kΩ, I_FF is low at the rising edges of EN_delay1 and EN_delay2 and is consequently sensed as (0,0). But for 11 kΩ, I_FF is high at the rising edge of EN_delay1 and low at the rising edge of EN_delay2, and is hence sensed as (0,1). Thus, according to the stored resistance values 133 kΩ, 11 kΩ, and 7.5 kΩ, the SA produces the output vectors (0,0) for trit 0, (0,1) for trit 1, and (1,1) for trit 2. The ternary SA was designed in IHP's 130 nm CMOS technology, and correct sensing of the three states was verified by simulation.

B. WRITE Circuit

The WRITE circuit is the part of the peripheral circuitry responsible for writing data to the memory and processing arrays. To minimize latency, the WRITE circuit must have the capability to program multiple cells in parallel. An operational amplifier is used to accomplish this, as depicted in Fig. 5. The operational amplifier acts as a voltage regulator and is also able to deliver the required current to program eight 1T-1R cells simultaneously [5]. Writing binary data is straightforward in ReRAM technology: the cell is programmed to LRS (the filament is formed between the electrodes) by applying a positive voltage to the BL while SL is grounded. The cell is programmed to HRS (the filament is ruptured) by applying a positive voltage to the SL while BL is grounded. A voltage of opposite polarity is needed across the RRAM cell to break the filament (see Fig. 5). For writing ternary data, we need one more state.
This can be accomplished in a 1T-1R configuration by varying the gate voltage of the transistor, which in turn changes the compliance current during the SET process (HRS → LRS). As depicted in Fig. 5, a voltage of 1.2 V at the gate programs the cell to an LRS of 11 kΩ. A higher gate voltage of 1.6 V during the SET process programs the cell to an LRS of 7.5 kΩ. A higher gate voltage results in a higher compliance current and consequently a thicker filament [17]. A thicker filament between the top and bottom electrodes forms a wide conductive path, thus lowering the resistance. To verify the WRITE circuit, the 1T-1R cell was modeled by fitting the Stanford-PKU model to the characteristics of IHP's RRAM [17]. A V_WRITE of 1.2 V was used, and simultaneous writing of eight 1T-1R cells was verified by simulation.

C. READ and COUNT Circuit

The READ and COUNT circuit is the crucial part of the proposed methodology to perform carry-free addition in the memory array. As explained in Section III, the sum digit Z_i and transfer digit T_{i+1} are computed by reading out and counting the number of '1's. As explained in the previous section, '0' is stored as 133 kΩ and '1' is stored as 11 kΩ in the processing array. Since the number of '1's in a column has to be counted, a resistance of 11 kΩ has to be differentiated from 133 kΩ and then counted. In essence, we need a sense amplifier followed by a counter. As depicted in Fig. 6, the Schmitt-Trigger (ST) circuit functions as the sense amplifier. To perform a READ and COUNT operation in a column, I_READ is injected into the 1T-1R cell as in a memory READ operation. The resistance of the ReRAM cell is transformed into an appropriate voltage (V_BL ≈ I_READ × R_1T-1R) and fed to the six-transistor ST circuit, which produces a negative pulse if V_BL is below a certain threshold voltage. This negative pulse is fed to a three-bit counter which outputs C2 C1 C0 (the flip-flops Q2, Q1, Q0 in Fig. 6 are negative edge-triggered). The RCC was designed in IHP's 130 nm CMOS technology, and I_READ = 5 μA was used during RCC operation. The ST circuit has an upper threshold voltage (V_TH) of 0.7 V and a lower threshold voltage (V_TL) of 0.5 V [20]. It must be noted that four consecutive rows must be read and counted to compute Z1, T1 (three rows for Z2, T2, and two rows for the sum S). When a '0' is read (133 kΩ), V_BL is 0.6 V, which is above V_TL; hence the output of the ST is held high at V_DD. When a '1' is read (11 kΩ), V_BL is 0.075 V, which is below V_TL; hence the ST output goes low. At the end of the READ operation in the first row, the corresponding WL is deactivated and V_BL goes high (the access transistor of the cell is switched OFF). Consequently, the ST output goes high again, producing a negative pulse. In this manner, the '1's in a column are converted into negative pulses and counted by the three-bit counter.

V. COMPARISON WITH BINARY IN-MEMORY ADDERS

To the best of our knowledge, this is the first work to propose an in-memory carry-free addition methodology. In this section, we compare our work with other in-memory adders which use different logic primitives and adder architectures, like parallel-prefix configurations, to minimize the latency of addition. Table I compares the latency and area/peripheral requirements of our carry-free adder with the best-performing binary adders (other binary adders with O(n) latency do exist, but they are not compared here since their latency is ≈ 200 cycles for 32-bit addition; see [5]). As plotted in Fig. 7, carry-free addition outperforms the best binary adders for 32 bits and more.
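As a quick sanity check on the crossover point just mentioned, the cycle counts from Table I can be evaluated directly. The following C snippet is our own illustration; it assumes log2(n) is exact for the power-of-two widths swept here.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Latency formulas from Table I:
     * OR/AND parallel-prefix:      8*log2(n) + 13 cycles
     * Majority+NOT parallel-prefix: 4*log2(n) + 6 cycles
     * proposed carry-free adder:    22 cycles, independent of n  */
    for (int n = 8; n <= 128; n *= 2) {
        int lg = (int)(log2((double)n) + 0.5);
        printf("n=%3d: OR/AND=%3d  MAJ+NOT=%2d  carry-free=22\n",
               n, 8 * lg + 13, 4 * lg + 6);
    }
    return 0;
}
/* At n=32, Majority+NOT needs 4*5+6 = 26 cycles versus a constant 22,
 * so the carry-free adder wins for n >= 32, matching Fig. 7. */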
The Majority+NOT based parallel-prefix adder [5] is the only binary adder which competes well with the speed of carry-free addition (26 cycles for 32-bit addition) while not requiring huge peripheral modifications. For 7-trit addition (7 trits are equivalent to 8 bits), we need a 9×9 processing area, 9 RCC circuits, and a WRITE circuit, as illustrated in Fig. 6(b). In Fig. 6(b), Z2, T2 could have been overwritten on Z1, T1, requiring less area, but this was not pursued due to the limited endurance of ReRAM devices, i.e., each cell in the 9×9 area is switched once during 7-trit addition. We are not able to compare the energy of the adders since the energy depends on the switching energy of the ReRAM cell, which varies from device to device. The energy of our adder will consist of the energy to READ and WRITE, which can be performed energy-efficiently in a 1T-1R configuration due to the absence of sneak currents [2]. WRITE latency (≈ 50-100 ns) is greater than READ latency (≈ 20 ns) for our adder, and this is true of all ReRAM-based adders, since this asymmetry is typical of ReRAM technology.

TABLE I. Latency and area/peripheral requirements of in-memory adders:
Primitive | Architecture | Latency (cycles) | Area / peripheral requirement
OR/AND [9] | Parallel-prefix | 8·log2(n) + 13 | (5 + log2(n))×n of main array, with row/column decoders modified to be able to apply inputs
Majority+NOT [5] | Parallel-prefix | 4·log2(n) + 6 | 6×(8n + 16)** of main array, with minor modification to the row-decoder
Count (this work) | Carry-free | 22 | One 9×(n + 1) processing array with (n + 1) READ and COUNT Circuits (RCC) and a WRITE circuit
* RIMP/NIMP stands for reverse implication and inverse implication in a Complementary Resistive Switch (CRS) based adder.
** This large array area requirement is because the authors considered the area of the Sense Amplifier (8 columns share an SA, pitch-matching), which other works have not considered.

VI. CONCLUSION

In this work, we have proposed, for the first time, a method to implement carry-free addition in the memory array. The carry-free addition was accomplished using STNR, as opposed to conventional adders, which pursue a binary representation. Carry-free addition could accelerate in-memory computation since it requires O(1) latency. The proposed carry-free addition can be reliably performed in a 1T-1R array, and this was verified by simulation. The proposed technique incurs a latency of 22 memory cycles, which outperforms all binary in-memory adders for n ≥ 32. The price one has to pay is increased peripheral hardware, since n-bit addition requires a 9×(n + 1) processing array with its peripheral circuitry (a WRITE circuit and n + 1 RCCs). The proposed adder will be
Enabling The Feed-Forward Design Model in OpenCL Using Pipes

Over the past few years, there has been an increased interest in using FPGAs alongside CPUs and GPUs in high-performance computing systems and data centers. This trend has led to a push toward the use of high-level programming models and libraries, such as OpenCL, both to lower the barriers to the adoption of FPGAs by programmers unfamiliar with hardware description languages (HDLs), and to allow a single code to be deployed seamlessly on different devices. Today, both Intel and Xilinx (now part of AMD) offer toolchains to compile OpenCL code onto FPGA. However, using OpenCL on FPGAs is complicated by performance portability issues, since different devices have fundamental differences in architecture and in the nature of the hardware parallelism they offer. Hence, platform-specific optimizations are crucial to achieving good performance across devices. In this paper, we propose using the feed-forward design model based on pipes in order to improve the performance of OpenCL codes running on FPGA. We show the code transformations required to apply this method to existing OpenCL kernels, and we discuss the restrictions to its applicability. Using popular benchmark suites and microbenchmarks, we show that the feed-forward design model can result in higher utilization of the available global memory bandwidth and increased instruction concurrency, thus improving the overall throughput of the OpenCL implementations at a modest resource utilization cost. Further concurrency can be achieved by using multiple producers and multiple consumers.

1 INTRODUCTION

Over the past several years, there has been an increasing trend toward using heterogeneous hardware in single machines and large-scale computing clusters. This trend has been driven by demands for high performance and energy efficiency. Initially, heterogeneity mostly involved the use of GPUs and Intel many-core processors alongside multi-core CPUs [6]. More recently, due to their compute capabilities and energy efficiency, the trend has evolved to include Field Programmable Gate Arrays (FPGAs) [18] in high-performance computing clusters and data centers. Today, Microsoft Azure and Amazon Web Services include FPGA devices in their compute instances [3], [1]. Hardware heterogeneity involves significant programmability challenges. Without a unified programming interface, not only are users required to become familiar with multiple programming frameworks, but they also need to understand how to optimize their code for various hardware architectures. To address this challenge, the Khronos group has introduced a unified programming standard called OpenCL, which is intended for accelerated programming across different architectures [4]. This programming model initially targeted CPUs and GPUs. At the same time, programming FPGAs using low-level hardware description languages (HDLs) has traditionally been considered a specialized skill.
To facilitate the adoption of FPGAs, vendors have spent substantial resources on the design and development of OpenCL-to-FPGA toolchains, including runtime libraries and compilers allowing the deployment of OpenCL code on FPGA. Intel and Xilinx, two major FPGA vendors, now provide their own OpenCL-to-FPGA development toolchains and runtime systems [5], [10]. Although OpenCL increases portability and productivity, there is often a significant performance gap between an OpenCL and a hand-optimized HDL version of the same application [14]. Bridging this performance gap while limiting the development effort requires exploring existing OpenCL-to-FPGA optimizations and designing new ones. Several papers have aimed to improve the efficiency of existing OpenCL code (often tailored to GPUs) on FPGA through platform-agnostic and platform-specific compiler optimizations and scheduling techniques [15], [11], [9], [20]. Performance portability is one of the major issues when using OpenCL-to-FPGA SDKs, especially for applications originally encoded for a different device (e.g., a GPU). It has been shown that OpenCL code tailored to one platform often performs poorly on a different platform [21]. The origin of the performance portability issues between GPUs and FPGAs lies in the different architectural characteristics of these two platforms. Specifically, there are three fundamental factors that affect the performance portability between GPUs and FPGAs. First, there is the form of parallelism that these devices offer: FPGAs leverage deep pipelines to exploit parallelism across OpenCL work-items, while GPUs rely on concurrent, SIMD execution of threads (or work-items). Second, the off-chip memory bandwidth of current FPGA boards is much lower than that offered by high-end GPUs, which results in inefficient memory operations and lower overall application performance. Third, while GPUs provide relatively efficient support for synchronization primitives like barriers and atomic operations, barriers on FPGAs result in a full pipeline flush, leading to significant performance degradation. In this work, we explore and evaluate the use of the feed-forward design model to improve the performance of OpenCL code on FPGA. The proposed model splits each kernel into two kernels, a memory kernel and a compute kernel, connected through pipes. At a high level, the model aims to increase the memory bandwidth utilization, reduce the memory units' congestion, and maximize the instruction concurrency within the application. We show that the feed-forward design model allows the offline compiler to generate designs with more efficient memory units and increased instruction parallelism, leading to better performance with a low resource utilization overhead. A simplified version of this scheme has been explored in [22] on simple micro-kernels, in most cases leading to performance degradation over the original single work-item version of the code. In this work we show that, when generalized and applied to more complex and less regular kernels, this technique can achieve up to a 65× speedup over the single work-item version of the code, and an average 20× speedup across a set of diverse applications from popular benchmark suites [8], [7]. Our exploration is structured as follows. First, based on recommendations from Intel's OpenCL-to-FPGA documentation [2], we convert SIMD-friendly code into serial code (i.e., a single work-item kernel).
Second, using the feed-forward design model, we split each kernel into two kernels (memory and compute kernels), thus separating global memory reads from the rest of the instructions inside the kernel. In order to minimize the data communication latency, we connect these two kernels through pipes. Lastly, we explore increasing the concurrency by having multiple versions of the memory and compute kernels working on different portions of the data. In our experiments, we first compare the performance and resource utilization of the original kernels and the versions using the feed-forward design model. Then, we analyze the impact of the feed-forward model on other best-practice optimizations. In summary, this work makes the following contributions:
• Showing how the feed-forward programming model can improve the performance of OpenCL codes on FPGA by addressing one fundamental performance bottleneck for single work-item kernels, namely, memory bandwidth utilization;
• Proposing a systematic method to use the feed-forward design model in OpenCL kernels;
• Identifying limitations of the explored design model;
• Exploring optimizations enabled by the feed-forward design model.
The rest of the paper is organized as follows. In Section 2, we provide background information on OpenCL for FPGA. In Section 3, we discuss the feed-forward design model, we show how it can be applied to existing OpenCL code, and we discuss its applicability and limitations. In Section 4, we present an experimental evaluation covering micro-benchmarks and applications from popular, open-source benchmark suites [8], [7]. In Section 5, we discuss prior work in this area. In Section 6, we conclude our discussion.

2 BACKGROUND

2.1 OpenCL for FPGA

OpenCL allows programmers to write platform-agnostic programs and deploy them on a wide range of OpenCL-compatible devices. An OpenCL application consists of two types of code: host code and device code. The host code is responsible for data allocation on the host machine and accelerators (devices), communication setup and data transfer between host and devices, configuration of the accelerators, and launching the device code on them. The device code contains the core compute kernels, is written to execute on one or multiple platforms, and is often parallelized. In OpenCL terminology, a kernel consists of multiple work-items evenly grouped in work-groups. When deployed on a GPU, work-items correspond to threads, and work-groups to thread-blocks. OpenCL kernels for FPGA can be in two forms: NDRange or single work-item. NDRange kernels consist of multiple work-items, distinguishable through their local and global identifiers, launched by the host code for parallel execution. This model is widely used for programming CPUs and GPUs; on FPGAs, concurrent execution of work-items is enabled through pipeline parallelism. Single work-item kernels have a serial structure, with only one work-item launched by the host code. The single work-item model is preferred when the NDRange version of the kernel presents fine-grained data sharing among work-items. Single work-item kernels are often recommended by FPGA vendors [10], partially because writing the same kernels in NDRange fashion might require expensive atomic operations or synchronization mechanisms to ensure correctness. The major FPGA vendors, such as Intel and AMD/Xilinx, now provide OpenCL-to-FPGA SDKs to facilitate FPGA adoption by a wide range of programmers with different skills.
However, the automatic generation of efficient FPGA code often incurs performance portability issues, especially when the OpenCL code was originally optimized for a different device, such as a GPU. To bridge the performance gap between FPGAs and other devices, it is critical to understand the performance-limiting factors on FPGA and to design FPGA-specific optimizations.

2.2 OpenCL Memory Model

In order to better understand how different compiler or scheduling techniques and optimizations can improve the performance of an OpenCL kernel on FPGAs, it is crucial to have a comprehensive knowledge of the OpenCL memory model and how it maps onto the system. The memory model defines the hierarchy of memory regions used by OpenCL applications and consists of five regions; Figure 1 shows the hierarchy between these five regions. First is the host machine's global memory, which is directly accessible by the host processor and is used to store the data that will later be transferred to or read from the device(s). The second is the device's global memory. This region is accessible to both the host processor and all the work-groups and work-items of the kernel, and it coordinates the data transfer between host and device memory. The third region is the constant global memory, the section of the system memory to which the host code has read and write access, while the kernels only have read access. The device's global and constant memories are usually memory chips connected to the FPGA device. However, in some cases they might also include distributed memories within the FPGA fabric [21], [5]. The last two regions are the high-throughput, low-latency memory regions known as the local and private memory regions. The former is shared and accessible among work-items inside a single work-group. The latter is only visible to each work-item executing within an OpenCL processing element. These two memory regions are often implemented using BlockRAMs or registers in the FPGA fabric. Many previous works, such as [16], [23], aimed to reduce the performance limitations resulting from the external memory bandwidth of FPGA boards. In order to better understand the effect of compiler optimizations and scheduling techniques on memory operations, it is essential to know how OpenCL-to-FPGA compilers implement memory operations using load/store units (LSUs). For the rest of the paper, we refer to Intel's OpenCL-to-FPGA SDK as the offline compiler. The offline compiler can instantiate one of several types of LSUs depending on the inferred memory access pattern of the memory operations. Whether the accesses are to/from global or local memory, and which types of LSUs are available on the target FPGA platform, are parts of the information that the offline compiler uses to choose the most efficient available LSU type. For non-atomic memory operations on the global memory region, the offline compiler often instantiates one of multiple LSU types. The first LSU type is the burst-coalesced LSU, which the offline compiler often uses as the default. This type of LSU is the most resource-hungry memory module, designed to buffer memory requests until the largest possible burst of data read/write requests can be sent to the global memory. The second, known as the prefetching LSU, leverages a FIFO to read large blocks of data from global memory and tries to keep the buffer full of valid data.
This type best fits memory operations with a sequential memory access pattern. Further, for memory operations on the local region, the offline compiler instantiates a pipelined LSU, which submits memory accesses in a pipelined manner as soon as they are received. The offline compiler can also use the pipelined LSU as an alternative for global memory accesses, resulting in slower but more resource-efficient memory units.

3 IMPLEMENTING THE FEED-FORWARD DESIGN MODEL

As we mentioned in the background section, different LSU modules can result in different hardware implementations with different resource utilization and throughput. Moreover, memory operations on global memory are known to be one of the main throughput bottlenecks for kernels implemented on FPGAs. While the offline compiler sometimes allows the programmer to control the type of the LSU, programmers often lack the underlying hardware and implementation knowledge needed to leverage these customizations. Moreover, the offline compiler takes a relatively conservative approach regarding loop carried dependencies for global memory operations, resulting in sub-optimal global memory bandwidth utilization. Programmers can verify this by checking the early-stage analysis report file generated by the offline compiler. Hence, there are two primary outcomes of the offline compiler's memory analysis that significantly affect the kernel's final performance. The first is the type and setting of each LSU assigned to one or multiple global memory instruction(s). The second is the presence of loop carried dependencies through global memory. In their presence, the offline compiler serializes the execution of loops with loop carried dependencies, resulting in a high initial interval (II) and low throughput for that specific loop. The initial interval is the number of clock cycles between the launches of successive loop iterations, calculated for each loop in the design. This implies that the better the offline compiler understands the memory access patterns and behavior of global memory instructions inside the kernel, the more optimized the final implementation it generates will be. Wang et al. [17] measured the memory bandwidth for sequential and random memory accesses for different variable types and concluded that kernels containing random memory accesses suffer drastically from low memory bandwidth. Also, many of the OpenCL-for-FPGA baseline designs they used in their work had sub-optimal throughput due to severe lock overhead or memory bandwidth overhead. Through this work, we realized that programmers can help the offline compiler better understand the different characteristics of memory operations and dependencies by isolating the memory operations. For instance, using the feed-forward design model indicates to the compiler that there are no loop carried dependencies between load and store instructions from/to global memory pointers. Providing this information to the offline compiler can significantly improve performance in different applications. After converting a design ported to FPGA from the NDRange programming model to the single work-item programming model, using this technique can result in synchronization-free kernels with high memory bandwidth utilization. In this design model, the resulting kernel responsible for loading values from global memory (the memory kernel) should connect to the second kernel (the compute kernel) through a hardware mechanism that does not involve using global memory.
This restriction results in a second kernel that is free of global memory load instructions. Therefore, in this design, it is not possible to use local memory objects for this data sharing, because the two are separate kernels. However, the OpenCL standard provides a memory object that is an ordered sequence of data items, called a pipe. Pipes are the primary mechanism for passing data between kernels, enabling concurrent execution of multiple kernels using pipeline parallelism across kernels. Each pipe has a write and a read endpoint, which allows a single OpenCL kernel to write to one endpoint of the pipe and another kernel to read from the read endpoint of the pipe. It is worth mentioning that, in the OpenCL programming model, host and device(s) can also communicate dynamically through pipes, but this is not the focus of this paper. Following the OpenCL specifications, Intel provides an OpenCL extension called channels and introduces it as a mechanism for data communication between kernels which can also help kernels synchronize efficiently [2]. This extension allows concurrently running kernels to communicate without involving the host processor or the device's global memory. Programmers can define the depth of a channel as an input attribute. At the same time, the offline compiler considers this input to be the minimum depth of that specific channel. The offline compiler may increase the depth if there is a need to balance the reconverging paths through multiple kernels or to achieve a lower initial interval for the loops inside the kernel [2]. In an OpenCL kernel written for Intel FPGAs, channels can be used in two formats: blocking and non-blocking. Blocking channel operations (read/write) stall the kernel until the operation returns successfully. In contrast, non-blocking channel operations return immediately, reporting the success or failure of the operation through an additional flag. Intel recommends FPGA programmers use the single work-item programming model to design their OpenCL kernels. This programming model can maximize the throughput in applications that require fine-grained data sharing among parallel work-items. Moreover, Intel suggests using single work-item kernels over NDRange kernels when designing an application using channels/pipes while creating the feed-forward data path between the two kernels connected using pipes. Using channels for concurrent kernel execution can improve the efficiency of the design. In this case, the host code launches the kernels concurrently, and the kernels communicate through channels where applicable. Programmers can exploit this improved efficiency using the feed-forward design model with a producer-consumer approach, in which one or multiple producer kernels send data to one or multiple consumer kernels. In the feed-forward design model, a manager module and a synchronization mechanism are required to ensure that the design's functionality is maintained while pursuing performance improvements. In cases where synchronization is needed, programmers can use blocking channels to implement a synchronization mechanism among producer, consumer, and manager kernels. Our approach in this work follows the same feed-forward design. We use the host code and hard-coded sections in the producer and consumer kernels to address the need for a manager kernel between the producer and consumer kernels. This approach simplifies the design and reduces the number of busy waits between kernels. A minimal sketch of this producer/consumer structure is shown below.
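The following OpenCL C sketch illustrates the split using the Intel channels extension. It is our own minimal illustration, not the paper's Figure 2; the kernel names, the channel depth, and the stand-in computation are all assumptions.

// Enable the Intel channels extension (Intel FPGA SDK for OpenCL).
#pragma OPENCL EXTENSION cl_intel_channels : enable

// A single-slot channel between the producer and the consumer.
channel float data_ch __attribute__((depth(1)));

// Producer ("memory" kernel): performs all global-memory reads and
// forwards each value over the channel.
__kernel void mem_kernel(__global const float *restrict in, int n) {
    for (int i = 0; i < n; i++)
        write_channel_intel(data_ch, in[i]);   // blocking write
}

// Consumer ("compute" kernel): free of global-memory loads; it reads
// operands from the channel, computes, and writes results back.
__kernel void compute_kernel(__global float *restrict out, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++) {
        float v = read_channel_intel(data_ch); // blocking read
        acc += v * v;                          // stand-in computation
        out[i] = acc;
    }
}

On the host side, the two kernels are enqueued on separate command queues so that they execute concurrently, with the blocking channel operations providing the only synchronization between them.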
For the rest of the paper, we call the producer the memory kernel and the consumer the compute kernel. Channels have been used in the past mostly to connect kernels in order to explore the intra-kernel optimization space [12] or to explore data partitioning and memory bandwidth limitations on simple hand-written kernels [22], [17]. However, the performance advantage is usually limited in these kernels due to global memory bandwidth limitations. This work focuses on transforming kernels from various programming model domains, with or without control-flow divergence, into a producer/consumer based implementation using channels. This transformation involves managing global read instructions in a separate kernel and reducing the unnecessary control-flow divergence due to the kernel's behavior. While focusing on this design model, it is crucial to understand the limitations of our suggested design. To make these limitations easier to understand, we use the single work-item version of the kernels as the baseline to check whether this approach is feasible to implement or not. Assuming the kernel is in NDRange form, like the baseline implementations of the benchmarks in [7] and [8], programmers can construct the single work-item version by embedding the body of the NDRange baseline kernel within a nested loop. The outer and inner loops must have the work-group and work-item sizes as their loop iteration counts, respectively. Analyzing the single work-item kernel, if the programmers identify true loop carried dependencies that involve global memory operations inside the kernel, a simple feed-forward design model cannot be applied. This shortcoming is due to the lack of a well-defined technique to introduce global intra-kernel synchronizations between concurrent kernels in OpenCL-to-FPGA toolchains. We will elaborate on loop carried dependencies later in this section. With this information regarding efficient OpenCL kernel design in hand, Figure 2 shows an example of how programmers can leverage the channels extension (OpenCL pipes) to convert a complex single work-item baseline kernel into the feed-forward design model implementation. In what follows, we explain this process step by step and match each step with the provided example in Figure 2 (the example in Figure 2 uses notation from the Intel OpenCL-to-FPGA SDK).
(1) Remodeling the kernel to a single work-item programming model if the baseline is in NDRange format.
(2) Identifying instructions that read from a global memory pointer (lines 2, 7, 12, and 13 of the baseline in Figure 2a).
The remaining steps move these reads into the memory kernel, replace them in the compute kernel with channel reads, and conclude with the enqueue of all memory and compute kernels on separate queues.
Limitations - Recall that, as we mentioned earlier, if converting the NDRange kernel into a single work-item programming model results in loops with true loop carried dependencies through global memory, the kernel is not suitable for this design model. Using this design model for kernels with this characteristic will result in inaccurate outputs generated by the application, due to the concurrency of dependent load and store instructions. Our proposed technique can improve the kernels' performance by optimizing the following characteristics of the kernel.
Loop carried dependencies - In order to better understand the impact of loop carried dependencies (LCD) on the performance of different kernels, it is crucial to understand the different types of LCD and how the offline compiler deals with them.
Typically, an LCD can take the form of a memory loop carried dependency (MLCD) or a data loop carried dependency (DLCD). An MLCD applies when a single value or vector of data stored in the global memory in one iteration of a for loop inside the kernel is requested again by a later iteration of the same loop. This data dependency implies that the device needs to schedule the read and write operations serially to ensure the correctness of the results generated by the kernel. Figure 3(a) shows an example of an MLCD in which the operation in line 4 has a global memory loop carried dependency on the operation in line 2. The offline compiler will serialize such a loop to ensure that correctness is guaranteed. While converting NDRange kernels to single work-item kernels, the offline compiler's report files indicate that the compiler takes a more conservative approach and assumes an MLCD wherever it cannot determine whether one exists. We realized that this results in significantly higher execution times for different benchmarks throughout our experiments. For each MLCD that the offline compiler infers, it serializes the whole loop to ensure the correctness of the kernel results, which drastically increases the initial interval (II) of the loop and results in the performance degradation mentioned earlier.

Figure 3: Loop carried dependencies.

While we propose this technique to help programmers write more efficient and optimized codes for FPGAs, we believe that programmers should have such knowledge regarding cases of MLCD inside their application. Programmers must only use this design model when they can guarantee that there is no true MLCD involved in the algorithm or application they are implementing. This design model assures the offline compiler that there is no true MLCD in the application, and the offline compiler can then generate an implementation with substantial performance benefits compared to the baseline code. For instance, when applying the feed-forward design model to the Maximal Independent Set (MIS) application from Pannotia, removing the false MLCDs improves the maximum global memory bandwidth utilization from 208 MB/s to 2116 MB/s and yields a 6.35× speedup over the single work-item baseline. A DLCD is another form of loop carried dependency which results in the serialization of loops by the offline compiler. Figure 3(b) shows an example of a DLCD in which the operation on line 5 implies a loop carried dependency for the loop on line 3, and the offline compiler will serialize the load and arithmetic instructions in this example. This serialization results in an initial interval higher than one for the loop, which has a significant negative effect on the performance of the kernel. This serialization also results in lower memory bandwidth utilization for the load instruction in the loop, especially for load instructions with regular memory access patterns. However, after applying our technique to the baseline kernel, this DLCD is moved to the compute kernel, which is free of memory operations. This allows the offline compiler to schedule the load instructions in the memory kernel earlier, since these instructions are now in a loop with no DLCD. Figures 3(c) and 3(d) show how the DLCD in the loop in Figure 3(b) is moved only to the compute kernel, and the offline compiler can schedule the memory instructions in the memory kernel in a pipelined manner.
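Since Figure 3 is not reproduced here, the two dependency classes and the DLCD split can be sketched as follows. These are our own illustrative loop bodies, not the paper's figure; all names and the recurrences are assumptions.

// (a) MLCD: the store in one iteration feeds the load of the next one
// through global memory, so the offline compiler serializes the loop.
__kernel void mlcd_example(__global int *restrict a, int n) {
    for (int i = 1; i < n; i++) {
        int prev = a[i - 1];   // load depends on last iteration's store
        a[i] = prev + 1;       // store read back by the next iteration
    }
}

// (b) DLCD: the accumulation on acc carries across iterations, raising
// the loop's initial interval and holding back the load as well.
__kernel void dlcd_example(__global const int *restrict a,
                           __global int *restrict out, int n) {
    int acc = 0;
    for (int i = 0; i < n; i++)
        acc += a[i];           // data loop carried dependency on acc
    out[0] = acc;
}

// (c)/(d) After the split, the loads sit in a loop with no DLCD and can
// be fully pipelined; the accumulation is confined to the compute kernel.
#pragma OPENCL EXTENSION cl_intel_channels : enable
channel int acc_ch __attribute__((depth(1)));

__kernel void dlcd_mem(__global const int *restrict a, int n) {
    for (int i = 0; i < n; i++)
        write_channel_intel(acc_ch, a[i]);  // pipelined loads
}

__kernel void dlcd_compute(__global int *restrict out, int n) {
    int acc = 0;
    for (int i = 0; i < n; i++)
        acc += read_channel_intel(acc_ch);  // DLCD stays here only
    out[0] = acc;
}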
Simplified control flow for load instructions - When applying this technique, the memory kernel can have a simpler control flow graph (CFG) than the original baseline. A less complex CFG for the kernel containing the memory operations can result in fewer stalls for read instructions on global memory, hence higher memory bandwidth utilization. Consequently, the improved global memory bandwidth utilization can improve the overall performance of the implementation.
Enabling the feed-forward design model with multiple producers and consumers - The most significant advantage of our proposed technique is that it enables the feed-forward design model to increase the memory bandwidth utilization of load instructions by increasing concurrency among memory operations. In the feed-forward design model, data traverses from the producer to the consumer in one or multiple words. For each word written to a channel by the producer kernel, the consumer processes the data and frees up the memory space assigned to the channel. The producer kernel can then reuse this memory region to transfer the following word(s) of data to the consumer kernel. Having multiple producers and consumers in the design can potentially increase the maximum global memory bandwidth achieved by the design. Having multiple producer and consumer kernels requires the programmer to make various decisions regarding their number, the load balancing mechanism, and buffer management. More replications add concurrency while increasing the complexity of the design and the use of channels to implement data transfers between producers and consumers. Intel recommends limiting the number of channels used in the design, as they can add complexity and limit overall performance. Also, having a large number of kernels reading data from global memory concurrently can increase global memory congestion and result in poor global memory bandwidth utilization. In our experiments across benchmarks, we did not find significant performance improvements beyond two producers and two consumers. By limiting the use of channels and the global memory contention, this setting allows for increasing the performance of the design at a limited resource utilization overhead. Moreover, we explored using a single producer and multiple consumers during our experiments on a limited number of benchmarks. However, the results indicated that having separate producer kernels results in higher concurrency compared to the case with one producer and multiple consumer kernels. Programmers can use different load balancing mechanisms to implement their programs using one or multiple producer/consumer kernels. These mechanisms can be classified as either static or dynamic. Unlike dynamic algorithms, a static load balancing algorithm does not take into account the state of the system when making decisions regarding the distribution of tasks. Many dynamic load balancing algorithms require busy-wait or feedback mechanisms involving more than two kernels, implemented by polling on non-blocking channels. This form of busy wait can result in sub-optimal performance for designs on FPGA. We use static load balancing to connect producer and consumer kernels in this work. Static load balancing simplifies the design and avoids busy waits and non-blocking channels, which would otherwise increase the resource utilization of the design.
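A sketch of this static two-producer/two-consumer partitioning follows. It is our own illustration of the design choice described above (not the paper's code); each pair is statically assigned half of the index space, so no manager kernel or busy-wait polling is needed.

#pragma OPENCL EXTENSION cl_intel_channels : enable
channel float ch0 __attribute__((depth(1)));
channel float ch1 __attribute__((depth(1)));

__kernel void mem0(__global const float *restrict in, int n) {
    for (int i = 0; i < n / 2; i++)      // first half of the data
        write_channel_intel(ch0, in[i]);
}
__kernel void mem1(__global const float *restrict in, int n) {
    for (int i = n / 2; i < n; i++)      // second half of the data
        write_channel_intel(ch1, in[i]);
}
__kernel void comp0(__global float *restrict out, int n) {
    for (int i = 0; i < n / 2; i++)
        out[i] = 2.0f * read_channel_intel(ch0);  // stand-in computation
}
__kernel void comp1(__global float *restrict out, int n) {
    for (int i = n / 2; i < n; i++)
        out[i] = 2.0f * read_channel_intel(ch1);
}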
By following this method, programmers can further improve the execution time of the feed-forward design model by up to 93% through better global memory bandwidth utilization and concurrency among instructions (for the Hotspot application from Rodinia, raising the global memory bandwidth utilization from 7340 MB/s to 13660 MB/s). Moreover, programmers can use a single memory location or a FIFO buffer to manage data transfers between kernels. From our experiments with the feed-forward design model, we concluded that the channel's depth does not significantly affect performance in either direction; hence, we use a single memory location for data transfer between each producer and consumer kernel. We also designed a set of automatically generated microbenchmarks to explore which baseline kernel features affect the speedup obtained by the feed-forward design model. We discuss the characteristics of these microbenchmarks in Section 4.

Performance Metrics - For performance, we measure and report the speedup over the original single work-item version of each benchmark. For resource utilization, we report the increase in logic utilization and in the use of block RAMs (BRAMs) in the transformed version compared to the single work-item version. Logic utilization is an estimate of how many half ALMs (adaptive logic modules) the compiler used to fit the design, reported as a percentage of the total half ALMs on the FPGA board. ALMs are the basic building blocks of an FPGA; in simplified form, an ALM contains a lookup table (LUT) and an output register, and the compiler uses ALMs to build arbitrary Boolean logic on the board.

Benchmarks - In this work, we evaluated our design on widely used open-source benchmarks from the Rodinia [8] and Pannotia [7] benchmark suites, drawn from different domains. Table 1 summarizes the main characteristics of these applications, together with the resource utilization and execution time of the baseline codes.

Experimental Result

In what follows, we discuss, in two steps, the main characteristics of the kernels that gained the most speedup from our proposed technique. First, we discuss which benchmarks benefit from the transformation of the base kernel to the feed-forward design, and then we discuss how these kernels can benefit from the increased concurrency of multiple producers and consumers.

Feed-forward model comparison with the baseline - Table 2 shows the speedup obtained by transforming the single work-item baseline of each benchmark to the feed-forward design model. In all cases, we used the naive code without any optimizations in order to isolate the impact of this design model in the final results. Moreover, we report the best result obtained from running the same experiments with channels of three different depths: one, 100, and 1000. Recall that the offline compiler treats the specified channel depth as the minimum depth of that channel and may increase it to balance reconverging paths through multiple kernels or to achieve a lower initiation interval for the loops inside the kernel. During our experiments, we found that the depth of the channel does not significantly affect the speedup of the feed-forward design, and there is no notable trend of increase or decrease in any specific metric when using channels with higher or lower depth.
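For concreteness, the three depths we swept correspond to channel declarations of roughly the following form in the Intel SDK (channel names are illustrative, not from our benchmarks); since the attribute only sets a lower bound, the compiler remains free to deepen each FIFO.

// Depth is a minimum: the offline compiler may enlarge the FIFO to
// balance reconverging paths or to lower a loop's initiation interval.
channel float ch_d1    __attribute__((depth(1)));
channel float ch_d100  __attribute__((depth(100)));
channel float ch_d1000 __attribute__((depth(1000)));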
As shown in Table 2, among the benchmarks we explored, Breadth-First Search (BFS), Floyd-Warshall (FW), Back Propagation, Maximal Independent Set (MIS), and Needleman-Wunsch (NW) benefit drastically from converting the single work-item baseline to the feed-forward model. In all these benchmarks, the main driver of the speedup is the removal of false loop carried dependencies between operations on the device's global memory region. For Floyd-Warshall (FW), the false LCDs detected by the offline compiler result in a large initiation interval (II) of 285 for the main loop inside the kernel. In such cases, the offline compiler often uses a burst-coalesced LSU type for memory operations involving global memory, which yields low memory bandwidth even for operations with regular memory access patterns. Using the feed-forward design model resolves those dependencies and converts the same loop to a fully pipelined loop with an II of one. It also enables the offline compiler to use a prefetching LSU for one of the three global load operations with a regular memory access pattern, increasing the maximum global memory bandwidth of the kernel from 630 MB/s to 3130 MB/s. These changes result in up to 65× speedup compared to the single work-item baseline. Following the same trend, the Back Propagation benchmark benefits from this design in the same way: the main loop that degrades the performance of the kernel in the single work-item version, with an II of 416, transforms into a pipelined loop with an II of one. This decrease in II increases the maximum global memory bandwidth used by the kernel and results in a significant 44× speedup over the single work-item baseline. The other three benchmarks also benefit from this model in the same way. Note that the baseline version of the Needleman-Wunsch (NW) benchmark carries a true MLCD inside the main loop of the kernel. However, this LCD is a read memory operation in iteration K that depends on the write memory operation in iteration K-1. In this case, the LCD can be resolved in the baseline kernel by using a local variable in the private memory of the device: storing the dependency value at the end of each iteration removes the loop carried dependency on the global memory operations, since the kernel can read the same value at the beginning of the next iteration (except the first iteration). Adding this private variable yields a single work-item baseline kernel with no MLCD; we can then apply the feed-forward design model to get up to 50× speedup by decreasing the II of the main loop and increasing the global memory bandwidth of the application. While the feed-forward design model enables removing MLCDs and increasing the maximum global memory bandwidth of the kernel, it also enables using multiple producers/consumers to increase the concurrency among the instructions in the application. However, a resource utilization overhead is associated with this increase in concurrency, so it is crucial to analyze the profiling data before increasing the number of producers and consumers. In this work, we use the Intel OpenCL-to-FPGA profiler to analyze the throughput and execution time of each kernel. To avoid high resource utilization overhead, we only instantiate multiple versions of the producer and consumer kernels for the kernel with the dominant execution time in the application.
This rules out kernels used only once for initialization at the beginning of application execution. We also tried multiple values for the number of producers and consumers across multiple applications. The results indicated that having more than two producers and two consumers per kernel in the design results in high resource utilization overhead and either insignificant speedup or even throughput degradation due to memory congestion among concurrent memory instructions. Figure 4 shows the speedup from having two memory and two compute kernels (M2C2) alongside the resource utilization overhead. In this part, we only duplicate the memory and compute kernels for the kernels that dominate the execution time of the application in the feed-forward design model. We compare the speedup to the feed-forward design baseline, which indicates that the M2C2 version is much faster than the single work-item baseline model in most applications. Results show an average of 39% speedup over the feed-forward design model baseline, with a 31% average increase in logic utilization and a 26% average increase in the number of BRAMs used by the implementation, also compared to the feed-forward design baseline. For the Pagerank and Back Propagation benchmarks, the profiling data from the feed-forward design baseline indicates highly optimized memory operations with high global memory bandwidth utilization; this characteristic hinders further performance improvement from using multiple memory and compute kernels. It is also worth mentioning that the results from the different sections of the experiments show no obvious increasing or decreasing trend in the maximum frequency of the final implementation across the different versions of each application.

A case study with vector variable type - While using pipes to enable the feed-forward design model, we also tried to improve memory bandwidth utilization by using vector-type operations. Vector-type operations can potentially decrease the number of memory read and write requests and data transfers. The speedup from using vector-type variables is highly dependent on the memory access pattern of the memory operations and the utilized memory bandwidth of the design. For instance, using vector-type operations we were able to improve the throughput of the FW benchmark by 3×, while it degraded the performance of the MIS benchmark significantly. Unfortunately, we could not explore this optimization further on all of our experimental benchmarks due to an internal flaw in the Intel OpenCL-to-FPGA SDK: it produces an internal error when the feed-forward design combines pipes with vector-type memory operations and data transfers. We informed Intel about this compiler flaw and received confirmation.

Microbenchmarks - In the last part of this work, we designed two sets of automatically generated microbenchmarks to explore the impact of two features of the baseline kernel on the performance of the feed-forward design model. The first is the access pattern of the kernel's load instructions, and the second is the divergence among different iterations of the main loop in the single work-item kernel. The first set of microbenchmarks targets memory access patterns. We use two kernels with no divergence among main loop iterations, eight load instructions from global memory, and eighty arithmetic operations (i.e., an arithmetic intensity of 10).
These two kernels differ only in the behavior of their load instructions. The first benchmark in this set, called M_AI10_R, has load instructions with regular memory access patterns, and the second, called M_AI10_IR, has load instructions with irregular memory access patterns. The second set of microbenchmarks targets the divergence among different iterations of the main loop in a single work-item kernel. To this end, we designed two kernels with the same characteristics as the first set, but added an inner for loop with a different trip count for each iteration of the main loop, together with one if statement inside it, to introduce divergence into the first set of microbenchmarks. To further show the impact of the feed-forward design model on kernels with a DLCD, we also added a reduction operation inside the inner loop to create data dependencies among its iterations, and we decreased the number of arithmetic operations inside the kernels to increase the impact of divergence on the kernel's total execution time. As in the first set, one microbenchmark, called M_AI6_for-if_R, has load instructions with regular memory access patterns, and the other, called M_AI6_for-if_IR, has load instructions with irregular memory access patterns. Table 3 shows the impact of the feed-forward design model with two producers and two consumers on these sets of microbenchmarks. From the memory access pattern point of view, the results suggest that kernels containing load instructions with regular memory access patterns tend to benefit more from the feed-forward design model than those containing irregular load instructions. This is due to higher memory contention among concurrent irregular load instructions in the feed-forward design with multiple producers, which leads to lower memory bandwidth utilization. Moreover, the feed-forward design model benefits the kernels with divergence and a DLCD more than the first set of microbenchmarks. The baseline versions of these microbenchmarks have low memory bandwidth utilization due to a more complex control flow graph and the presence of a DLCD compared to the first set. Using the feed-forward design model removes the DLCD from the producer kernel and increases the concurrency among memory instructions by having two producer and two consumer kernels. These changes result in significantly higher memory bandwidth utilization and, hence, better execution time.

RELATED WORK

Zohouri et al. [21] and Nourian et al. [13] studied several optimization techniques on applications from the Rodinia benchmark suite and on finite automata traversal, respectively, focusing on performance evaluation and power consumption. Their analyses confirm the performance portability gap when porting a GPU-optimized OpenCL implementation to FPGA and indicate a critical need for FPGA-specific optimizations to reduce this gap. Krommydas et al. [11] performed a similar analysis on several OpenCL kernels, investigating pipeline parallelism on single work-item kernels, manual and compiler vectorization, static coalescing, pipeline replication, and inter-kernel channels. Hassan et al. [9] explored FPGA-specific optimizations in their work. Their benchmarks were chosen from irregular OpenCL applications suffering from unpredictable control flow, irregular memory accesses, and work imbalance among work-items. In their work, they exploit parallelism at different levels, floating-point optimizations, and reductions of data movement overhead across the memory hierarchy.
Several previous works have tried to leverage channels to improve the performance of their implementations by increasing the concurrency among instructions. Sanaullah et al. [15] proposed an empirically guided optimization framework for OpenCL-to-FPGA. They leveraged channels to convert a single-kernel implementation into multiple kernels, each working as a separate processing element, using channels for data communication among the kernels. However, their analysis indicates that using channels in their implementation can result in lower performance, mainly due to the data dependency among kernels and the need for synchronizing data paths. Wang et al. [19] leveraged task kernels and channels to design a multi-kernel approach that reduces lock overhead; their work mainly focused on data-partitioning workloads. Yang et al. [17] used channels to implement a specific molecular dynamics application. In more recent work, Liu et al. [12] proposed a compiler scheme to optimize different types of multi-kernel workloads. They introduced a novel algorithm to find an efficient implementation for each kernel that balances the throughput of a multi-kernel design. Additionally, they explored bitstream splitting, separating multiple kernels into more than one bitstream to enable more optimizations for individual kernels.

CONCLUSION

In this work, we proposed guidelines for using a feed-forward design model based on pipes to improve the performance of OpenCL codes running on FPGA. We showed the code transformation steps that convert OpenCL kernels to a feed-forward design model and introduced the limitations of its applicability. By analyzing the results of our experiments, we found that this design can improve the performance of single work-item kernels by up to 86× when multiple producer and consumer kernels are instantiated. To continue this work in the future, we plan to investigate more automatically generated microbenchmarks to identify further baseline kernel features that affect the speedup of the feed-forward design model.
2022-08-30T09:01:21.057Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "d91231cc2aec6c6346d68d43f197147fe1ed1994", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d91231cc2aec6c6346d68d43f197147fe1ed1994", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
209036399
pes2o/s2orc
v3-fos-license
Memetic chicken swarm algorithm for job shop scheduling problem

This paper presents a Memetic Chicken Swarm Optimization (MeCSO) to solve the job shop scheduling problem (JSSP). The aim is to find a better solution which minimizes the maximum completion time, also called the makespan. In this paper, we adapt the chicken swarm algorithm, which takes into consideration the hierarchical order of the chicken swarm while seeking food. Moreover, we integrate the 2-opt method to improve the movement of the rooster. The new algorithm is applied to some instances of the OR-Library. The empirical results show the effectiveness of MeCSO compared to other metaheuristics from the literature in terms of run time and quality of solution.

INTRODUCTION

The job-shop scheduling problem (JSSP) was formulated for the first time by Muth and Thompson in 1963. The JSSP is one of the NP-Hard problems [1] and the best known of the classical scheduling problems in the context of manufacturing [2], where it helps to improve the competitiveness of many companies and organizations. The main purpose of the job-shop scheduling problem is to find a schedule which minimizes the time required to complete a group of jobs (the makespan).

JOB-SHOP SCHEDULING PROBLEM

The JSSP can be briefly introduced [17] as a sequential allocation of a production schedule for a given set of jobs and resources that optimizes the completion time of all jobs, which helps to minimize the makespan. The makespan (the maximum job completion time) C_max is the duration between the completion time of the last job and the starting time of the first job:

C_max = max_{i,j} (t_{ij} + p_{ij})    (1)

where t_{ij} denotes the starting time and p_{ij} the uninterrupted processing time. The JSSP can be formulated by assigning a set of n jobs J = {J_1, ..., J_n} to a set of m machines M = {M_1, ..., M_m}; each machine can process at most one operation at a time. As well, each job consists of a set of operations O_{ik}, which contains m operations, where i denotes the job of a specific operation and k represents the current machine M_k. Each operation must be processed during an uninterrupted period of time on a given machine. In the JSSP, both the order and the uninterrupted processing time must be taken into consideration. A schedule as a solution for the JSSP can be modeled as a vector of a sequence of operations (C_{11}, ..., C_{ji}, ..., C_{nm+1}); the main goal is then to find the minimum time of all processes. The problem is formulated as follows:

min C_{nm+1}    (2)

subject to

C_{kl} ≤ C_{ji} − d_{kl};  j = 1, ..., n;  i = 1, ..., m;  kl ∈ P_{ji}    (3)

The constraint (2) minimizes the finish time of operation o_{nm+1} (the makespan). The constraint (3) represents the fact that the precedence relations between operations must be respected. The constraint (4) describes that each machine can process only one operation at a time. The constraint (5) guarantees that the finish times are positive.

The remainder of this paper is organised as follows: Section 2 presents the literature review of the problem. Section 3 describes the proposed memetic-CSO algorithm. Section 4 presents the results of the experimental study. Section 5 gives a discussion of the empirical results. Finally, Section 6 gives the conclusion and the prospects for further work.

FORMULATION OF THE PROBLEM

In the job-shop scheduling problem (JSSP), the solution can be depicted as a sequence of n × m operations which optimizes the completion time of all jobs and thus helps to find a schedule with minimum makespan.
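To make this encoding concrete, the following C sketch (our illustration; the struct fields and function name are assumed, not taken from the paper) evaluates the makespan of a schedule given as a precedence-feasible sequence of operations:

#define N_JOBS 3
#define N_MACHINES 3

typedef struct { int job; int machine; int time; } Op;

/* Makespan of a schedule encoded as a precedence-feasible sequence of
 * operations: each operation starts as soon as both its job and its
 * machine become free. */
int makespan(const Op *seq, int n_ops) {
    int job_ready[N_JOBS] = {0};       /* earliest start time per job */
    int mach_ready[N_MACHINES] = {0};  /* earliest start time per machine */
    int cmax = 0;
    for (int k = 0; k < n_ops; k++) {
        const Op *o = &seq[k];
        int start = job_ready[o->job] > mach_ready[o->machine]
                  ? job_ready[o->job] : mach_ready[o->machine];
        int finish = start + o->time;
        job_ready[o->job] = finish;
        mach_ready[o->machine] = finish;
        if (finish > cmax) cmax = finish;  /* quantity minimized in (2) */
    }
    return cmax;
}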
Let's consider the following example with m = 3 machines and n = 3 jobs, where J = {Job0, Job1, Job2}. The representation of the matrix will be as below: the first line contains the operation number, the second line contains the job number, the third line contains the sequence number, the fourth line contains the machine number, and the last line contains the processing time of each operation. As indicated in the Gantt chart representation in Fig. 1, the solution S = {0, 6, 3, 4, 1, 5, 2, 7} is given by a permutation of a set of operations on each machine; in this example the minimum makespan is Cmax = 11. In this paper, the chickens search for food in a set of solutions S defined as the search space.

CHICKEN SWARM OPTIMIZATION

The Chicken Swarm Optimization (CSO) was introduced by Meng, X.B. et al. [18] and inspired by the behavior of a chicken swarm while searching for food. Each swarm is divided into several groups, each of which comprises one rooster, hens, and chicks. The hierarchical order in the swarm is established by the fitness value. We denote the number of roosters, hens, chicks, and mother hens by RN, HN, CN, and MN. The position update equation of the rooster can be formulated as:

x_{i,j}^{t+1} = x_{i,j}^t × (1 + Randn(0, σ²))

where Randn(0, σ²) is a Gaussian random number with mean 0 and variance σ². The rooster index k is randomly selected from the roosters' group, and f is the fitness value of the corresponding x. The position update equation of the hen can be formulated as below (a compact C sketch of this continuous update is given at the end of this section):

x_{i,j}^{t+1} = x_{i,j}^t + S1 × Rand × (x_{r1,j}^t − x_{i,j}^t) + S2 × Rand × (x_{r2,j}^t − x_{i,j}^t)

with S1 = exp((f_i − f_{r1}) / (|f_i| + ε)) and S2 = exp(f_{r2} − f_i), where Rand ∈ [0, 1], r1 is the index of the rooster, and r2 is the index of a random chicken from the swarm (where r1 ≠ r2). Finally, the position update equation of the chick is formulated in [19] as follows:

x_{i,j}^{t+1} = W × x_{i,j}^t + FL × (x_{m,j}^t − x_{i,j}^t) + C × (x_{r,j}^t − x_{i,j}^t)

where W is a self-learning factor for the chicks, FL ∈ [0, 2] is a randomly selected parameter referring to the relationship between the chick and its mother with index m, where m ∈ [1, N], and C is a learning factor from the rooster with index r.

ADAPTATION OF CHICKEN SWARM ALGORITHM TO JOB SHOP SCHEDULING PROBLEM

In the discretization of the original version of the chicken swarm algorithm to solve the job shop scheduling problem, the operators are redefined: the subtraction ⊖, the multiplication ⊗, and the addition ⊕ used in the original version [19]. Furthermore, we used the uniform crossover (UX) [20] in the position update equation of hens and chicks for the movement towards the leaders of the groups, and the sequential constructive crossover (SCX) [21] to simulate the movement towards the neighbors. The ⊖ operator represents the crossover operator, the ⊗ operator stands for applying the chosen crossover to the equation, and the ⊕ operator indicates that the randomly chosen crossover is applied to the movement. The application of UX and SCX ensures the competition between groups in the swarm. As well, we integrate the 2-opt neighborhood operator to realize the self-improvement mechanism in the position equations of the roosters and the chicks. In this new adaptation, each schedule of a group is chosen randomly. The MeCSO pseudo-code is presented in Algorithm 1.

Default parameters

Table 1 shows the parameter values used in the new adaptation MeCSO. We executed different tests on the instances Abz5 and Orb1 in order to choose the values which guarantee good results and convergence towards the global optimum.
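As referenced above, here is a compact C sketch of the continuous hen update (our illustration under assumed names; the discrete MeCSO adaptation replaces these arithmetic operators with the crossover operators described earlier, and rand01() is an assumed uniform generator):

#include <math.h>
#include <stdlib.h>

static double rand01(void) { return (double)rand() / RAND_MAX; }

/* Continuous CSO hen update: move towards the group rooster r1 (weight S1)
 * and a random swarm member r2 (weight S2). eps avoids division by zero. */
void hen_update(double *x_i, const double *x_r1, const double *x_r2,
                double f_i, double f_r1, double f_r2, int dim) {
    const double eps = 1e-12;
    double s1 = exp((f_i - f_r1) / (fabs(f_i) + eps));
    double s2 = exp(f_r2 - f_i);
    for (int d = 0; d < dim; d++)
        x_i[d] += s1 * rand01() * (x_r1[d] - x_i[d])
                + s2 * rand01() * (x_r2[d] - x_i[d]);
}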
We applied MeCSO to some instances of the OR-Library; Table 2 summarizes the results obtained over 20 runs. The first column lists the different instances from the OR-Library, the second column indicates the best known solution (BKS), the third column gives the average of the best found solution δ_avg, and the remaining columns report the measures used to assess the quality of the solution. The proposed algorithm MeCSO finds the best-known solution for about 51.08% of all tested instances, where BKS is the best known value and δ_avg is the average of the best found solution. The proposed algorithm MeCSO appears promising for solving the JSSP in a reasonable time compared to the GB algorithm [23], as represented in Fig. 2. Furthermore, the algorithm obtains good results in terms of the global optimum compared to other algorithms from the literature, such as [24] and [25], as represented in Table 3, and the GB algorithm [23], as represented in Fig. 3.

CONCLUDING REMARKS

In this paper, we proposed a Memetic Chicken Swarm Optimization algorithm based on the original version of Chicken Swarm Optimization (CSO) and the 2-opt mechanism in order to solve the job shop scheduling problem. The empirical results show that the MeCSO algorithm is more efficient at solving this type of problem than other algorithms from the literature, such as the GB algorithm and GA, in terms of the quality of solutions and the computing time. In further research, we suggest integrating simulated annealing with the chicken swarm algorithm to ensure the redistribution of the swarm.
2019-02-19T14:08:40.477Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "6ef8a7d34d3a554571201bc5a861510f1e06f050", "oa_license": "CCBYSA", "oa_url": "http://ijece.iaescore.com/index.php/IJECE/article/download/14966/12872", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "501ed52c05459ce0acb0527c7f77046fe5af2a08", "s2fieldsofstudy": [ "Business", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
225693556
pes2o/s2orc
v3-fos-license
Anterior Spinal Fusion Using Autologous Bone Grafting via the Lateral Approach with Posterior Short-Range Instrumentation for Lumbar Pyogenic Spondylitis with Vertebral Bone Destruction Enables Early Ambulation and Prevents Spinal Deformity

Introduction Pyogenic spondylitis of the lumbar spine markedly decreases the ability to perform activities of daily living and causes severe low back pain. The challenge is to relieve low back pain, restore the performance of activities of daily living earlier, and prevent post-infection sequelae; conservative treatment with antibiotics is the mainstay of treatment. Methods In the present study, patients who were unable to walk following lumbar pyogenic spondylitis even in the subacute phase after successful infection control, showing bone defects extending from the endplate into the vertebral body on CT, were treated either with posterior percutaneous short-range instrumentation and anterior autogenous bone grafting (group S, n = 10) or with conservative treatment alone (group C, n = 10). Acute cases with an absolute surgical indication owing to paralytic symptoms and mild cases who could walk with antibiotic administration were excluded. The two groups were compared regarding the post-treatment change in C-reactive protein level, the duration of bed rest, and post-infection local spinal deformities (local scoliosis angle in the coronal plane and local kyphosis angle in the sagittal plane). Results Compared with group C, group S took a significantly shorter time for the C-reactive protein level to return to normal and required a significantly shorter duration of bed rest. Furthermore, surgery prevented the formation of kyphosis and scoliosis, while group C developed local kyphosis. Conclusions The minimally invasive surgical method of posterior percutaneous short-range instrumentation and anterior autogenous bone grafting effectively enables early control of pain and maintenance of locomotive function and prevents spinal deformity in patients with lumbar pyogenic spondylitis in the subacute phase with advanced vertebral bone destruction.

Introduction

Lumbar pyogenic spondylitis results in severe low back pain, which necessitates bed rest and decreases the ability to perform ADL. In prolonged pyogenic spondylitis, the intervertebral disc narrows and vertebral body deformation often results in spinal deformity 1). While surgery is reserved for those who are resistant to conservative treatment, treatment for pyogenic spondylitis basically comprises bed rest and infection control via the administration of antibiotics 2). Although the usefulness of surgical treatment for pyogenic spondylitis has been shown 1,3,4), there is no clear standard for surgical indications and procedures, and few studies have evaluated the appropriateness of staging and procedures. In the present study, we evaluated the significance of anterior fusion using autologous bone grafting via the lateral transpsoas approach after posterior short-range spinal stabilization in the prone position, performed after successful infection control, for lumbar pyogenic spondylitis with advanced vertebral bone destruction in the subacute phase.

Patients

The study population comprised 20 patients with lumbar pyogenic spondylitis with low back pain and movement difficulty. In all patients, MRI showed inflammation of the upper and lower vertebral bodies and endplates, and CT showed bone defects extending from the endplates into the vertebral bodies (Grade III as described by Pee YH, et al. 5)).
Conservative treatment was administered, which consisted of identification of the causative bacteria and antibiotic administration. The participants had prolonged low back pain and could not walk even in the subacute phase. When the infection had been controlled (judged by normal body temperature, normal white blood cell count, and decreasing C-reactive protein [CRP] concentration), we presented two options (change to surgical treatment or continued conservative treatment) to the patients, and each patient chose one. The group that underwent surgery (group S) and the group that received conservative treatment alone (group C) were compared regarding the time taken to regain the ability to walk (restoration of gait function was defined as the ability to walk 10 meters or more without assistance or a walker) and the time from the initial examination to the return of the elevated CRP concentration to a normal level of less than 0.3 mg/dL. The groups were also compared regarding CT changes in local vertebral morphology (coronal and sagittal) at admission, immediately after surgery, and at more than one year after admission. Patients with chronic inflammatory diseases, acute cases with an absolute surgical indication owing to paralytic symptoms, and mild cases who could walk were excluded. The demographic data of each group are shown in Table 1.

Surgical treatment

The patient was initially placed in the prone position, and percutaneous pedicle screws were inserted to achieve stabilization from the posterior aspect in situ. The screws were inserted into one vertebra above and one vertebra below the affected vertebra. The affected vertebral body was skipped and fixed in cases where the infection could not be completely controlled by antibiotics and in cases with large bone defects. The patient was then repositioned to the lateral position, and the anterior intervertebral disc and bone were scraped using a nerve monitoring device and a retractor for extreme lateral interbody fusion (XLIF) via the transpsoas approach 6), followed by autologous bone grafting (Fig. 1A). Graft bones were collected tricortically from the iliac wing on the entry side and had the same diameter as the vertebral body and a width of approximately 15 mm (Fig. 1B). The bone graft was inserted into the space surrounded by the upper and lower sliders (Fig. 1C), and an anterior longitudinal ligament retractor or elevatrium was inserted into the disc and used as a blocker in front of the bone graft.

Conservative treatment

Conservative treatment comprised antibiotic administration, rigid corset application, and rehabilitation under the guidance of a physiotherapist.

Rehabilitation and post-treatment therapy

In each group, walking exercise was started based on each patient's tolerance of the pain. The time at which bed rest ended was defined as the timepoint at which the patient was able to walk 10 meters or more without assistance or a walker. Bone morphogenic protein-2 and teriparatide preparations were not used. The posterior percutaneous pedicle screws that were inserted to skip the affected vertebrae were removed after bone fusion was confirmed on imaging.

Spine morphometry

CT images were obtained at the time of admission, immediately after surgery, and when healing had been achieved.
The angles between the upper and lower vertebral endplates of the affected vertebral body in the center slice of the vertebral body in the sagittal and coronal planes (local kyphosis angle and local scoliosis angle) were measured.

Statistical analysis

The Mann-Whitney U-test was used for comparisons between the two groups, and p-values of less than 0.05 were considered statistically significant.

Results

There were 10 patients in group S and 10 patients in group C. There were no significant differences between the two groups in age, sex ratio, or level of the infection (Table 1). In group S, antibiotics had been administered for more than two weeks at the time of surgery, and the detection of bacteria in the surgical lesion was negative in all cases. The total bleeding volume during both the anterior and posterior procedures was 38.5 ± 18.7 g, and the surgery time was 117.8 ± 16.9 min (including the time taken to reposition from the prone position to the lateral position). One patient in group S incurred an iliac wing fracture that caused local pain but had healed at two months postoperatively. One patient in group S had transient sensory impairment of the thigh. No patient developed vascular or retroperitoneal organ damage, recurrence, or wound infection. Fig. 2 shows the images from a patient in group S (case S-1), while Fig. 3 shows the images from a patient in group C (case C-1). The time taken for the elevated CRP level to decrease to less than 0.3 mg/dL was significantly shorter in group S (4.5 ± 1.7 weeks) than in group C (10.2 ± 7.0 weeks; p = 0.005; Fig. 4A). The bed rest period in group S (8.4 ± 8.5 days) was significantly shorter than that in group C (28.5 ± 7.1 days; p = 0.001; Fig. 4B). These results suggested that the infection and inflammation resolved more rapidly in group S than in group C, which enabled earlier ambulation in group S. The local kyphosis angle at admission was 2.3 ± 9.8° in group S and 1.9 ± 8.4° in group C. In group S, the local kyphosis angle was −1.9 ± 11.3° immediately after surgery and 1.8 ± 11.1° when healing had been achieved. In group C, the angle was 13.9 ± 7.9° at the time of healing, which was significantly higher than that at the time of hospitalization (p = 0.005; Fig. 5A). The change in local kyphosis angle during treatment was significantly smaller in group S (−0.5 ± 6.0°) than in group C (12.0 ± 7.1°; p < 0.001; Fig. 5B). Similarly, the local scoliosis angle at admission was 6.3 ± 4.6° in group S and 4.0 ± 3.8° in group C. In group S, the local scoliosis angle was 4.9 ± 2.8° immediately after surgery and 5.4 ± 3.2° when healing had been achieved; in group C, the local scoliosis angle was 8.0 ± 8.6° when healing had been achieved (Fig. 5C). The change in the local scoliosis angle during treatment was significantly smaller in group S (−0.9 ± 3.3°) than in group C (4.0 ± 5.6°; p = 0.020; Fig. 5D). These results showed that group C healed with local kyphosis, while group S did not develop this deformation.

Discussion

Pyogenic spondylitis is basically controlled by antibiotics 4). The indications for surgical treatment of pyogenic spondylitis have not yet been established, because the phase of infection, host condition, and causative bacteria are variable. At this facility, even if there is pain in the lower extremities or slight weakness, the presence of an abscess in the spinal canal does not by itself meet the criteria for surgery.
Weakness of the lower extremities below Manual Muscle Test Grade 3 and bladder and bowel dysfunction due to abscess are indications for emergency surgery (posterior decompression and percutaneous long-range posterior fusion surgery). We often wonder how to treat patients whose infections have been controlled but who cannot move due to pain, although acute cases with uncontrollable infection are also an indication for surgery. In this study, we focus on the significance of surgery in cases which could be cured without surgery even in the subacute phase after successful infection control. It is necessary first to control infection and to relieve the pain resulting from instability in order to restore gait function early, as gait disorders from pyogenic spondylitis are thought to result from inflammation and instability. As shown in the present study, conservative treatment of symptomatic lumbar pyogenic spondylitis with bone destruction requires a prolonged treatment period and results in spinal deformity. In general, surgery for pyogenic spondylitis is broadly divided into two methods: irrigation/debridement, and stabilization of the infected lesion. However, the effects and necessity of each method are controversial. Fusion surgery is recommended for patients with pyogenic spondylitis who have neurological disorders, a large bone defect, or severe kyphosis 3,7,8). Furthermore, the method of fixing the unstable infected spine with destroyed anterior support is either posterior long (two or more above and two or more below) stabilization or posterior short-range (basically one above and one below) stabilization with anterior support reconstruction. Debridement of the infected lesion and reconstruction of the anterior strut with autologous bone transplantation, as performed in the present study, enabled the patients to recover their movement ability more quickly owing to rapid relief of pain, and the infection was managed completely. It is unclear whether these outcomes were caused by the effects of stabilization or of bone grafting. Mohamed et al. reported that resolution of spinal infection can be achieved by stabilizing the infected site with posterior fixation without debridement 9). In recent years, percutaneous pedicle screws have been used in minimally invasive spine surgery that preserves the posterior paraspinal tissues, and have also been used to treat pyogenic spondylitis 10). The good outcomes achieved by spine surgery are due to the promotion of bone repair and lesion stability without touching the infected lesion, as lesion instability in pyogenic spondylitis worsens the outcome. If deformation or instability remains, the lesion should be fused with proper alignment, and it is doubtful whether the infected part is fused or functionally mobile after posterior stabilization without bone graft and removal of instrumentation. Percutaneous posterior long stabilization without bone graft sometimes requires an additional removal operation to preserve spinal mobility. An added benefit of the anterior bone grafting in the present study was that the fixation range was shortened, suggesting that sufficient stability was obtained with minimal clinical invasion, and long fusion was avoided. Kyphosis of the lumbar spine greatly affects low back pain 11). In addition, kyphosis increases disability and reduces the ability to perform ADL and quality of life [12][13][14][15].
Our results and a past report indicate that infected lesions with bone destruction heal with kyphoscoliotic deformity 1), and there is a high risk of adverse effects resulting from adult spinal deformity in the future. The adverse effects of a remaining deformity of the lumbar spine on global alignment should be considered, and surgical treatment to maintain appropriate spinal alignment would be valuable in order to prevent deterioration of the quality of life after pyogenic spondylitis. In the present study, we performed autologous bone grafting into lesions with a bone defect to reduce deformed healing and promote bone fusion. Similarly, Madhavan et al. reported good results without postoperative kyphosis in eight patients who underwent lateral surgery with autologous bone grafting and posterior instrumentation 16). In the present case series, iliac bone was trimmed in the form of an intervertebral cage, and the bone graft was safely inserted into the optimal position by using sliders and an anterior longitudinal ligament retractor for XLIF. The pros and cons of implanting artificial biomaterial into the infected area have long been debated. Korovessis et al. reported satisfactory results with anterior insertion of a titanium mesh cage and posterior instrumented fusion 17), and Blizzard et al. reported the use of a cage for XLIF instead of the strut bone 18). Pee et al. reported that image evaluation showed a lower subsidence rate in patients treated with cages vs. autologous bone 5), and anterior fixation of infected foci with a cage achieves clinical outcomes equivalent to fixation with autologous bone. As tricortical iliac bone extraction involves considerable pain and invasion, artificial bone grafts may need to be considered in the future. The prognosis may vary depending on the infection severity and patient status at the time of admission. The surgical method described in the present study was minimally invasive, achieved good outcomes regarding operation time, blood loss, and early ambulation, and can be applied even in older adults and patients in poor general condition. The present study had limitations, as it was a retrospective study with a small number of cases. The present findings require confirmation in a future large-scale prospective study. For the present surgical method, the maximum size of the bone defect and the minimum strength of spinal fixation remain unclarified; further biomechanical research is needed.

Conclusion

Posterior percutaneous short-range instrumentation and anterior spinal fusion using autologous bone grafting via the lateral transpsoas approach for pyogenic spondylitis is a safe and minimally invasive procedure that improves the ability to perform ADL early in the recovery period and prevents spinal deformity.

Conflicts of Interest: The authors declare that there are no relevant conflicts of interest.

Ethical Approval: This study is retrospective and anonymous. No financial burden or physical invasion was added to patients.
2020-06-25T09:07:36.775Z
2020-06-18T00:00:00.000
{ "year": 2020, "sha1": "4292869cbff1329b28c75130ac64071e753d1724", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/ssrr/4/4/4_2020-0049/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5fec5032a403846e85c648c1abd46041658ef42", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211564712
pes2o/s2orc
v3-fos-license
“Sometimes it feels like thinking in syrup” – the experience of losing sense of self in those with young onset dementia

ABSTRACT Purpose: To explore and describe the experience of people having young-onset dementia. Methods: This was a qualitative study that used semi-structured interviews to collect data from nine persons with young-onset dementia (aged 47–65; five men and four women). Data were collected in the spring of 2018. All interviews were conducted at the participants’ choice and in their own homes by one interviewer. The collected data were analysed using the six-stage process of the reflexive thematic analysis model. Results: The analysis revealed three themes: dementia causing loss of control over oneself; becoming a burden to the family while the sense of self disappears; and fearing a humiliating future. Conclusions: The experience of having and living with young onset dementia affected the persons’ thoughts and memory and was experienced through the persons’ loss of personality and sense of self. Thoughts about the future were associated with fear and with the risk of changing into a personality different from the one they had had throughout most of their lives, which they experienced as humiliating.

Introduction

The experience of encountering dementia has been described by people with dementia as loss of control, loss of role, and loss of identity (Clemerson, Walsh, & Isaac, 2014; Spreadbury & Kipps, 2018). Being diagnosed with young onset dementia is considered a disruption of the life cycle, since it is unexpected and out of time with their biography, both to them and to those who know them (Clemerson et al., 2014; Greenwood & Smith, 2016). It is a rare condition, less common than dementia with onset in the later stages of one's life (Prince et al., 2015; Vieira, 2013). Young-onset dementia is defined as dementia with symptom onset before the age of 65 (Draper & Withall, 2016). Recently, there have been discussions in the literature related to the estimation of the number of persons with young-onset dementia (Kvello-Alme, Bråthen, White, & Sando, 2019). However, it is estimated that about 2300 people are living with young onset dementia in Denmark (Jørgensen, 2019). The management of young-onset dementia presents different challenges from those found in dementia among older persons, mainly because affected persons usually still work when their symptoms emerge, thereby incurring greater financial hindrances (Greenwood & Smith, 2016). Therefore, owing to the nature of the condition, changes in job performance or behaviour experienced by those with young onset dementia are not always understood by other people in their surroundings (Clemerson et al., 2014; Evans, 2019). Moreover, it has been shown that people with young onset dementia are often parents of young adults or teenagers, so they usually have family responsibilities (Rossor, Fox, Mummery, Schott, & Warren, 2010), and, owing to their young age, some also still have older and healthy parents. In terms of the impacts of dementia, it has been shown that the condition has implications for one's sense of self-confidence and is strongly associated with disempowerment. A previous study has shown that living with the disease involves feelings of uncertainty and becomes a struggle between self-protection and self-adjustment (Steeman, De Casterlé, Godderis, & Grypdonck, 2006).
The condition creates a need to maintain a sense of being useful to one's family and surroundings, especially when the condition requires the person to cease working. This places enhanced importance on the maintenance of purposeful activities in the early stages of the condition (Roach & Drummond, 2014; Van Vliet et al., 2017). Further, the uncertainty about how the disease may develop seems to cause devastating psychosocial consequences, and it is well known that the whole family experiences a profound sense of loss when the person is diagnosed with young onset dementia (Cabote, Bramble, & McCann, 2015). This occurs not only owing to dementia symptoms but also owing to subsequent changes to lifestyle and roles (Svanberg, Spector, & Stott, 2011). Family members feel like they are being "robbed of their future", and there is also guilt associated with having these feelings towards the person diagnosed with young onset dementia (Svanberg et al., 2011). Having to manage role changes and becoming "like a parent" for the person with dementia seems to change their feelings towards those diagnosed. For instance, spouses tend to develop gradually more protective behaviour and may end up exercising more rigid control over the person with dementia, who thereby may be at risk of feeling controlled and being treated like a child (Wawrziczny, Antoine, Ducharme, Kergoat, & Pasquier, 2016). In corroboration, previous studies have shown that people with dementia experience a struggle for autonomy in their lives and an increasing dependency on others (Clemerson et al., 2014; Johannessen & Möller, 2013; Spreadbury & Kipps, 2019). Based on the previous studies, we deem that there is a need for further knowledge regarding the experience of living with young onset dementia (Clemerson et al., 2014; Spreadbury & Kipps, 2019). Qualitative research, with its ability to provide insights into the subjective experience of lived phenomena, should be well placed to offer answers. Among many stakeholders, this type of knowledge is especially important to health professionals, as it allows them to shift appropriately towards providing person-centred care for these specific types of patients (Kristiansen, Normann, Norberg, Fjelltun, & Skaalvik, 2017; McKeown, Clarke, Ingleton, & Repper, 2010). Thus, this qualitative study aimed to explore and describe the experience of people having young-onset dementia.

Design

This study was conducted as a qualitative study using semi-structured interviews inspired by Kvale and Brinkmann and was designed to obtain detailed information related to the topic under examination (Kvale & Brinkmann, 2015). Data were analysed using Braun et al.'s model for reflexive thematic analysis (Braun, Clarke, Hayfield, & Terry, 2018).

Participants and recruitment

In total, there were nine participants, five men and four women, of whom eight were diagnosed with Alzheimer's disease and one with vascular dementia. See Table I for more details on the description of the participants. They were recruited with the help of dementia consultants, who identified participants who were willing to participate and consulted their families regarding this participation. Inclusion criteria were being under 65 when diagnosed with dementia and being assessed by dementia consultants as able to give consent to participate, both verbally and in writing.
The principles of purposive sampling were adopted to ensure diversity in terms of age, gender, diagnosis, and living arrangements (Moser & Korstjens, 2018). The participants' ages ranged from 47 to 65 years, with a mean age of 58 years. Further, to be able to participate in the study, participants had to demonstrate their understanding of it, so the researcher asked the eligible participants to re-articulate the study's purpose and to describe how they would be able to contribute to it. As a counterpart, some participants wished to have a family member present during the interview to promote a trustful environment and ensure their protection, and this was granted to them.

Data collection

Data were collected in the spring of 2018. All interviews were conducted at the participants' choice and in their own homes by one interviewer, who had experience in homecare nursing for people with dementia. Two of the interviews were conducted as individual interviews, and seven were conducted with a partner present, so either the patient's spouse or another family member was in the room. To ensure the interviews focused on the person with young onset dementia, the family members were briefed that they should avoid helping the patients when they were answering and were asked not to intervene or finish sentences for the participants. The participants with young-onset dementia were assured that they would have all the time needed to articulate and narrate their experiences without being interrupted. A pre-understanding had been established both by the existing literature about the experience of having young-onset dementia and by the interviewer's (LMB) experience of working with people with dementia. This pre-understanding brought to the interview situations knowledge about what to expect from the people with young-onset dementia, especially about how to communicate, how to pose short and easily understood questions, and how to offer patience and time for the person with dementia to answer during the interview. Based on this knowledge, a set of open-ended questions was developed and utilized as an interview guide. As examples, the participants were asked: "Please, tell me: how do you experience dementia in your everyday life?" and "What kind of thoughts do you have about your future?". The interviews lasted between 58 and 112 min. All interviews were recorded on a Dictaphone and transcribed verbatim by the interviewer. The transcribed interviews were complemented with written field notes by the interviewer. The written notes included observations of both verbal and non-verbal behaviours as they occurred, and immediate personal reflections about the interview situation (Phillippi & Lauderdale, 2018).

Data analysis

Reflexive thematic analysis is a qualitative method utilized to identify patterns of meaning across a dataset that provide an answer to the research question (Braun et al., 2018). Braun et al.'s six-stage process of reflexive thematic analysis was used to describe the participants' experiences related to having young-onset dementia. With an inductive approach, codes and themes were developed from the data analysis (Braun et al., 2018). First, data were read and reread several times. In the second step, initial codes were generated using broad codes such as "irritation to oneself" and "embarrassment".
In the third step, the intersection of data, researcher experience, and subjectivity with the research question allowed us to construct themes, mould them, and give them meaning. This process added more detail to the codes, and the codes were combined to construct themes such as "fear of embarrassment in relation to others" and "embarrassment to oneself"; this step ended in a collection of candidate themes and sub-themes. In the fourth step, the candidate themes were revised and reviewed to check how each theme related to the other themes and to the entire data set. Then, in the same step, a thematic map of the analysis was made to illustrate participants' phrasal expressions, and the themes were outlined based on the interpretations of these expressions. In the fifth step, themes were defined and named by identifying the "essence" of what each theme was about. All three authors took part in thorough discussions aimed at identifying and refining themes. The sixth step was writing up the final report, revisiting the research question, notes, and codes, all to ensure that the final themes remained close to the original data and answered the research question with accuracy. The first author was responsible for the first, second, and third steps of the analysis process, and all three authors participated in the final steps of the analysis. Findings were discussed and interpreted in the light of existing research and of Buber's thoughts on I-thou relations as a theoretical perspective in the reflection on some of the findings.

Ethical considerations

Interviewing persons with dementia requires high moral sensitivity (Heggestad, Nortvedt, & Slettebø, 2013). One of the reasons this is required is that people with dementia and their family members are in a vulnerable situation, because dementia affects many domains of the person's life, acting as a threat to individuals' identity, autonomy, and independence (Pesonen, Remes, & Isola, 2011). The interviewer [LMB] had solid knowledge of and experience working with people diagnosed with young onset dementia in homecare nursing, which contributed to an attentive and sensitive approach during the interviews. Participants' wishes and needs for having family members close to them during the interviews were respected. The participants were introduced to the study both verbally and in writing, and they all gave written consent to participate. The ability of the person with dementia to provide informed consent was assessed by the researcher during initial contact. All participants were assured that their participation was voluntary and that they could withdraw from the study at any time without giving any reason. The participants also received assurances of anonymity and confidentiality. The Danish Data Protection Agency approved the study in accordance with the Act on Processing of Personal Data No. 2015-57-0016. Ethical clearance was obtained from a Danish Regional Committee on Health Research Ethics (S-20162000-158), and approval was not required according to Danish law. The study was conducted in accordance with the Declaration of Helsinki.

Results

All participants reported that the overall burden of having young-onset dementia was that the disease affected thoughts and memory, which consequently and severely impacted their sense of self and personality, causing a possible loss of control and sense of self.
Three broad themes emerged from the analysis and were used to describe the recurring topics that account for participants' experience of losing their sense of self and their thoughts on the future while having young-onset dementia: (1) dementia causing loss of control over oneself; (2) becoming a burden to the family while the sense of self disappears; (3) fearing a humiliating future.

Feeling embarrassed

The experience of living with young onset dementia varied from day to day among participants. Every morning when waking up, participants could immediately notice in their minds whether it would be a good or a bad day. Some days they were able to think clearly from the morning, and other days they had the feeling that their brain was thinking slowly. A participant described with a metaphor how the disease affected her, illustrating how the ability to think was becoming difficult with time: "Sometimes it feels like thinking in syrup – it is possible, but it takes a long time, it's sticky and it's difficult …" P9, woman. Having young-onset dementia was experienced by participants as a new process of understanding oneself, and in this process they experienced both a frustration with themselves for not being able to do what they wanted and a desire for the situation to return to how it used to be. In that regard, a participant expressed: "I am not myself anymore. There are so many things I cannot remember, and things are going too fast for me at times". P3, woman. Further, most participants expressed that it was important to them to sustain their personality and stay as normal (in reference to the way they had always been prior to the dementia symptoms) as possible. On that topic, a man expressed: "I don't want to be him-with-dementia. I want to live as normal as possible. I don't want to see others with dementia. I don't want to be put into a box" P5, man. This quote illustrates the participant's need not to let the diagnosis take over the control of his life, his fear of being stigmatized and put aside, and his feelings of personality and sense of self. Participants used different strategies to cope with everyday life. One participant, for example, always used the same parking space when going to town so that she could find her car again; another participant carried bags with the names of the shops he had to go to when shopping, so as to remember where he was going. Moreover, participants said that they did what was possible to sustain their own identities and sense of self for as long as possible by using strategies that prevented the dementia diagnosis from taking over and defining who they were.

Feeling shame

Participants experienced the fact that they were not always able to remember things as frustrating, and they felt this experience was both a torment and a shame in relation to people close to them. They also expressed that they knew their behaviour was changing and that they knew they would sometimes repeatedly ask for the same things and/or repeat their actions over and over. Becoming aware of this caused them to feel frustrated. Regarding this frustration, a male participant said: "I know I repeat myself, and when I notice it, I get so annoyed! I try to pretend I don't notice it because the others don't need to know that I am aware of it". P1, man. The quote indicates that, for this participant, realizing he was performing a repetitive behaviour brought him shame, and that he needed to cover it up to hide his embarrassment.
This was a common experience among participants, who independently expressed that they were well aware of their dementia disease and that they often chose to be open about it when meeting new people. This served as an intentional strategy to minimize the embarrassment and shame that would otherwise come over them when others encountered the repetitive and forgetful behaviours characteristic of their diagnosis. During interviews with a family member present, participants clearly showed nonverbal trust in and signs of dependency on the family member by occasionally turning their head to the family member in search of help to find the missing words in the conversation. This behaviour not only symbolized how participants would nonverbally express their need for help to find the right words for the conversation, but also how they sought help to avoid revealing the embarrassment and shame of not being able to speak for themselves as usual.

Becoming a burden to the family while the sense of self disappears
Feeling like a burden in the marriage
Participants who had a partner explained that hurting their partner was the worst part of having the disease. They expressed how they would easily forget about having dementia and what this meant to their relatives. When they described these situations, it was usually in relation to feelings of fault and a guilty conscience for not being able to help with practical things around the house and not being able to have conversations and intimacy as they used to in their marriage. This made the participants feel like a burden to the family. Participants also expressed awareness that they had to keep a low profile around their spouses or partners, as they knew that their repetitive behaviour and repeated sequences of questions could overburden them, especially when they recognized their spouse's sighing and encouragement to think twice before asking the same questions. A participant illustrated this by saying: "My husband is very understanding, but he is busy. He has become wrinkled, and he looks devastated. It hurts me to see. I think I take up too much space". P2, woman.
This feeling of being a burden to the marriage was obvious when participants described their limitations: not being able to help with cooking and/or housekeeping anymore, or not being able to have conversations as they used to. Further, ceasing to work, apart from its psychological effects, also had financial consequences for the marriage, family life, and routines, and participants experienced this overall lack of contribution to the family as a burden. The feeling of being incapable, added to the perception that they were the ones causing the family's future and possibilities to be taken away, also fed into the feeling of being a burden to the marriage.

Feeling like a burden to the family
All participants had children, and most of them expressed that they had a close relationship with their children, speaking openly and freely to each other about their feelings. They reported having frequent contact with each other, and some participants experienced that the relationship felt even better than before the diagnosis. At the same time, participants were aware that the children were broken-hearted by having a parent with dementia, and causing sadness to their children was perceived as extremely burdensome.
Additionally, the relationship with their children was marked by the uncertain future, which in most families resulted, in turn, in greater confidence between the children and the person with young-onset dementia. Participants were also well aware that, if something needed to be said, it had to be said now, because their memory could disappear in the future. However, not all participants expressed a closer and better relationship with their children; one participant stated that her relationship with her 19-year-old son had become more difficult and that he needed more distance from her because of her dementia. Relationships with siblings, parents, and more peripheral family members seemed to change as well. In this regard, a participant expressed: "My siblings don't visit me much anymore, perhaps to protect me. I guess they think I have a busy everyday life". P2, woman.
Another participant expressed it this way: "Unfortunately, my parents bury their heads in the sand. They keep a distance from us [the participant and her partner]". P3, woman.
Finally, participants who had troubled relationships with their parents and siblings explained that they were afraid they had caused their families to become distant, which also contributed to their feelings of being a burden to their families.

Fearing a humiliating future
Fear of forgetting and being forgotten
The fear of forgetting was an issue of major concern to the participants. Forgetting glasses or keys did not mean much, but the fear of forgetting the important things in life, like family and friends, was devastating. This sentiment was mirrored in their fear of being forgotten: that their children or spouses would eventually forget who they had once been within their relationships, and that this memory would give way to the memory of them as a person with dementia. It was obvious to the participants that their changing behaviours would eventually lead to their dependency on others. Based on the data, we found that these two thoughts, turning into a helpless and dependent person and the risk of being forgotten as the person they used to be, were associated. A participant described that she knew she was at risk of her personality changing, and that she risked becoming dependent on her husband to help her in her day-to-day life. She feared this could lead him to experience caregiver burden and make him ill, and this responsibility was unbearable to her. "In the beginning, I did not want to accept this kind of life. I asked for divorce, but my husband didn't want it. Nevertheless, I signed the papers, and now the decision is his. I set him free. If I get to be dependent on caregivers, I don't want him to feel obligated". P10, woman.
This quote exemplifies how the thought of being a burden to her husband was so humiliating that she had convinced him to live in an open divorce. She preferred to set her husband free and be dependent on other people instead of him. This agreement had given her peace about the future, knowing that she would avoid the humiliation of being a burden to her beloved husband and that she had given him the possibility to leave her with her blessing when the dementia became severe, so that he would remember her as the person she used to be. It was also characteristic that all participants expressed that they took one day at a time and made sure they had a good day every day.
Further, most participants found it hard to acknowledge and talk about the progression of the dementia with their families. For the participants, it seemed as though the future needed to be repressed, despite relatives mentioning attorney letters and insurance matters that needed handling. Nevertheless, participants did think about the future, and their thoughts were usually encumbered with fear of humiliation. Moreover, participants reported that they experienced their future as being robbed from them and as insecure and unsafe, because they did not know how the disease would progress and knew they would not be able to control it. Finally, the prospect of their personality changing into one different from the one they had had their whole lives was experienced as humiliating and contributed to their fear of the future.

Fear of getting lost
The risk of forgetting where they were and of getting lost caused fear in most participants. Stories in the news about disorientated people with dementia being tracked by police and helicopters were well known, and the thought of ending up in a situation like this was described as the most humiliating situation imaginable. To prevent this, all participants carried smartphones when leaving their homes so that they could be monitored by the family through the GPS in the smartphone; knowing they could be located brought them safety and maintained peace of mind for both the participants and their families. They found it hard to understand the public debate about resistance to GPS-tracking for people with dementia. In that regard, a participant expressed: "I cannot understand why I can't just have a chip implanted. I could forget my smartphone. Animals can have a chip. If my family is okay with it-then I just really cannot understand why I can't". P4, man.
The participants reported that a GPS-tracker could give assurance at the sacrifice of some autonomy. The fear of the dementia progressing entailed a risk of getting lost someday, and so the surveillance of GPS-tracking was preferred over the price of possible humiliation.

Fear of a humiliating end-of-life
The fear of ending their life in a nursing home was also expressed as a fear of the future. The thought of being younger than the usual residents and dependent on other people in a nursing home was difficult to face, since nursing homes are usually places where very old people live. For some participants, the way to avoid ending their life in a nursing home had been to consider suicide. A participant narrated that he had savings for euthanasia in Switzerland as an alternative to a nursing home if the dementia became too debilitating: "Life is always to prefer as related to die …, but how much is life worth to prefer …?" P5, man.
He was of the opinion that death was preferable to living with young-onset dementia once the disease progressed and he risked losing his sense of self and ending his life in a nursing home. To him, this was too humiliating. This decision had caused different reactions in his family, but the fear of becoming severely ill from dementia, as he had seen in TV programs about people living with dementia, was too humiliating, and he preferred to end his life with dignity. References to TV programs about patients living with Alzheimer's, and the knowledge about dementia progression formed from what they saw on TV, clearly underlined that the future was associated with fear for all participants.
The possibility of a lonely and humiliating future was difficult for them to put into words, and this may help explain why voluntarily ending life was a choice for some of them.

Discussion
The main findings of the current study underline that having young-onset dementia is experienced as losing control of oneself, becoming a burden to the family while the sense of self disappears, and fearing a humiliating future. This could be interpreted as a slow and painful loss of the sense of self. The findings also provided new insights into participants' thoughts concerning surveillance at the expense of autonomy, and suicide as a way to regain control and autonomy over their lives. The experience of losing the sense of self can be understood as a nuanced feeling related to the struggle of living a life altered by the limitations of the diagnosis, which is also directly related to forgetfulness and the experience of being a burden to others, as Mazaheri et al. (2013) found in their study. We also believe that this loss of sense of self helps to illustrate the changes that the person with dementia witnesses, and how they cannot control them. The experience of losing the sense of self is also consistent with Harris and Keady's study on the selfhood of younger people with dementia, in which they likewise found a transition in selfhood and identity across various aspects of life because of loss of control (Harris & Keady, 2009). The feeling of losing the sense of self could also be explained by Gjødsbøl's ethnographic research, which illustrates how the diagnosis and treatment of dementia challenge the fundamental principle of autonomy, namely that people with dementia are capable of making informed and rational choices about their own medical condition (Gjødsbøl & Svendsen, 2018). They found that, in the consultation, clinicians feel obliged to acknowledge the concerns articulated by relatives, and that the person with dementia is not necessarily part of the conversation at counselling. Thus, if the person with dementia is perceived as someone who is not able to speak up for him/herself, the sense of self is at risk of being lost as the disease progresses. Our results also showed that participants' knowledge of the risk of progressively losing awareness in the future because of the disease contributed to the fear of losing the sense of self. Further, the risk of not being recognized as the person you had been your whole life, and the risk of being forgotten as the person you used to be, contributed to losing the sense of self. Martin Buber's (1997) philosophy on I-Thou and I-It relationships can explain what it means to lose oneself when the relationship between the subject and how they are treated is reduced to I-It. An "I-It" is someone who is talked about as an object; an "I-Thou" is someone who is talked to as a person (Buber, 1997). Losing the ability to speak up for oneself can make the person with young-onset dementia feel like an "I-It" when they are no longer able to keep up with conversations; instead, they become someone who is talked about in the room even though they are still present. Turning to another part of the results, concerns about being a burden to relatives when having dementia have also been found in previous research, and being burdensome has been shown to be of great concern to persons with dementia (Benbow & Kingston, 2016; Read, Toye, & Wynaden, 2017).
Our results included surprising findings related to the desire to avoid becoming a burden to close relatives, as some participants revealed thoughts of considering suicide or of agreeing to "living in an open divorce", as one participant expressed it. A possible explanation could be participants' fear of losing themselves, reflected in their loss of control and loss of dignity, factors that an interpretative systematic review found to be motivations for patients' wish to hasten death (Monforte-Royo, Villavicencio-Chávez, Tomás-Sábado, Mahtani-Chugani, & Balaguer, 2012). They found that a "wish to hasten death" was a way for persons with dementia to reduce their suffering related to being the one causing a burden on the family, and also a way to relieve the family of the burden of care, so that they would not have to witness the progressive deterioration. Dementia is an uncertain and unpredictable disease, and there are risks attached to its progression that challenge the person's feeling of autonomy: the risk of feeling useless, of being a burden, and of not being able to do anything unaided. A previous philosophical essay on autonomy and its competencies points out that autonomy is the right to determine for oneself one's own interests, goals, and values, and one's own conception of a good life, free from unwarranted interference (Atkins, 2006). In the current study, specific strategies such as considering suicide or living in an open divorce could be ways for these persons with dementia to determine their future for themselves in order to maintain control and autonomy over their lives. The findings of our study also showed that persons with young-onset dementia, in spite of their cognitive impairment and memory loss, are able to speak about their experiences of living with dementia, their coping strategies, their fears, their needs, and their wishes for the future. They were able to describe, in their own words, what it is like to live with dementia, a finding that supports the research of Johannessen and Möller (2013). The current study clearly showed that persons with young-onset dementia have their own opinions about their situations and that they desired to use GPS-tracking, while not seeing any ethical issue in doing so. They prefer giving up their privacy in exchange for living with the certainty that they will not experience the humiliation of getting lost. These results further support the findings of a previous study on how technological devices may be able to create a safe and secure environment for both persons with dementia and their relatives (Olsson, Engström, Skovdahl, & Lampic, 2012). This possibility seems to overshadow the potential ethical problems, such as violating the integrity of the person with dementia. David Lyon suggests that surveillance has a dual nature: one side is used for protection, in a "caring" way, whilst the other side is used to regulate behaviour, in a "controlling" way (Lyon, 1995). When participants in our study reported that they could not comprehend why they were not allowed to choose whether to be monitored, it pointed to intriguing questions about the nature and extent of the need for more caring surveillance for people with dementia and its relationship with their feelings of safety and security.
Another important finding of the current study was how participants experienced the changes in their family relations, and how these changes caused feelings of fault and a guilty conscience towards both their marriage and family members. In that regard, participants considered themselves burdens to their families, and some issues appeared too difficult to talk about within the family. Correspondingly, previous research on what it is like to be a family member of a person with dementia found the mirror image of this: family members experienced changed roles and new types of dependency on each other within the family, which were likewise issues difficult to talk about within the family (Busted, Nielsen, & Birkelund, 2019). These changed roles and new types of dependency also resulted in a feeling of caregiver burden, pointing to a need for support that should be provided to the whole family. The combination of the findings from the current study supports the conceptual premise that there is a noticeable need for involving the whole family when caring for persons with young-onset dementia.

Methodological considerations
The interviews were conversational, so not all questions were posed in the same way to all participants; it should also be taken into consideration in the analysis that the settings and the participants' cognitive abilities differed. This could influence the results, since the interviews were carried out very differently from one another. Nevertheless, the interviews were thorough and detailed, which helped to gain a wide understanding of the experiences of those living with young-onset dementia. A limitation of this study could be the small number of participants. However, Malterud, Siersma, and Guassora (2016) identify several items that affect the information power of a sample, including a narrow study aim; purposive, specific sampling of participants; a strong interview dialogue with thorough and rich descriptions of the experience of living with dementia; and the analysis strategy. This study is based on rich information from thorough interview dialogues with purposively and specifically sampled participants living with young-onset dementia. The information power of this study was therefore judged sufficient despite the low number of participants. The analysis was supported by research and established theory, which served to extend the sources of knowledge. Reflexive thematic analysis was chosen to analyse the data. This approach identifies patterns or themes within qualitative data and is not tied to a particular epistemological or theoretical perspective (Braun & Clarke, 2006). Given that the purpose of this study was to explore the experience of living with young-onset dementia, it was beneficial to use reflexive thematic analysis as a method. Another qualitative analysis method, e.g., grounded theory, could have been chosen if the purpose had been to develop a specific theory. This study brings knowledge and theory on living with young-onset dementia to enhance our understanding of people living with this condition. The family members' presence during the interviews could have had both positive and negative impacts on the results. It was a necessary step for meeting these persons with dementia in the interview situation, but it could be considered a limitation of the study.
When a family member was present during the interview, the participants nonverbally showed an obvious head-turning sign, which could also help to explain the loss of sense of self; however, even people without dementia may turn their heads in this way in conversations. The head-turning sign has been found to be a clinical marker of Alzheimer's disease and mild cognitive impairment, and it signifies the patients' dependency on and trust in family members, as the responsibilities that the person with dementia is supposed to carry are shifted to those they trust (Fukui, Yamazaki, & Kinno, 2011). In this study, participants showed dependency on family members when losing the ability to find the right words in conversations or to finish sentences as usual, which could be a sign of a weakening sense of self. The interviewer's experience of working with people with dementia meant that this behaviour was expected, as was the knowledge that interviewing persons with dementia calls for great patience and extra time, since the person with dementia may hesitate for words and repeat narratives. Prolonged engagement and persistence during the interviews required time and patience but made it possible to achieve rich descriptions of the experiences of those living with young-onset dementia, thereby supporting the validity of the interviews. The family members were briefed to avoid helping the participants when they were answering and were asked not to intervene or finish sentences for them; as a result, the participants were to a large extent willing to talk about their experiences, thoughts, and feelings. However, when the topic was sensitive, for example the participants' thoughts about suicide, the presence of the family members was a limitation to the dialogue and to the study. In two cases, it was clear that the participants protected their respective adult son/daughter during the interview situation, and so they hesitated to reveal their thoughts on suicide. As the participants started talking about the topic, just a little exposure already led the family members to start crying, which clearly influenced the extent to which the participants explored and reported the topic. Reflexive objectivity, defined as reflection on one's own contribution as a researcher to the production of knowledge (Kvale & Brinkmann, 2015), was required to increase the reliability of the study. To maintain this reflexivity, the first author, who was responsible for the interviews and the first stages of the analysis process, had to gain insight into their own pre-understandings and be aware of when these appeared during both data gathering and data analysis (Attia & Edge, 2017). Openness and continuous awareness of pre-understandings were maintained at all stages of the research process and were one way to increase the reliability of this study (Kvale & Brinkmann, 2015).

Conclusion
Having and living with young-onset dementia affects thoughts and memory and was experienced as losing one's sense of self. The feeling of losing control over mind and memory caused frustration and irritation towards oneself and was both a torment and an embarrassment in relation to other people. Further, we found that having young-onset dementia changed family relations and caused the participants to feel at fault and to have a guilty conscience towards both marriage and family when not being able to function as usual. These experiences resulted in the perception that they had become a burden to the family.
Thoughts of the future were associated with fear, and the risk of their personality changing into one different from the one they had had throughout their lives was experienced as humiliating. Participants' fear of getting lost was experienced as the worst kind of humiliation possible, and it underlined an increased need for surveillance, which should be carried out with care. It was also shown that persons with young-onset dementia preferred utilizing GPS-tracking and did not see this type of usage as an ethical problem. Since we found reports of suicidal thoughts among people diagnosed with young-onset dementia, future studies are warranted to investigate this topic further and gather more knowledge in this regard.

Implications for practice
The study has important implications for practice. Our results have shown that, when caring for persons with young-onset dementia, there is a need to listen to their voices, in spite of their cognitive impairment and memory loss, since letting them speak about their experiences may assist them in regaining or maintaining a certain level of autonomy in their lives. These results are transferable to other situations and settings where people with early-onset dementia live and act. Further, there is a need for advance care planning before the point at which cognition declines critically, so as to enlighten and empower persons with young-onset dementia about the influence they can have on their future lives even when diagnosed with dementia. This could contribute to patients regaining control and autonomy over their lives and may perhaps help them hold on to their sense of self. Finally, future research should investigate how parents of persons with dementia experience their relationships with their sons/daughters, since this issue has yet to be given attention in academic research on the topic. This type of knowledge could greatly contribute to our understanding of the involvement of the whole family when caring for persons with young-onset dementia. To this end, we believe that family health conversations may be one solution, allowing families dealing with the diagnosis to intervene early in the dementia illness process. This type of approach creates a context where all family members are able to narrate and reflect on each other's stories, thereby increasing their understanding and manageability of the illness experience and its associated consequences (Benzein, Olin, & Persson, 2015). This type of approach could help to decrease the experience of strain and burden for the person with young-onset dementia while also decreasing their feelings of fault and guilty conscience towards their own marriages and families.

ageing, and patients' everyday life with chronic illnesses and diseases. She works with a multidisciplinary, practice-led research approach and takes patients' narratives, everyday life, communication, and culture into account. The research involves qualitative as well as quantitative methods for identifying and developing practice-oriented nursing and treatment for vulnerable patients. Regner Birkelund is a professor of person-centered cancer care. He works in the field of humanistic health research with a focus on qualitative research methods, especially aimed at studying patients' perspectives on their situation as a basis for developing a person-centered practice.
He has initiated a number of research projects with patients' experiences, preferences, and needs in relation to their disease and treatment as the focal point.
TIPE Family of Proteins and Its Implications in Different Chronic Diseases

The tumor necrosis factor-α-induced protein 8-like (TIPE/TNFAIP8) family is a recently identified family of proteins that is strongly associated with the regulation of immunity and tumorigenesis. This family comprises four members, namely, tumor necrosis factor-α-induced protein 8 (TIPE/TNFAIP8), tumor necrosis factor-α-induced protein 8-like 1 (TIPE1/TNFAIP8L1), tumor necrosis factor-α-induced protein 8-like 2 (TIPE2/TNFAIP8L2), and tumor necrosis factor-α-induced protein 8-like 3 (TIPE3/TNFAIP8L3). Although the proteins of this family were initially described as regulators of tumorigenesis, inflammation, and cell death, they have also been found to be involved in the regulation of autophagy and the transfer of lipid secondary messengers, besides contributing to immune function and homeostasis. Interestingly, despite significant sequence homology among the four members of this family, they are involved in different biological activities and also exhibit remarkable variability of expression. Furthermore, this family of proteins is highly deregulated in different human cancers and various chronic diseases. This review summarizes the vivid role of the TIPE family of proteins and its association with various signaling cascades in diverse chronic diseases.
TIPE, the most extensively studied member of this family, is an anti-apoptotic and oncogenic molecule, inducible by the transcription factor nuclear factor-κB, that is associated with the prognosis of different malignancies. It is a 21-kDa cytoplasmic protein that was initially identified in human head and neck squamous cell carcinoma [5-10]. It is expressed in different normal human tissues, with relatively higher levels in the placenta and lymphoid tissues. The open reading frame of this protein bears a sequence in the amino terminus that displays notable homology to the death effector domain II of the cell death regulatory protein Fas-associated death domain-like interleukin-1β-converting enzyme-inhibitory protein (FLIP) [11]. TIPE is associated with the immune regulation of CD4+ T lymphocytes and inhibits autophagy under oxidative stress through a mammalian target of rapamycin (mTOR)-dependent pathway [3,12,13]. Notably, different transcript variants of the TIPE gene were recently listed in the NCBI databank; however, no study has yet described their distinct roles or the factors that regulate their expression. A study by Lowe and group reported TIPE variant 2 as an oncogenic gene product that may regulate different processes in tumor cells such as proliferative signaling, resistance to cell death, and evasion of growth suppressors. The other variants, in contrast, are normally downregulated in cancer (variant 1) or show minimal expression in cancer or normal tissues (variants 3-6) [14]. TIPE1 (tumor necrosis factor-α-induced protein 8-like 1) is a recently identified member of the TIPE family that can act as a cell death regulator. It is regarded as a pro-apoptotic factor with the ability to enhance apoptosis. Currently, there is little information available about the role of TIPE1, and its biological activity under both physiological and pathological conditions remains ambiguous [4,5,15-17]. It was reported to be distributed in different mouse tissues except for mature B and T lymphocytes.
Further, TIPE1 was speculated to be associated with the cardiac decompensation linked with diabetes and to interact with FBXW5 and caspase-8. In addition, different post-translational modifications were also predicted for TIPE1 [2,16]. TIPE2 (tumor necrosis factor-α-induced protein 8-like 2), the third member of this family, is a recently discovered negative regulator of innate as well as cellular immunity, with sizable sequence homology to the other members of the family [18-20]. It is a cytoplasmic protein consisting of 184 amino acids and is expressed preferentially in lymphoid tissues and some non-lymphoid tissues [19,21]. This protein was initially identified as an abnormally expressed gene in the inflamed spinal cord of experimental autoimmune encephalomyelitic mice [22,23]. Further, TIPE2 was found to be expressed in varied cell types such as neurons in the brain and brainstem; hepatocytes; squamous epithelial cells in the cervix and esophagus; glandular epithelial cells in the colon, stomach, and appendix; and transitional epithelial cells in the ureter and bladder [24]. It negatively regulates the functions of the toll-like receptor (TLR) and T cell receptor, and its selective expression in the immune system averts hyper-responsiveness and maintains immune homeostasis [22,23,25]. Further, it is an inhibitor of the nuclear factor κ-light-chain-enhancer of activated B cells (NF-κB) and mitogen-activated protein kinase (MAPK) signaling pathways and contributes to the reduced activation of activator protein-1 (AP-1) and NF-κB [5,26,27]. It also acts as an inhibitor of Rac, a GTPase involved in the promotion of trailing-edge polarization [28]. A recent genome-wide expression profiling analysis revealed TIPE2 to function as an immune checkpoint regulator of inflammation and metabolism. This finding suggests that, during the course of inflammation, the expression of TIPE2 may be downregulated, plausibly due to an altered epigenetic status, which in turn results in the upregulation of lipid biosynthesis genes, mitochondrial respiration, and inflammation [29]. TIPE3 (tumor necrosis factor-α-induced protein 8-like 3), the newest member of the TIPE family, is located on human chromosome 15. It functions as a transfer protein for the lipid second messengers PIP2 (phosphatidylinositol 4,5-bisphosphate) and PIP3 (phosphatidylinositol 3,4,5-trisphosphate) and enhances their levels in the plasma membrane [1,30,31]. This protein is expressed in various human organs and is highly upregulated in several human cancers such as cervical cancer, colon cancer, esophageal cancer, and lung cancer [5,32]. Furthermore, the crystal structures of two members of the TIPE family, namely TIPE2 and TIPE3 from Homo sapiens, have been determined. Both possess a central hydrophobic cavity, proposed as a binding site for cofactors, that is occupied by two long electron densities, which are plausibly phospholipid in nature [3,19,33,34]. Moreover, these phospholipids were observed to share similar binding modes, involving exposure of the inositol head group to the outside and insertion of the lipid tails into the cavity. In addition, molecular interaction studies show that all the lipid molecules interact with critical, positively charged residues, i.e., Arg75 and Arg91 in TIPE2, and Arg181 and Arg197 in TIPE3, indicating a similar binding mode of the phosphoinositides to the TH domain across this protein family [35].
Interestingly, the high-resolution crystal structure of TIPE2 clearly reveals that it possesses a unique, previously uncharacterized fold that gives TIPE2 a structure and topology different from those of the death effector domain (DED). The structure of TIPE2, which comprises around 150 amino acids, is considerably larger than that of the DED, which usually contains a total of around 90 amino acids. Again, the topology of TIPE2 is different from that of a DED: the N-to-C arrangement of TIPE2 is identical to the C-to-N topology diagram of the DED, so the topology of TIPE2 appears to be a mirror image of that of the DED [34]. Additionally, the crystal structure of TIPE from Mus musculus (mTIPE) was also determined. The overall shape of mTIPE bears a resemblance to a water dipper. Its cylindrical domain contains two long electron densities and has dimensions of 48 × 31 × 30 Å, linked to an N-terminal grip-like domain of length ~35 Å that comprises 20 residues. It possesses a hydrophobic cavity with a depth of around 20 Å, a diameter of around 7 Å, and a volume of 837 Å³, which is lined with highly conserved hydrophobic residues, thereby facilitating the binding of hydrophobic cofactors or substrates inside the cavity [3].
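As a rough consistency check of these quoted dimensions (an illustrative back-of-the-envelope estimate, not a calculation from the cited structural study), approximating the cavity as a simple cylinder of the stated depth and diameter gives a volume of the same order as the reported 837 Å³; the somewhat larger reported value is expected, since the real cavity is irregular rather than perfectly cylindrical:

$$V \approx \pi r^{2} h = \pi \times (3.5\,\text{Å})^{2} \times 20\,\text{Å} \approx 770\,\text{Å}^{3}$$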
As aforementioned, different in vitro and in vivo studies have revealed that this family of proteins plays a crucial role in inflammatory responses and tumorigenesis (Table 1; [5]). Interestingly, expression analyses in clinical settings have also demonstrated these proteins to be highly deregulated in different cancers and various chronic diseases (Figure 1). This review therefore summarizes the role of the TIPE family of proteins, their molecular targets, and the associated signaling cascades in different chronic diseases, based on the existing literature.

TIPE Family of Proteins and Cancers
Cancer, which stems from the perturbation of multiple signaling pathways, affects people of all ages and is a major health concern worldwide [5,91,92]. The TIPE family of proteins plays a vital role in carcinogenesis and metastasis through its deregulated expression and function. It has been found to be strongly associated with cancers of the breast, bone, brain, cervix, colon, esophagus, endometrium, liver, lung, stomach, and thyroid. Overall, the potential crosstalk of the four TIPE proteins with various signal transduction cascades in different cancers has been reviewed previously by our group [5].

TIPE Family of Proteins and Inflammatory Diseases
TIPE and TIPE2, the regulators of immunity, have been demonstrated to protect against inflammatory diseases such as atherosclerosis, colitis, and rheumatoid arthritis.

Atherosclerosis
Atherosclerosis is widely known as an inflammatory disease of the arterial wall in which macrophages play an important role. Notably, TIPE2 is highly expressed in resting macrophages and has been found to exert a potent atheroprotective role by regulating macrophage responses to oxidized low-density lipoprotein (ox-LDL). When macrophages lacking TIPE2 were treated with ox-LDL, this resulted in enhanced production of oxidative stress and pro-inflammatory cytokines, as well as activation of the NF-κB, JNK, and p38 signaling cascades. These results clearly implied TIPE2, a newly found inhibitor of atherosclerosis, to be an effective target against this disease [71]. Further, TIPE2 displayed its atheroprotective role through modulation of the phenotypic switching of vascular smooth muscle cells (VSMCs), which plays a vital role in the development of atherosclerosis in response to ox-LDL stimuli. Ox-LDL-treated, TIPE2-deficient VSMCs were found to have lower expression of contractile proteins such as smooth muscle-myosin heavy chain (SM-MHC), smooth muscle α-actin (SmαA), and calponin, whereas their proliferation, migration, and synthesis of cytokines and growth factors increased significantly [72] (Figure 2A). Thus, these findings clearly imply that TIPE2 is an atheroprotective protein that may serve as a potent drug candidate for protection against this inflammatory disease.

Colitis
TIPE2 plays a vital role in regulating inflammatory cell function and commensal bacteria dissemination in dextran sodium sulfate (DSS)-induced colitis.
Lou and group observed that mice with TIPE2 deficiency in the hematopoietic compartment survived longer than wild types upon treatment with DSS. Further, the severity of colitis and colonic damage in TIPE2-deficient mice was notably less, plausibly attributable to the decreased colonic expression of the inflammatory cytokines TNF-α, interleukin (IL)-6, and IL-12. In addition, TIPE2-deficient mice with ameliorated DSS-induced colitis also displayed a weaker systemic inflammatory response together with reduced local dissemination of commensal bacteria [73]. Another study investigated the role of TIPE in DSS-induced colitis, in which TIPE-deficient mice were reported to be more prone to DSS-induced colitis, and the lack of TIPE expression in non-hematopoietic cells was found to play a vital role. In TIPE-knockout mice, a great reduction in body weight, the occurrence of severe diarrhea, rectal bleeding, and increased mortality were observed, exemplifying the role of TIPE in protection against DSS-induced colitis [74] (Figure 2B). Altogether, these two findings indicate that both TIPE and TIPE2 play important roles in the maintenance of colon homeostasis and the prevention and treatment of colitis. However, further in-depth studies are required to clearly understand the exact molecular mechanism(s) of action of these proteins against colitis.
Rheumatoid Arthritis
Rheumatoid arthritis is a chronic inflammatory illness characterized by joint tenderness, joint swelling, and synovial joint destruction, resulting in severe disability and premature mortality [93]. Fibroblast-like synoviocytes (FLSs) play an important role in the pathology of rheumatoid arthritis. The study conducted by Shi and group indicated that TIPE2 increased adjuvant arthritis (AA)-FLS apoptosis through enhanced DR5 expression levels, thereby inhibiting NF-κB activation and promoting caspase activation in AA-FLSs [75]. Further, TIPE2 was found to regulate lipopolysaccharide-induced immune responses in rat rheumatoid arthritis via activation of Rac and phosphorylation of interferon regulatory factor 3. This study depicted TIPE2 to be inversely associated with cytokine gene expression in synovial fibroblasts after lipopolysaccharide stimulation. Thus, TIPE2 plays a negative role in the activation of the Rac signaling pathway, as well as in the initiation of the immune response, via reduced function of pro-inflammatory cytokines [76] (Figure 2C). Thus, using this novel target, TIPE2, therapeutic strategies against rheumatoid arthritis can be designed and used for protection against this disease.

TIPE Family of Proteins and Infectious Diseases
Various studies have evaluated the association between the TIPE family of proteins and different infectious diseases such as hepatitis B, hepatitis C, listeria infection, and liver fibrosis.

Hepatitis B
Hepatitis B virus (HBV)-induced hepatic inflammation affects a vast number of people across the world and is also recognized as a prime cause of hepatic cancer. It has been reported that TIPE2, a regulator of immune receptor signaling, can control HBV-induced hepatitis. Xi and colleagues reported that patients with chronic hepatitis B exhibited remarkably decreased TIPE2 expression in their peripheral blood mononuclear cells (PBMCs) compared to healthy individuals. Further, the expression of TIPE2 negatively correlated with the blood levels of aspartate aminotransferase (AST), alanine aminotransferase (ALT), and total bilirubin, and with the HBV load of the patients, suggesting that TIPE2 is an important marker in HBV-induced hepatic inflammation [79]. In addition, the expression of TIPE2 was found to be relatively higher in acute-on-chronic hepatitis B liver failure (ACHBLF) patients compared to healthy controls, and it positively correlated with total serum bilirubin, the international normalized ratio, and model for end-stage liver disease scores. Additionally, the TIPE2 mRNA level was significantly higher in non-survivors than in survivors among patients with ACHBLF, and in survivors it was found to be progressively reduced alongside signs of recovery. Further, lipopolysaccharide stimulation in ACHBLF patients resulted in reduced levels of IL-6 as well as TNF-α, which displayed a negative association with TIPE2 [80]. Another study investigated the expression of TIPE2 in PBMCs of mice with autoimmune hepatitis (AIH) and its involvement in the pathogenesis of AIH. The results showed that TIPE2 was expressed less in AIH mice, whereas in concanavalin A-induced AIH, TIPE2-deficient mice exhibited enhanced levels of serum ALT, AST, and pro-inflammatory cytokines, and severe hepatic inflammation [77]. Zhang and group reported that the TIPE2 protein level in PBMCs of hepatitis B patients was significantly lower and negatively associated with serum aminotransferase values.
Notably, CD8+ T cells expressing a low level of TIPE2 produced significantly more granzyme B, perforin, and interferon-γ, resulting in an enhanced cytolytic effect [78]. Further, in chronic hepatitis B (CHB) patients, the TIPE2 mRNA level in the immune clearance phase was notably higher than in the immune tolerance phase, indicating that TIPE2 might be involved in immune clearance in patients with CHB. In addition, TNF-α, interferon-γ, and HBV DNA load were also observed to be independently linked with the level of TIPE2 in CHB patients [81] (Figure 2D).

Hepatitis C
Approximately 80% of chronic hepatitis cases are caused by infection with hepatitis C virus (HCV). TIPE2 has been found to play an important role in chronic hepatitis C (CHC) infection. Kong et al. showed that in CHC patients, TIPE2 is significantly downregulated, whereas TLR2 and TLR4 are upregulated, compared to healthy controls. Further, the mRNA expression level of TIPE2 was found to be negatively associated with serum ALT, AST, and HCV RNA levels, as well as with TLR2 and TLR4 mRNA levels, in CHC patients. In addition, treatment of HCV patients with ribavirin and interferon-α led to the upregulation of TIPE2 mRNA and the downregulation of TLR2 and TLR4 mRNA levels [82] (Figure 2D). Taken together, TIPE2 is strongly correlated with hepatitis virus infection, and hence it can be used as a target for developing strategies for the management of hepatitis-infected patients.

Listeria Infection
TIPE was reported to regulate infection with Listeria monocytogenes by controlling pathogen invasion and host cell apoptosis in a Rac1 GTPase-dependent manner. Notably, TIPE-knockout mice were found to be resistant to lethal Listeria monocytogenes infection, and they exhibited a decreased bacterial load in the liver and spleen. In addition, knockdown of TIPE in murine liver cells resulted in enhanced apoptosis, reduced bacterial invasion into cells, and deregulated Rac1 activation [83]. These findings provide insight into the role of TIPE in the pathogenesis of listeria infection, and thus it can be used as a therapeutic target for listeriosis.

Liver Fibrosis
TIPE2 possesses a protective effect against liver fibrosis and hence may serve as a potent target against this disease. TIPE2 diminished liver fibrosis through reversal of activated hepatic stellate cells (HSCs). Xu et al. demonstrated low expression of TIPE2 in CCl4-treated murine primary HSCs and activated HSC-T6 cells. Overexpression of TIPE2 hindered the activation and proliferation of HSC-T6 cells, as well as the expression of c-myc, cyclin D1, and β-catenin, whereas its inhibition displayed the reverse effect [84] (Figure 2E). Thus, owing to its protective effect, TIPE2 displays potential as an effective target against liver fibrosis.

TIPE Family of Proteins in Neuromuscular and Neurodegenerative Diseases
TIPE2 and TIPE1 have been found to exert effects in myasthenia gravis and Parkinson's disease, respectively, and thus can serve as important targets for therapies against these diseases.

Myasthenia Gravis
Myasthenia gravis (MG) is an autoimmune neuromuscular disease, the incidence of which is increasing. TIPE2 has been found to play a role in MG via modulation of autoimmune T helper 17 cell responses mediated by TLR4 [85]. The study showed downregulation of TIPE2 in MG compared to normal controls. Furthermore, TIPE2 was negatively associated with the levels of IL-6, -17, and -21 in the serum of MG patients.
In cultured MG PBMCs, TLR4 activation caused downregulation of TIPE2, whereas RORγt expression and IL-6, -17, and -21 production were enhanced. Nevertheless, overexpression of TIPE2 abrogated these TLR4 activation-induced effects [85] (Figure 2F). Collectively, this study provides evidence that targeting TIPE2, which functions as a negative regulator of immunity, may offer protection against this autoimmune disease.

Parkinson's Disease
Parkinson's disease is a long-term degenerative disorder of the central nervous system. Altered regulation of TIPE1 may contribute to the deregulated autophagy seen in dopaminergic neurons under pathogenic oxidative stress, which is especially observed in post-mortem brains in Parkinson's disease. This protein binds to FBXW5, a tuberous sclerosis complex 2 (TSC2; a negative regulator of mTOR)-binding receptor present in the CUL4 E3 ligase complex, resulting in enhanced autophagy via activation of TSC2 in a Parkinson's disease model. Further, oxidative stress-induced TIPE1 caused stabilization of the TSC2 protein, a reduction in mTOR phosphorylation, and an increase in autophagy [86]. Another study, conducted by Kouchaki and group, attempted to evaluate the association of the serum levels and circulatory gene expression of TIPE2 with the severity of Parkinson's disease by enrolling a total of 43 patients. The results implied that no significant differences in mean serum levels or TIPE2 expression were noted between patients and healthy individuals. They further showed that enhanced serum levels of TIPE2 correlate directly with the age and disease severity of patients with Parkinson's disease. Besides, TIPE2 expression was also found to be strongly linked with the age of the patients [94] (Figure 2G).

The TIPE Family of Proteins and Other Chronic Diseases
Apart from the abovementioned, this newly discovered family of proteins is strongly involved in various other diseases such as choroidal neovascularization, restenosis, and metabolic diseases like diabetes.

Choroidal Neovascularization (CNV)
Choroidal neovascularization (CNV), a pathological condition commonly occurring in ocular diseases, is primarily characterized by vasculogenesis and angiogenesis of the neuroretina, with the retinal pigment epithelium (RPE) as a major target. TIPE2, a negative regulator of immunity, has been found to play a role in CNV, as inflammation and immunity are critical in the early development of CNV. Suo and group conducted a study reporting that TIPE2 is present in human RPE cells in both the cytoplasm and the nucleus and is downregulated under inflammatory conditions, with a subsequent reduction in cell viability. Further, knockdown of TIPE2 resulted in the upregulation of TNF-α, IL-1β, and VEGF, especially under lipopolysaccharide-induced stimulation. As TIPE2 displays potent anti-angiogenic properties and VEGF plays a vital role in the final stage of neovascularization, TIPE2 might take part in CNV formation [87] (Figure 2H). However, comprehensive studies are needed to decipher the underlying molecular mechanisms through which TIPE2 functions and contributes to the development of CNV.

Diabetes
Diabetes is a type of metabolic disease associated with high blood sugar levels. Although TIPE2 plays a key role in inflammatory homeostasis, its exact role in type 2 diabetes mellitus (T2DM) remains unknown. Liu and group reported that TIPE2 is involved in T2DM via modulation of TNF-α.
They observed an increased level of TIPE2 in T2DM patients that was positively associated with hemoglobin A1c and low-density lipoprotein cholesterol, while it negatively correlated with serum TNF-α, IL-6, and hsCRP concentrations in the diabetic patients. Further, treatment with high glucose concentrations resulted in the upregulation of TIPE2 and cytokine secretion in differentiated THP-1 human monocyte cells. Additionally, TIPE2 adenovirus infection reversed the enhanced TNF-α level, whereas treatment with siTIPE2 aggravated the enhanced levels of TNF-α and IL-6 in differentiated THP-1 cells under high-glucose conditions [88]. Again, TIPE serves as a vital component of a signaling cascade linking mesangial cell proliferation and diabetic renal injury. The study conducted by Zhang and colleagues showed that, in response to high glucose, TIPE was upregulated in mesangial cells, and the expression of TIPE was directly correlated with mesangial cell proliferation mediated via an NADPH oxidase-regulated signaling pathway [89] (Figure 2I). The above studies illustrate the critical role of two members of the TIPE family of proteins, namely TIPE and TIPE2, in diabetes and diabetic nephropathy. Hence, they may serve as effective targets against diabetes mellitus and may aid in the development of therapeutic strategies for the prevention and treatment of this metabolic disorder.

Restenosis
Restenosis is a disease characterized by smooth muscle cell hyperplasia and neointimal formation. Zhang and group reported that TIPE2 repressed injury-induced restenosis by inhibiting the proliferation of vascular smooth muscle cells (VSMCs) via modulation of the ERK1/2 and Rac1-STAT3 signaling cascades. The study reported that enforced TIPE2 expression suppressed proliferation and blocked cell cycle progression in VSMCs, while deficiency of TIPE2 induced proliferation of VSMCs and upregulation of cyclin D1 and cyclin D3 [90] (Figure 2J). Therefore, targeting TIPE2 might help in designing novel approaches against restenosis.
Altogether, this family is evinced to have a profound role in different chronic diseases. Interestingly, the functions of TIPE and TIPE2 in different chronic diseases and their modes of action have been studied extensively. However, focus needs to be given to unveiling the role of TIPE in chronic diseases other than cancer. In addition, more comprehensive studies are critical to elucidate the roles of the other two members of this family of proteins, TIPE1 and TIPE3, in the development of different chronic diseases and to unveil their underlying molecular mechanisms. This would help us not only to understand their exact functions, but also to develop novel therapeutic approaches for the prevention and treatment of diverse chronic diseases effectively.

Conclusions
The TIPE protein family presents a novel group of proteins discovered just a decade ago. Expression studies of these proteins show remarkable variability among them. Interestingly, although the proteins of this family were initially depicted as modulators of tumorigenesis, inflammation, and cell death, they have also been found to possess various other functions. For instance, TIPE and TIPE1 function as autophagy inhibitors and activators, respectively, in experimental models of Parkinson's disease. Further, they are involved in the transfer of the lipid secondary messengers PIP2 and PIP3. Members of the TIPE family have been associated with the regulation of immune function and homeostasis and with the development of diverse cancer types.
Most importantly, the members of this family share significant sequence homology but are involved in different biological activities. For instance, TIPE1 exhibits a high degree of sequence homology with TIPE2; despite this common fold, TIPE2 plays a vital role in immune homeostasis, whereas TIPE1 may not play an essential role in immunity. Despite the existing knowledge on this protein family in the literature, much remains to be elucidated. Though this family of proteins plays an important role in carcinogenesis, metastasis, and the development of different human chronic diseases, through either up- or downregulation, its exact molecular functions, detailed mechanisms of action, and the plausible crosstalk between its members remain ambiguous. Therefore, more comprehensive studies are imperative for a better understanding of this important family of proteins, which would provide key insights for biomarker discovery and treatment strategies for a wide array of chronic diseases.
Resistance Evolution to Bt Crops: Predispersal Mating of European Corn Borers

Over the past decade, the high-dose refuge (HDR) strategy, aimed at delaying the evolution of pest resistance to Bacillus thuringiensis (Bt) toxins produced by transgenic crops, became mandatory in the United States and is being discussed for Europe. However, precopulatory dispersal and the mating rate between resident and immigrant individuals, two features influencing the efficiency of this strategy, have seldom been quantified in pests targeted by these toxins. We combined mark-recapture and biogeochemical marking over three breeding seasons to quantify these features directly in natural populations of Ostrinia nubilalis, a major lepidopteran corn pest. At the local scale, resident females mated regardless of whether males had dispersed beforehand, as assumed in the HDR strategy. Accordingly, 0-67% of resident females mating before dispersal did so with resident males, this percentage depending on the local proportion of resident males (0% to 67.2%). However, resident males rarely mated with immigrant females (which mostly arrived mated), the fraction of females mating before dispersal was variable and sometimes substantial (4.8% to 56.8%), and there was no evidence for male premating dispersal being higher. Hence, O. nubilalis probably mates at a more restricted spatial scale than previously assumed, a feature that may decrease the efficiency of the HDR strategy under certain circumstances, depending for example on crop rotation practices.

Introduction

Right after emergence and just before mating, adult insects face a crucial dilemma that The Clash [1] celebrated in a famous song that could be slightly rephrased as: "Should I mate or should I go now? If I mate there will be trouble. But if I go it might be double." In other words, fitness is affected by two important adult "decisions" (mating and dispersal) and by the order in which they are performed. Mating before dispersal may increase the risk of consanguinity, but dispersal before mating may increase the risk of not finding a sexual partner. Therefore, the timing between mating and dispersal is likely to vary with species and environmental conditions [2]: in the absence of a universal optimal strategy, no general prediction can be made and each species of interest must be studied on its own. Nevertheless, the timing between mating and dispersal can be of great practical importance: notably, it may influence the efficacy of strategies intended to drive the microevolution of agricultural pest species (e.g., the evolution of their resistance to control agents such as pesticides) by managing agricultural landscapes [3].

The "high-dose refuge" (HDR) strategy [4] is one such strategy. It is aimed at delaying or preventing the evolution of resistance in target pest populations against Bacillus thuringiensis (Bt) toxins produced by transgenic Bt crops [5]. The underlying principle is that, in a patchy environment of treated and untreated areas, high gene flow between patches with different selection pressures (here, between Bt crop fields and Bt-free refuges) should limit local adaptation [6] (here, the selection of resistance alleles). Over the past decade, HDR-based management of agricultural land became mandatory for several Bt crops in the United States [5] and is being discussed for Europe [7].
Population genetics studies conducted on the main pests targeted by Bt crops usually revealed no departure from patterns expected at Hardy-Weinberg equilibrium and suggested a high level of gene flow over a broader spatial scale than that of the patches, a necessary condition for the success of the HDR strategy (e.g., [8][9][10]). However, tests for departure from Hardy-Weinberg equilibrium are quite conservative, estimates of the spatial scale at which gene flow occurs based on such methods are coarse, and, more fundamentally, the same genetic structure patterns can also be generated by a number of alternative processes [11]. In particular, population genetics studies provide no information on whether adults mate mostly before or mostly after dispersal.

However, high predispersal mating can reduce the efficacy of the HDR strategy. Indeed, the goal of this strategy is to purge each generation of as many as possible of its "r" alleles (alleles conferring resistance to the Bt toxin) to counterbalance, as much as possible, the increase in their frequency resulting from the selection pressure exerted by Bt crops. The r alleles are carried by either heterozygous (rS) or homozygous (rr) individuals. Resistance to Bt toxins being generally recessive [12], only the fraction of r alleles carried by rS individuals among the offspring of any given generation is thus available for purging. Adults emerging from a Bt field are likely to be mostly resistant homozygotes (rr), whereas adults emerging from refuges are mostly susceptible homozygotes (SS). If most mating takes place before dispersal, the proportion of heterozygotes in the offspring is expected to be low. Other things being equal (but see, e.g., [13] and Discussion), the higher the predispersal mating rate, the lower is the expected rS/(rS + rr) ratio in the offspring and the lower is the expected success of the HDR strategy (the sketch below illustrates this point).

Classical mark-recapture studies have been performed on a number of agricultural pests (e.g., [14][15][16]), mostly with the aim of estimating dispersal distances. Although they provide more precise estimates of such distances than population genetics studies, only very few of them examined the timing between mating and dispersal. This relationship has, to our knowledge, seldom and incompletely been quantified directly in natural populations of insect pests targeted by Bt crops (but see [16][17][18]). First, it is often difficult to detect individuals in the field at the very moment they mate, and even more difficult to assess their mating success in terms of sperm transfer. Second, and most important, the origin (local or external) of individuals mating at a specific site can generally not be ascertained.

The present study deals with the European corn borer (ECB), Ostrinia nubilalis Hübner (Lepidoptera: Crambidae), one of the major pests of corn (Zea mays L.) and one of the main targets of the Cry1Ab and Cry1F Bt toxins produced by transgenic Bt corn [19]. Since the 2000 growing season, US corn belt growers planting Bt corn outside and inside Bt cotton growing areas must assign 20% and 50%, respectively, of their corn acreage to refuges, and must set these refuges less than 0.5 mile (approximately 800 m), and preferably less than 0.25 mile (approximately 400 m), away from Bt cornfields [20]. When these requirements were made, empirical data about ECB dispersal were very scarce [20,21].
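The heterozygote-purging argument above can be made concrete with a toy calculation. The Python sketch below uses hypothetical parameter values; the function `rs_ratio` and its arguments are ours, not the paper's. It assumes, for simplicity, that Bt fields produce only rr adults, that refuges produce only SS adults, and that post-dispersal mating is random within the mixed pool.

```python
def rs_ratio(m, p_bt):
    """Expected rS/(rS + rr) ratio among r-carrying offspring.

    m    -- fraction of rr moths mating in their natal Bt field (predispersal)
    p_bt -- fraction of rr moths in the post-dispersal mating pool
    """
    # Predispersal matings are rr x rr; postdispersal matings are rr x rr
    # with probability p_bt and rr x SS (yielding purgeable rS offspring)
    # otherwise. Brood sizes are assumed equal across mating types.
    rr_broods = m + (1 - m) * p_bt
    rs_broods = (1 - m) * (1 - p_bt)
    return rs_broods / (rs_broods + rr_broods)

# The ratio shrinks as the predispersal mating rate m rises,
# weakening the purge of r alleles that the HDR strategy relies on.
for m in (0.0, 0.2, 0.5, 0.8):
    print(f"m = {m:.1f} -> rS/(rS+rr) = {rs_ratio(m, p_bt=0.3):.2f}")
```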
ECB dispersal studies published since then [22][23][24] confirmed that dispersing ECB moths are able to move over far more than a few hundred meters, but the proportion of individuals actually engaging in dispersal, the probability that they do so before versus after mating, and the probability that immigrants and residents mate randomly remained open questions. We used a combination of color-mark, release, and recapture experiments and of biogeochemical marking [25] over three breeding seasons (one in 2004 and two in 2005) to quantify, directly in natural populations, the percentage of ECB individuals staying long enough to mate in their natal cornfield and its herbaceous border, and the percentage of matings actually occurring between those individuals when mixed with dispersing individuals coming in from surrounding cornfields. Our results show that immigrant males, once present locally, have the same probability of mating with resident females as resident males have, while limited evidence suggests that immigrant females mate less readily with local males than do resident females. They also show that a variable and sometimes substantial proportion of ECB females mate before they engage in any dispersal, with no evidence for any sex-related difference. This might decrease the efficiency of the HDR strategy in certain situations.

Experiment 1

Experiment 1 aimed at estimating the probability that young (less than 24 h old) individuals located in a cornfield would be found in the closest field border two nights later, and at estimating the probability that such young individuals mate assortatively rather than with immigrants coming from other cornfields. Over 17 sessions spread over 3 wk of the June 2004 breeding season, we released a total of 8,788 virgin moths into four different cornfields (Table 1). These fields were planted with soybean (Glycine max L.) or wheat (Triticum sativum L.) during the previous (2003) season (two plants that suffer no or very little infestation by the ECB [26,27]), thus ensuring that virtually no ECB pupae from the previous season were present locally. In this respect, such fields mimicked fields planted with Bt corn during the previous season. The closest fields planted with corn in 2003, mimicking the refuges, were located at a distance of at least 400 m. ECB larvae having overwintered in these 2003 cornfields were the closest, and hence the most likely, sources of wild moths occurring at the study sites in 2004.

At each site and for each release session, 340 to 600 marked moths that had emerged less than 24 h earlier were released into the cornfield, between the seventh and eighth rows, planted parallel to the field border (Table 1). Released moths were marked twice: they were reared as larvae on a wheat (a C3 plant) diet, which caused their tissues to contain a lower ratio of ¹³C/¹²C carbon isotopes than wild individuals fed on corn (a C4 plant) [27][28][29][30][31], without altering their longevity (Table S1) or propensity to mate (Table S2), and they were marked with a colored ink spot on the dorsal thorax. Thirty-six hours (2 nights and 1 day) after release, we recaptured a total of 374 marked moths in the herbaceous borders of the field, along with a total of 5,504 unmarked individuals (Table 2). The proportion of marked individuals among all moths caught varied widely between sites and sessions, as did the proportion of individuals released during a given session that were recaptured (Tables 1 and 2).
The percentage recaptured averaged (±1 SE) 4.3 ± 1.6% and ranged from 0.2% to 26.9%. This proportion was not significantly different between sexes (paired t-test). Approximately 97% of the 155 females we recaptured were mated (Table 2). To determine whether they had mated with wild or with released males, the spermatophore (or the most recent spermatophore, in cases of multiple matings) dissected from the bursa copulatrix of their genital duct was subjected to stable carbon isotope analysis [27,31]. As expected, the δ¹³C values of the 150 spermatophores showed a bimodal distribution. Based on previous results [27], spermatophores with δ¹³C values of −31‰ to −22‰ were assigned to released males reared on wheat, whereas those with δ¹³C values of −20‰ to −9‰ were assigned to wild (corn-fed) males originating from surrounding cornfields. A small number of spermatophores (n = 5, less than 4%) with intermediate δ¹³C values (−22‰ to −20‰) were considered of uncertain origin and excluded from our analysis. This biogeochemical method therefore resulted in the assignment of greater than 96% of the mated females' sexual partners to either the wild or the released males' pool.

Marked and recaptured females had mated with both types of males. Over all sites, the proportion of released males among their partners was 19.4%. It ranged from 0% to 67%, differing considerably over the 17 release-recapture sessions (Table 2), and it was positively correlated with the ratio of the number of released males over the total number of males captured in the herbaceous border during the corresponding session (Spearman's rank correlation coefficient rS = 0.622, p < 0.05, n = 14; Table 2 and Figure 1). In accordance with this result, the maximum-likelihood estimate of the proportion of released females mating assortatively with released rather than wild males was very low (mode = 0.04), and its 95% credibility interval included 0 (Figure 2).

We also determined the δ¹³C values of the spermatophores dissected from the bursa copulatrix of 79 wild females caught during the sessions displaying the highest released/wild male ratios. We found that all but three wild females had mated with wild males, and that the overall proportion of females mated with wild males was thereby significantly higher among wild females (96.2%) than among recaptured females (80.6%; Fisher's exact test, p = 0.0003). Wild females also carried more spermatophores than recaptured females did (Fisher's global test; Tables S3 and S4). Furthermore, wild females captured in the study sites carried more spermatophores (Fisher's global test, p < 0.0001) than did wild females caught in the border of nearby cornfields planted with corn the previous year, i.e., the closest and therefore the most likely sources of most wild moths found in the study sites (Tables S3 and S4).

Although very informative, Experiment 1 had two drawbacks. First, although the moths we released were very young (less than 24 h postemergence) and although the proportion of recaptured moths did not vary with age upon release (less than 12 h or 12 to 24 h; see Materials and Methods), we could not entirely discard the possibility that, in natural conditions, some dispersal occurs during the very first hours of adult life and that the moths we released had already lost this propensity to disperse. Second, Experiment 1 most probably underestimated the proportion of moths mating in the vicinity of their natal field.
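Before turning to these drawbacks, note that the isotope-based paternity assignment used above amounts to a simple threshold rule on δ¹³C. A minimal sketch follows; the thresholds come from the text, while the function name and the example measurements are ours, purely for illustration.

```python
def assign_father(d13c_permil):
    """Assign a spermatophore to a male pool from its delta-13C value (per mil)."""
    if -31.0 <= d13c_permil <= -22.0:
        return "released (wheat-fed, C3)"
    if -20.0 <= d13c_permil <= -9.0:
        return "wild (corn-fed, C4)"
    return "uncertain"  # intermediate values were excluded from the analysis

# Hypothetical measurements spanning the three possible outcomes.
for value in (-27.4, -14.2, -21.0):
    print(f"{value:+.1f} per mil -> {assign_father(value)}")
```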
As to the second drawback: the herbaceous border section screened during any recapture session was only a small part of the entire field border. Moreover, given that both male and female ECBs can mate a few hours after emergence [32], some released moths might have mated locally and left the site before the recapture session. Experiments 2 and 3, performed the following year, were designed to fill these two gaps.

Experiment 2

Regardless of their sex and period of emergence, more than two thirds of the moths performed their first flight between 09:00 A.M. and 02:00 P.M. during the first 24 h of their adult life. The time of day of the first flight did not vary with the period of emergence (F1,165 = 0.452, p = 0.502), and varied only slightly with sex (females: 02:04 P.M. ± 27 min, range: 08:55 A.M. to 10:45 P.M., n = 68; males: 00:48 P.M. ± 17 min, range: 08:50 A.M. to 10:24 P.M., n = 101; F1,165 = 4.347, p = 0.039), with no significant interaction between sex and period of emergence (F1,165 = 0.595, p = 0.442). This confirms quantitatively our observations that individuals emerging in the evening typically move relatively little during their first night, and subsequently move essentially like individuals emerging the next morning. The average distance covered during the first flight after emergence also did not vary with the period of emergence (F1,120).

Experiment 3

Experiment 3 aimed at examining the proportion of virgin ECB females mating locally, i.e., within 50 m of the cornstalk on which they are settled at dusk, over one night. As shown in Experiment 2, this stalk is situated less than 67 m from the moth's place of emergence in 95% of cases (34.5 m on average). To this effect, we released a total of 747 less-than-24-h-old virgin females by settling them on individual cornstalks, during both the first (June) and the second (August) 2005 breeding seasons. In June, the virgin ECBs were obtained from an outbred laboratory strain and color-marked before release. In August, the virgin ECBs were obtained as offspring of wild moths collected during the first flight, and they were not color-marked before release, so that we could exclude any effect of ink or laboratory selection on our results.

During the first flight season, the average (±1 SE) proportion of females recaptured within the release section was 36.7 ± 8.6% (range: 16.7% to 62.2%; n = 5 release sessions; Table 3). During the second flight, this proportion was 15.6 ± 2.5% (range: 7.6% to 27.8%; n = 8 release sessions; Table 3). The two proportions were significantly different from each other (Mann-Whitney test, p = 0.040; Table 3). This difference may result partly from moths being more easily overlooked in August (corn height: approximately 2.5 m) than in June (corn height: approximately 0.5 m), and partly from variations in the propensity to disperse due to environmental factors such as temperature.

During the first flight season, taking into account only females recaptured within the release section itself (often on the very stalk where they had been released), the proportion of mated females over the total number of released females averaged (±1 SE) 28.4 ± 7.0% (range: 13.3% to 45.9%; n = 5 release sessions; Table 3). If we include females recaptured within 50 m of the stalk of release, this estimate rises to an average of 34.1 ± 7.8% (range: 16.7% to 56.8%; n = 5 release sessions; Table 3).
During the second flight season, the proportion of females observed mating within the release section itself averaged (±1 SE) 11.7 ± 2.7% (range: 4.8% to 25.0%; n = 8 release sessions; Table 3). Over both breeding seasons, the average proportion (±1 SE) of females mating locally was 18.1 ± 3.8% (n = 13 release sessions; Table 3).

Experiment 3 was also attempted on virgin less-than-24-h-old males. Unfortunately, they proved to be too agitated to be settled successfully on a cornstalk at dusk. They were usually no longer present within 50 m of their initial location after approximately 30 min.

Discussion

Since the HDR strategy was chosen in the United States with the aim of delaying pest adaptation to transgenic insecticidal crops, a number of its potential limitations have been examined (see [5,7,33]). One of its main technical problems is that significant preferential mating among resistant individuals could decrease its efficiency [4]. However, despite a large modeling effort [4,13,34-41] and despite calls for data on how far adults move before they mate in order to determine the most suitable management strategy (e.g., [33]), empirical studies addressing this question remain scarce. In the case of the ECB, a major target of the Bt toxins produced by transgenic insecticidal corn, the few studies on dispersal that have been published so far [22][23][24] provide estimates of the distance moved by this pest but little indication of the level of mating between immigrant and resident individuals or of the timing between dispersal and mating. These features are important because they directly influence the probability of mating between susceptible and resistant ECB moths and as such may have a significant influence on the efficiency of the HDR strategy.

From that perspective, the present study generated two important findings. First, there was no evidence that the mating success of immigrant, and presumably older, males is any different from that of resident males. Indeed, at the local scale, resident females mated regardless of whether males had experienced a dispersal event beforehand. Limited evidence suggests that, at the study sites, the mating success of immigrant females may, however, be lower than that of resident ones. Second, a fraction of both male and female ECB adults mate before they engage in any long-range dispersal.

Indeed, Experiment 1 shows that virgin females mate indiscriminately with both immigrant and resident males. Our Bayesian estimate of the proportion of assortative mating among released moths was very low, less than 4%, and its 95% credibility interval included 0, suggesting that, at the local scale, resident females mate randomly with resident and immigrant males. We must, however, caution that this may not be true for resident males, as wild immigrant females carried a significantly lower proportion of marked/total spermatophores than did released females; this could be due to factors such as older age, previous matings, or decreased energy reserves of immigrant females. The generality of this result needs to be confirmed, as this feature was examined over four sessions but for only one site. However, it is consistent with the fact that, conversely, part of the released females mated before dispersal (see below), which means that they, in turn, arrived as mated immigrants in the places to which they dispersed.
Experiment 1 also shows that on average 4%, but up to 27%, of virgin male and female ECB moths released in a cornfield can be recaptured in the vicinity of this field after two nights, with no evidence for any sex-related difference in recapture rates. Such a delay is known to be long enough for more than 70% of ECB females to be mated [22,42]; and indeed approximately 97% of the females that we recaptured were mated. The proportion of the recaptured males that had mated could not be determined; at least some of them did, as marked spermatophores were recovered from both marked and unmarked females. One limitation of this first experiment, though, is that individuals were kept in the laboratory for up to 24 h after emergence before release. Hence, we could not exclude the possibility that substantial premating dispersal in fact occurs during a short time window very early in adult life and that our experimental design had prevented us from detecting it.

This point was addressed in Experiment 2 using pupae placed directly into the field. The delay between emergence and first flight proved to be relatively long (on average approximately 3.5 h and 9.5 h for moths emerged in the morning and in the evening, respectively), and time intervals between subsequent movements averaged approximately 26 min. The estimated distance covered during this first flight was similar to that of subsequent flights and was short: on average, less than a meter. Using these values, we calculated that, at dusk, young (less than 24 h old) moths have a 95% probability of being located less than 67 m from their emergence point. Therefore, Experiment 2 showed no evidence of any substantial precopulatory dispersal during the very first hours of adult life.

Experiment 2 further suggested that the 4% of moths that were recaptured in Experiment 1 was an underestimate, rather than an overestimate, of the proportion of adult moths remaining locally long enough to mate. This was confirmed by Experiment 3, raising this estimate to an average of approximately 18% for female moths (approximately 34% and 12% in June and August, respectively). Our experiments thus show that a substantial proportion of recently emerged ECB females can and do mate at a very local scale before engaging in any long-range dispersal. This conclusion is further supported by the fact that 96.2% of wild females captured during the course of Experiment 1 had mated one or several times with wild males, while only 3.8% had mated with released males. The latter estimate is significantly lower than the corresponding proportion among released females (19.5%), despite the fact that this proportion was estimated over all sessions for released females but, conservatively, only over the sessions when the proportion of released over total males was highest for wild females. This suggests that a significant part of the wild females captured during the course of this experiment may have preferentially mated in the vicinity of the fields from which they emerged, and thus before long-range dispersal.

Hence, although the ECB is able to move over far more than a few hundred meters at the adult stage [22][23][24], its life history may be similar to that of, e.g., Diatraea grandiosella Dyar (Lepidoptera: Crambidae), another stalk-boring moth targeted by Bt corn. In the latter species, more than 91% of females are mated within 24 or 48 h of emergence, 66% of females mate the first night after emergence, and precopulatory flight of males occurs mostly within the natal field (references in [13]).
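As an aside, the "95% within 67 m at dusk" figure cited above follows from combining observed flight frequencies and per-flight distances; the spirit of that calculation can be reproduced with a toy Monte Carlo. In the sketch below, the step count, step-length distribution, and their parameters are hypothetical stand-ins chosen for illustration, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cumulative displacement of a moth performing n short flights of random
# direction before dusk. All parameters below are illustrative only.
n_moths, n_flights, mean_step_m = 10_000, 25, 1.0

angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_moths, n_flights))
steps = rng.exponential(mean_step_m, size=(n_moths, n_flights))
x = (steps * np.cos(angles)).sum(axis=1)
y = (steps * np.sin(angles)).sum(axis=1)
displacement = np.hypot(x, y)

print(f"mean displacement: {displacement.mean():.1f} m")
print(f"95th percentile:   {np.percentile(displacement, 95):.1f} m")
```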
The timing between dispersal and mating has also been studied in another stem borer, Chilo suppressalis, a pest targeted by Bt rice [17]. Approximately 15% of females called and mated within 3 m of the site of eclosion, and approximately 5% of males mated within 5 m of the site of eclosion. In the same vein, Alyokhin and Ferro [16] found that a significant proportion (approximately 50%) of Colorado potato beetles (Leptinotarsa decemlineata, one of the pests targeted by Bt potato) stayed and were seen pairing close to the place of their larval development before they left or before the end of the experiment; unlike in the present study, however, it was not ascertained that the observed pairing behaviour actually resulted in sperm transfer.

What are the implications of the present findings for resistance management? One implicit (and, to our knowledge, so far untested) assumption of most models on which the HDR strategy is based is that, once present locally, immigrant individuals have the same probability of mating with any locally present individual of the other sex, regardless of whether that individual has dispersed beforehand. One of our two main conclusions is that this assumption is valid for immigrant males but questionable for immigrant females. One must keep in mind that our study was performed on susceptible moths. Bt resistance alleles might be associated with fitness costs that decrease mating success. This has been shown in another lepidopteran pest targeted by Bt crops, Pectinophora gossypiella, where susceptible males were found to mate more often than resistant males in competition for matings with virgin females [43]. The possible impact of Bt resistance on ECB life history traits cannot currently be evaluated since, to our knowledge, no ECB strain resistant enough to complete a whole lifecycle on Bt corn has been selected so far.

Our second main conclusion is that some predispersal mating does occur for both males and females. Hence, the current HDR management strategy aimed at delaying resistance to transgenic Bt corn in ECB populations may not ensure complete mixing between susceptible and resistant moths prior to their first mating. This is all the more important as ECB males are thought to mate on average only 3.8 times over their whole lifetime, producing spermatophores of exponentially decreasing size, so that the first copulation accounts for approximately 60% of the total volume emitted over their entire lifetime [44]. This lack of complete mixing between susceptible and resistant moths prior to their first mating probably increases the expected proportion of crossings among homozygous resistant (rr) individuals. Actually, our results point out that the proportion of rr × rr crossings over the entire landscape strongly depends on the timing and importance of males' long-range dispersal. Indeed, the higher the proportion of SS and rS males emigrating from the refuges into the pool of males present at the edges of Bt cornfields, the lower is the proportion of resident rr females that will be mated by rr males emerged from the same field. Also, it must be noted that the densities we released mimic a field where the frequency of the resistance allele is already quite high. If the r allele is initially as rare as estimated by Bourguet et al. [45], it is possible that resistant ECB in Bt field borders would occur at such low densities that their behavior might be different; they might, for instance, leave more readily than in the present study.
On the other hand, it is unlikely that an rr individual would be completely alone in a Bt field, as some of its sibs should be present, too. Notably, Experiment 1 shows that, within the pool of males present in the field border, the proportion of males that had immigrated from cornfields located several hundred meters away was very variable between sites and between dates: it ranged from 32.8% to 100%. The spatiotemporal variations in the proportion of resident moths staying in their natal field long enough to mate, and in the local proportion of resident to immigrant males in fields with small resident populations, as well as the factors affecting them, appear to be crucial points that must be studied in more detail. Again, it must also be kept in mind that while these factors need to be studied for susceptible moths, the results may not be directly applicable to resistant genotypes. For instance, resistant P. gossypiella have been found to display a longer developmental time [46], which could affect (positively or negatively) the probability of effective mixing at the landscape level.

The influence of predispersal mating on Bt resistance evolution under the HDR strategy is not trivial. While models of the HDR strategy assuming crop rotation conclude that predispersal mating uniformly increases the speed at which Bt resistance alleles are selected [4], others [13,34,37,41] show that, if crops are not rotated, departures from the original assumptions on the life history of the pest (amount of dispersal, timing between dispersal and mating, oviposition, etc.) can also significantly affect the predictions. Notably, they show that the negative consequences of predispersal mating can be offset, provided it is not too high, by non-random oviposition on Bt rather than on non-Bt crop fields. Provided crops are not rotated (i.e., provided the same fields are used as refuges year after year), such preferential oviposition could result from some females staying in their natal field not only to mate but also to oviposit. However, all these models additionally assume sex-biased premating dispersal, with females staying in the vicinity of their natal field until mating and males dispersing and then mating randomly with any of them. Our data provide no evidence that such a sex bias exists, so that, to be conservative, this assumption warrants more careful checking before it can be made.

Some models assuming no crop rotation suggest that an intermediate amount of dispersal can, in certain cases, be better than low or high dispersal: if the HDR strategy were applied in this framework, a precise quantification of the amount of predispersal mating would become important. Our results do not provide such a precise quantification. They offer a method to estimate this proportion, but most of all they show that it can be highly variable, suggesting that it may not be easy to find a single, generally applicable value and that overestimating or underestimating it with a safety margin may not be safely applicable either. Therefore, relying on an intermediate level of predispersal mating in the HDR strategy seems to require more fine-tuning than can probably be achieved concretely on a large scale. In addition, models using this framework assume that female postmating dispersal is low (i.e., that females staying close to their natal field to mate will also stay there to oviposit), an assumption that may be reasonable but has yet to be checked.
Another method for reducing random oviposition could be strategic cutting of field borders or other means of biasing the attractiveness of Bt fields versus refuges for oviposition: a "trap-crop" type of strategy already suggested by Alstad and Andow [4]. In sum, our results do not necessarily imply that the HDR strategy cannot work. However, they caution that model assumptions must be carefully investigated before relying on them, and they offer a method to do so, which could also be applied to other pests targeted by Bt crops.

Materials and Methods

Statistical analyses were performed using SYSTAT 9.0 software [47] or JMP IN 3.0 software [48]. Throughout this paper, all p-values are given for two-tailed tests. Time is given as local time.

Experiment 1. Mark-release-recapture experiments were conducted during the June 2004 breeding season at four sites located in a corn-growing area approximately 20 km south of Toulouse, France. Adult ECBs were obtained from an outbred strain that was mass-reared on a wheat diet. Male and female pupae were kept separately, so that adults were all virgin at the time of release, which was conducted less than 24 h after emergence. As wheat and corn use different types of photosynthesis (C3 and C4 type, respectively), the ratios of ¹²C and ¹³C carbon isotopes in their tissues differ [28]. This characteristic is transferred to moth tissues [27], so that laboratory-reared individuals fed on a wheat diet can be distinguished from field individuals fed on corn. In addition to this chemical marking, moths were color-marked prior to release. After approximately 30 min at 6 °C (until they were unable to fly), they were marked on the dorsal thorax and base of the wings with a 1:1 ink/ethanol mixture applied with the tip of a matchstick. A different colour was used for each release and for each of the two age classes released (less than 12 h and 12 to 24 h). After marking, moths were placed into small boxes and stored in cool boxes with ice blocks to reduce agitation, which might cause damage or abnormally high dispersal at the time of release.

At each site, the experiment was carried out in a field planted with corn in 2004 and with soybean or wheat in 2003. Because these two crops suffer very little infestation by the ECB [26,27], our experimental setup mimicked the situation in a cornfield planted with Bt corn the year before. Herbaceous borders along cornfields are described as a very suitable habitat for the ECB, where large numbers of adults typically mate and rest during the daytime [49]. During the afternoon before each release, a 100-m section of a herbaceous border running along a cornfield, and of the first eight rows into the cornfield, was cleared of any moths using sweep-net capture. Capture efforts were stopped when the 100-m section could be screened once entirely without finding more than five additional moths. At approximately 08:00 P.M., the small boxes containing the marked moths were taken out of the cool boxes, placed open along a 40-m section of a line parallel to the cleared border section, at equal distance from both ends, situated between the seventh and eighth rows into the cornfield, approximately 7 m from the herbaceous border of the field, and gently agitated until the moths left. Recaptures were conducted by sweep-net capture, 36 h later, in the 100-m section cleared before release. Again, all efforts were made to ensure exhaustive capture of all marked and unmarked ECB moths over this section.
The proportion of recaptured moths was not influenced by their age (less than 12 h or 12 to 24 h) upon release (paired t-test, t = 0.002, df = 15, p = 0.998), so released moths were pooled regardless of age (less than 24 h) in subsequent analyses. We dissected all color-marked recaptured females to determine their mating status. We also dissected a subsample (n = 79) of the wild females caught during recapture sessions. In order to estimate their mating rate with marked versus unmarked males conservatively, we took these 79 females from the sessions displaying the highest released/wild male ratios (site 1: 03 June; site 2: 02 June; site 3: 01, 07 and 11 June; site 4: 31 May and 04 June). Finally, for each of four recapture sessions conducted at site 1, we dissected not only the marked but also the unmarked females, and we captured 49 to 55 (unmarked) females at site 5, i.e., in the border of a large cornfield located approximately 400 m away from site 1 that was a likely source of many of the immigrant moths captured there (Tables S3 and S4).

The bursa copulatrix of the genital duct of mated female moths contains one or several spermatophores, i.e., solidified droplets of sperm and nutritious substances deposited by males during the mating process (usually one per mating event [44]) and later used by the female. The overall proportion of females mated with wild versus released males was compared between unmarked and marked females with a Fisher exact test. The numbers of spermatophores carried by marked versus unmarked females at site 1, and by unmarked females at site 1 versus site 5, were compared for each replicate by means of a Kruskal-Wallis nonparametric analysis of variance. The shape and color of spermatophores were used to assign them to two age categories ("recent": white and of ovoid shape, i.e., little digested, versus "old": brown and with a substantial part missing, indicating a more advanced stage of digestion), from which it can be inferred how recently the mating occurred [42,50]. The proportion of old versus new spermatophores carried by females was compared for each replicate with a Fisher exact test, between unmarked and marked females at site 1 and between unmarked females at site 1 and site 5. Global tests across replicates were constructed using Fisher's combined probability test method (Fisher's global test [51,52]) to compare the number of spermatophores and the proportion of old versus new spermatophores carried by the same pairs of groups of females (a sketch of this method is given below).

Stable carbon isotope analyses of wings, legs, or spermatophores of ECB moths were conducted as described in [27]. Results are expressed in conventional "per mill" units relative to the Pee Dee Belemnite standard as: δ¹³C (‰) = [(R_sample − R_standard)/R_standard] × 1,000, where R_sample and R_standard are the ¹³C/¹²C atom ratios in the sample and the standard, respectively.

We checked for possible differences in survival and in mating propensity between marked and wild moths in two control experiments: SURVIVAL and MATING. The SURVIVAL experiment was conducted as follows. While we performed Experiment 1, in June 2004, three replicates of five color-marked and five non-color-marked females and five color-marked and five non-color-marked males, all less-than-12-h-old adults from the wheat-fed laboratory strain used in Experiment 1, were placed into a 20 × 20 × 40-cm plastic box with a wet paper wad, at room temperature and with natural daylight. Deaths were recorded daily.
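Fisher's combined probability test, used above for the global tests across replicates, rejects when X² = −2 Σ ln pᵢ is large relative to a chi-square distribution with 2k degrees of freedom. A minimal sketch using SciPy follows; the per-replicate p-values are hypothetical, for illustration only.

```python
from scipy.stats import combine_pvalues

# Hypothetical per-replicate p-values from four independent comparisons.
replicate_pvalues = [0.12, 0.30, 0.08, 0.45]

# method="fisher" computes X^2 = -2 * sum(ln p_i), referred to a
# chi-square distribution with 2k degrees of freedom (here k = 4).
stat, p_global = combine_pvalues(replicate_pvalues, method="fisher")
print(f"X^2 = {stat:.2f}, global p = {p_global:.3f}")
```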
For the SURVIVAL experiment, we analysed the number of days of survival using a general linear model with sex, marking, and marking color nested in marking as fixed factors and replicate as a random factor. In laboratory conditions, once the approximately 2-d difference in survival between males and females had been accounted for (F1,166 = 43.59, p < 0.0001), there was only a slight difference in survival of colour-marked versus unmarked moths, and the difference was in favor of the former (8.32 ± 0.27 d and 7.62 ± 0.26 d, respectively; F1,166 = 4.67, p = 0.032), and all moths survived at least 2 d (Table S1). Thus, the released moths that were not recaptured in the field were probably mostly still alive.

The MATING experiment was conducted as follows. While we performed Experiment 1, in June 2004, four replicates of ten females and ten males, all color-marked and all from the wheat-fed laboratory strain used in Experiment 1, were placed together with ten unmarked wild males collected less than 2 h earlier next to a cornfield approximately 5 km from the closest study site, into the same boxes and under the same room conditions as the replicates of the SURVIVAL experiment. We thereby controlled that, in laboratory conditions, (i) when caged with equal proportions of marked and unmarked males, marked (this experiment) and non-color-marked (SURVIVAL experiment) females showed the same overall mating rate after 48 h (χ² = 3.80, df = 3, p = 0.284; Table S1), and that (ii) recently emerged (less than 24 h old) marked males had the same probability of mating with marked females as wild males collected directly in the field (Fisher's exact test, p = 0.683; Table S2). Thus, released females probably had the same propensity to mate as wild females, and so had released and wild males.

Data analysis of assortative mating. We estimated the probability, for a released moth, of mating assortatively with other released moths by assuming that each released virgin female would mate either with a marked, released male or with a wild male present at the study site. Let p_Aij and f_ij be, respectively, the proportion of released females mating assortatively with released males rather than randomly, and the proportion of released males among all males present at site i during a given release-and-recapture session j. As an estimate for f_ij, we used the proportion of marked males among total males captured during the recapture conducted 36 h after the corresponding release. Any female released at site i during session j has a probability p_Aij + (1 − p_Aij)f_ij of mating with a released male and (1 − p_Aij)(1 − f_ij) of mating with a wild male. Using the values of parameters p_Aij and f_ij, it is then possible to compute the likelihood of the observed proportions of marked and unmarked spermatophores retrieved from females captured at each site and for each session. To obtain posterior distributions for p_Aij, we applied a Bayesian Markov chain Monte Carlo approach with an uninformative [Uniform(0,1)] prior and lognormal deviates. Ten thousand values were sampled with a thinning of 100 and a burn-in of 10,000 and used to calculate statistics (mode and credibility interval) of the posterior distribution. The 95% credibility interval of p_Aij was estimated as the highest posterior density region, i.e., the region of values that contains 95% of the posterior probability [53]. The probability of assortative mating was assumed to be identical for all release sessions and sites.
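Because the likelihood above is one-dimensional once the assortative-mating proportion is shared across sessions, its posterior can also be obtained by simple grid evaluation rather than MCMC. The sketch below implements that likelihood under a uniform prior; the per-session counts are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Each tuple: (f_ij, marked spermatophores k, total assigned spermatophores n).
# Values are hypothetical placeholders for illustration.
sessions = [(0.67, 5, 9), (0.30, 1, 8), (0.10, 0, 6)]

p_grid = np.linspace(0.0, 1.0, 1001)   # grid over assortative proportion p_A
log_post = np.zeros_like(p_grid)       # uniform prior contributes a constant
for f, k, n in sessions:
    # Probability that a released female's partner is a released male.
    q = np.clip(p_grid + (1.0 - p_grid) * f, 1e-12, 1.0 - 1e-12)
    log_post += k * np.log(q) + (n - k) * np.log(1.0 - q)

post = np.exp(log_post - log_post.max())
post /= post.sum()

mode = p_grid[post.argmax()]
# 95% highest posterior density region: keep the grid points of highest
# posterior probability until 95% of the mass is covered.
order = np.argsort(post)[::-1]
n_keep = np.searchsorted(np.cumsum(post[order]), 0.95) + 1
hpd = np.sort(p_grid[order[:n_keep]])
print(f"mode = {mode:.3f}, 95% HPD = [{hpd[0]:.3f}, {hpd[-1]:.3f}]")
```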
Experiment 2. Experiment 2 was conducted during the June 2005 breeding season, in the same corn-growing area as Experiment 1. Approximately 800 pupae were obtained by mass-rearing the same outbred strain used in Experiment 1. These pupae were placed between the seventh and eighth rows of a cornfield over 11 d (05-15 June), i.e., they were allowed to emerge directly in the field. Experiment 2 proceeded in four independent steps, as follows. Over 5 d of observation, we recorded (i) individual emergence times (n = 221), (ii) the time (n = 169), and (iii) the distance (n = 124) of the first flight performed after emergence. After their first flight, moths were gently placed individually in a small plastic box with a wad of wet paper, which was left in field conditions in the shade. We also recorded (iv) the frequency and distance of subsequent flights during a total of 136 observation sessions of 10 min each, spread from 10:00 A.M. to 09:30 P.M. and over 4 d. During these observation sessions, less-than-24-h-old moths that had already performed their first flight were taken out of the small boxes in which they were kept in field conditions and settled individually on cornstalks. Possible moth movement was monitored for 10 min, and the distance covered during each flight (if any) was measured and recorded. For the data analyses, plots of residuals versus predicted values were examined and the distribution of residuals was checked for normality. To improve the normality of the residuals' distribution, the distances covered during the first flight following emergence were ln(1 + distance) transformed before testing for any differences related to sex and period of emergence.

Experiment 3. The females released in August were obtained as offspring of wild moths collected during the first flight; these offspring were not color-marked but were reared on a wheat diet, which provided a biogeochemical marking allowing later checking. Females to be released were either kept separate from males since the pupal stage or checked very frequently and separated just after emergence, which ensured that they were not mated prior to release. Less than 24 h after emergence, they were settled individually on every cornstalk, or every second stalk, of the eighth row running parallel to the nearest field border. The stalks on which released moths were settled were clearly identified with plastic labels. The individual boxes in which moths were kept until release were gently opened, and the tip of a corn leaf was used to invite each moth to climb the stalk and settle on the plant without flying away. When females flew away, we attempted to recapture them immediately and tried again to settle them until they were either successfully settled or lost. A total of 198 and 549 females were settled successfully in June and August, respectively, out of approximately 220 and 770 attempts. All females were settled just before sunset, i.e., between approximately 9:00 and 9:30 P.M. in June and between approximately 8:00 and 9:00 P.M. in August, a time when their activity in the field, as well as that of wild females in the vicinity, was usually low (AD, SP, and DB, personal observations). ECBs typically start flying at dusk, in search of a suitable site to emit pheromones (females) or of a receptive female (males), and settle at various times before dawn, depending notably on temperature. In the present experiment, moths were checked every 30 to 60 min after release, either until dawn (August) or until the temperature dropped to the point that no moth activity was observed anymore in or around the field for more than 1 h, i.e., approximately 4 h after release (June).
Any mating pair including a marked female was collected, and its place of collection (either on cornstalks within the release section, or nearby, within approximately 50 m) was recorded. We checked that all females were able to fly the day after the experiment and that they all survived 3 d or longer in the laboratory. In addition, we confirmed their mating status either by checking for the presence of a spermatophore or by recording the laying of fertile eggs.

Table S2. Mating of Replicates of 10 Marked Females, 10 Marked Males, and 10 Unmarked Males Each. Two marking colors (black and green) were tested. One female had mated twice (both times with a C4 male), and one female carried a spermatophore for which the C3/C4 status could not be determined. We also checked that wild-caught males displayed an isotope ratio typical of C4-fed individuals. On one occasion (of 30), a wild ECB male from a batch collected near a cornfield, approximately 5 km from the closest study site, for a side experiment (see MATING) was found to have an isotope ratio typical of C3-fed individuals. It presumably belonged to a different, morphologically indistinguishable C3-feeding ECB taxon. Indeed, Bontemps et al. [30] found that more than 90% of wild-caught ECB adults displaying a C3 isotope ratio belong to a distinct ECB taxon that is genetically differentiated [54], uses a different sex pheromone [55], interbreeds very little (less than 5% [31,56]) with individuals feeding on corn, and typically reaches much lower population densities [30,31]. A small number of such individuals may have been present, but they are unlikely to have interfered with our experiment to any extent, given their small densities and low interfertility. Alternatively, an occasional individual belonging to the corn taxon may have developed on a C3 plant. However, this does not affect the overall conclusion of our experiment, i.e., there was no evidence for a difference in mating success between wild and laboratory-reared marked males. Found at DOI: 10.1371/journal.pbio.0040181.st002 (29 KB DOC).

Table S3. Number of O. nubilalis females found to carry no, one, two, three, or four spermatophores among randomly picked females caught with a net at two different sites and four different dates (Experiment 1). Site 1 is a 2004 cornfield planted with wheat in 2003 (see Tables 1 and 2).
Indirect Optical Absorption of Single Crystalline β-FeSi2

We investigated optical absorption spectra near the fundamental absorption edge of β-FeSi2 single crystals by transmission measurements. The phonon structure corresponding to the emission and absorption components was clearly observed in the low-temperature absorption spectra. Assuming an exciton state in the indirect allowed transition, we determined a phonon energy of 0.031 ± 0.004 eV. A value of 0.814 eV was obtained for the exciton transition energy at 4 K.

β-FeSi2 is increasingly attracting attention as a suitable material for use in silicon-based optoelectronic devices, due to its band gap lying near the absorption minimum of quartz optical fibers [1,2]. Recently, Leong et al. and Suemasu et al. fabricated light-emitting diodes operating at wavelengths of 1.5-1.6 µm by introducing β-FeSi2 particles into a silicon bipolar junction [3,4]. Chu et al. also demonstrated 1.57 µm electroluminescence (EL) at room temperature from sputter-deposited β-FeSi2 films on Si [5]. However, the luminescence mechanism in β-FeSi2 is not clearly understood, because the electronic structure of β-FeSi2 has not yet been clarified. The band gap nature, i.e., a direct or indirect gap, is also still controversial. A number of experimental studies on the band gap nature of β-FeSi2 have been performed by optical absorption measurements. From the analysis of the energy dependence of the absorption coefficient, most reports argue that β-FeSi2 has a direct band gap [1,2,6,7,8,9,10,11,12], but a few papers report an indirect gap lower than the direct one by some tens of meV [13,14,15]. The reported values of the band gap are 0.80-0.95 eV for the direct gap and 0.7-0.78 eV for the indirect one. The wide variation of the reported values suggests that some uncertain factors existed in the measured samples. To study the band gap nature, optical transmission measurement using thick single crystalline samples is preferable, because the absorption coefficient of crystals with an indirect energy gap is usually low. However, no paper reports optical transmission measurements of bulk β-FeSi2 single crystals, because of the difficulty of crystal growth. Recently, we have succeeded in growing large-sized β-FeSi2 single crystals. In this paper, we report optical transmission measurements of β-FeSi2 single crystals.

Single crystalline β-FeSi2 ingots were grown by the temperature gradient solution growth (TGSG) method using Ga solvent. Details of the growth conditions were described elsewhere [16,17,18]. The crystals showed p-type conduction with a typical hole concentration of 1.5 × 10¹⁹ cm⁻³ at 300 K and less than 1 × 10¹⁶ cm⁻³ at 25 K. Crystals cut from grown ingots were ground using carborundum and polished using colloidal alumina. After polishing, the surface of the crystals showed a mirror-like face. Optical transmission spectra were measured between 3.5 and 300 K using a double-beam spectrophotometer (Hitachi U-4000). Reflection measurements were made at 300 K using a UV-VIS-NIR microspectrophotometer (Nippon Bunko). The absorption coefficient α was obtained by solving the transmission equation relating α to the sample thickness d, the transmitted intensity I_T, and the apparent transmitted intensity I′_T, assuming that the temperature dependence of the reflectivity R was negligible throughout the measured spectral region (0.7-1.0 eV) [19].

The energies E^e_P1 and E^a_P1 refer to the thresholds of the structural components as defined in the 70 K spectrum.
The superscripts denote whether the phonon is emitted (e) or absorbed (a) during the optical absorption process. The strength of the phonon-absorbed component with threshold at E^a_P1 decreased with decreasing temperature, and the component was no longer present below about 40 K. The difference between E^e_P1 and E^a_P1 was the same at each temperature within experimental uncertainty. Thus, E^e_P1 and E^a_P1 are likely related to the thresholds of indirect transitions with phonon emission or absorption. From the observation of several absorption spectra in different samples, we found only one dominant phonon structure. Therefore, we analyze the spectra using one dominant phonon of energy E_ph.

We will assume that the Coulomb interaction between the excited electrons and holes is strong enough for the creation of free excitons to play a significant role in the optical absorption spectrum at low temperature, as is the case for Ge, Si, and GaP [19,20,21]. Then, the optical absorption of indirect allowed transitions should take the form of Eq. (2) [22] for the pair of components associated with a given phonon of energy E_ph, which has the momentum required to take the electron from the valence-band maxima to the conduction-band minima. The energy E_gx is the band gap energy minus the exciton binding energy. The quantities A and B are parameters containing the density-of-states effective masses of electrons and holes, and k is Boltzmann's constant. According to Eq. (2), the strength of the phonon-absorbed component α_a is proportional to the available phonon population, and its temperature dependence is given by Eq. (3).

Figure 2 shows the temperature dependence of the absorption coefficient at the energy threshold E^e_P1 for each spectrum. The absorption coefficient at the energy threshold increased with increasing temperature, following Eq. (3). Thus, we obtained a phonon energy E_ph = 0.031 ± 0.004 eV from the fitting curve using Eq. (3). The excellent agreement between the experimental absorption coefficients and the theoretical fit provides convincing evidence that the absorption band comes from the phonon-assisted transition to the exciton state. A previous Raman study pointed out that the highest-intensity peak lies at about 250 cm⁻¹ in spectra measured on single crystalline β-FeSi2 [23]. Our phonon energy agrees with those reported values. Different weak phonon peaks have also been reported in IR and Raman spectra. However, the phonon energy dominantly determining the absorption profiles is believed to be 0.031 eV.

We compared the experimental absorption spectra with the theoretical line shape and its temperature dependence, on the assumption that the phonon energy of 0.031 eV is dominant during the optical transition in our crystals. The results of such a comparison for 4 K and 70 K are shown in Fig. 3. By considering only one phonon energy, rather good agreement between the experimental spectra and the calculated spectra is obtained. In our experiment, the ratio B/A of the best-fitted spectra is not unity but around 3.3. From the fitting of the spectra, we obtained E_gx = 0.814 eV at 4 K and 0.810 eV at 70 K. These values are approximately 0.1 eV lower than the values of the reported direct energy gap measured on β-FeSi2 films on Si [7,8,10,11]. Based on phonon-assisted transition probabilities, the small energy difference δE = 0.1 eV and the phonon energy E_ph = 0.031 eV give the ratio B/A = (δE + E_ph)²/(δE − E_ph)² = 3.6, which is close to the experimental B/A.
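The display equations (2) and (3) referenced above did not survive reproduction here. Under the stated assumptions (phonon-assisted indirect allowed transitions to an exciton state, as for Ge, Si, and GaP), a plausible Macfarlane-type reconstruction, offered as a sketch rather than the paper's exact expressions, is:

```latex
% Sketch of Eqs. (2)-(3); each term contributes only where its
% square-root argument is positive. The exact prefactors are assumed.
\begin{align*}
\alpha(h\nu,T) &\propto
  \frac{A\,\bigl(h\nu - E_{gx} + E_{ph}\bigr)^{1/2}}{\exp(E_{ph}/kT) - 1}
  + \frac{B\,\bigl(h\nu - E_{gx} - E_{ph}\bigr)^{1/2}}{1 - \exp(-E_{ph}/kT)}
  \qquad (2) \\
\alpha_a(T) &\propto \frac{1}{\exp(E_{ph}/kT) - 1} \qquad (3)
\end{align*}
```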
This agreement between the calculated and experimental B/A ratios leads to the following definitive conclusion: β-FeSi2 is an indirect band gap semiconductor, although the direct gap is very close to the indirect one.

In conclusion, we have measured the optical absorption spectra near the fundamental absorption edge of β-FeSi2 single crystals by transmission measurements. The stepped structure corresponding to phonon emission and absorption is observed in the low-temperature absorption spectra below about 150 K. We determined that the exciton transition energy E_gx is about 0.814 eV at 4 K and about 0.810 eV at 70 K, and we also obtained a phonon energy E_ph = 0.031 ± 0.004 eV from the analysis of the spectra. Our experimental results reveal that β-FeSi2 is an indirect band gap semiconductor.
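To make the Eq. (3) analysis concrete, the sketch below fits the Bose-factor temperature dependence of the phonon-absorbed component to recover E_ph. The (T, α_a) readings are synthetic stand-ins for the Fig. 2 data, and the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant in eV/K

def bose_strength(T, c, E_ph):
    """Eq. (3): strength of the phonon-absorbed component vs temperature."""
    return c / (np.exp(E_ph / (K_B * T)) - 1.0)

# Synthetic data generated around E_ph = 0.031 eV with small noise,
# standing in for absorption coefficients read off Fig. 2.
T = np.array([40.0, 70.0, 100.0, 150.0, 200.0, 250.0, 300.0])
alpha_a = bose_strength(T, 50.0, 0.031)
alpha_a += np.random.default_rng(1).normal(0.0, 0.05, T.size)

(c_fit, E_fit), _ = curve_fit(bose_strength, T, alpha_a, p0=(10.0, 0.02))
print(f"fitted phonon energy: {E_fit * 1e3:.1f} meV")
```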
Hydroxyoleoside-type seco-iridoids from Symplocos cochinchinensis and their insulin mimetic activity

As part of an ongoing study of new insulin mimetic agents from medicinal plants, the 70% EtOH extract of Symplocos cochinchinensis was found to have a stimulatory effect on glucose uptake in 3T3-L1 adipocyte cells. Intensive targeted isolation from this active extract yielded ten new hydroxyoleoside-type compounds conjugated with a phenolic acid or a monoterpene (1-6 and 8-11), as well as four known compounds (7 and 12-14). The chemical structures of the new compounds were determined from spectroscopic data (1H and 13C NMR, HSQC, HMBC, NOESY and MS). The absolute configurations of the isolated compounds were determined by electronic circular dichroism (ECD) analysis of derivatives obtained after a series of reactions, such as those with dirhodium(II) tetrakis(trifluoroacetate) and dimolybdenum(II) tetraacetate. In vitro, compounds 3, 7 and 8 moderately increased the level of 2-deoxy-2-[(7-nitro-2,1,3-benzoxadiazol-4-yl)amino]-D-glucose (2-NBDG) uptake in differentiated 3T3-L1 adipocytes. For further studies, we evaluated their effects on the expression of glucose transporter-4 (GLUT4) and its translocation, protein tyrosine phosphatase 1B (PTP1B) inhibition, and the expression of phosphorylated Akt. Our results strongly suggest that the traditional uses of this plant can be attributed to hydroxyoleoside-type compounds as its active constituents.

Globally, the number of people with diabetes mellitus is growing rapidly, and the incidence rate of diabetes is also accelerating, especially as the elderly and obese populations increase 1. The number of patients with diabetes is expected to increase from 171 million in 2000 to 366 million globally by 2030. According to the American Diabetes Association, the incidence of diabetes is approximately 25.2% of the total elderly population in the United States, and 12.0 million seniors suffered from diabetes in 2015. As the increase in diabetic patients is associated with a dramatic increase in the cost of diabetes-related complications, direct healthcare costs and productivity losses in the US alone were estimated to be $176 billion and $69 billion in 2012, respectively 2. Diabetes is a chronic disease that occurs when the pancreas does not produce enough insulin (type 1 diabetes) or the insulin produced does not function effectively at the site of action (type 2 diabetes, T2D). When insulin does not function properly, the increased blood sugar in the body can cause serious damage to the heart, blood vessels, kidneys, eyes and nerves. Since T2D associated with insulin resistance is the most prevalent form 3, it is urgently necessary to develop new anti-diabetic agents, especially those targeting type 2 diabetes. While many diabetes medications lower blood glucose levels in the short term, they often cause weight gain as a side effect, and prolonged use worsens the insulin resistance of diabetic patients 3. Insulin mimetics used as oral anti-diabetic agents, which act similarly to insulin but do not promote fat synthesis, have been suggested as a good solution for the treatment of diabetes. Interestingly, food intake and body weight decrease when insulin is selectively delivered to the brain, but not when it is delivered to the whole body 4. These results suggest that insulin mimetics that separate the glucose-lowering action from weight gain are a very good pharmacological solution for overcoming insulin resistance while avoiding the side effects of current diabetes therapies.
Symplocos cochinchinensis (Lour.) S. Moore (www.theplantlist.org) is an evergreen tree that grows up to 35 meters in height and belongs to the family Symplocaceae. This plant is distributed in East Asia, including China, Japan, India, Vietnam and Malaysia 5. Ethnobotanical uses of this plant include the treatment of diabetes mellitus in traditional "Ayurvedic" Indian medicine 6, and extracts of the plant have shown antidiabetic 7,8, antilipidemic and antioxidant activity 9; however, there have been few studies on the chemical constituents of S. cochinchinensis. The genus Symplocos contains a large amount of seco-iridoid and phenolic compounds 10. Recent reports on the antidiabetic activity of oleuropein, which is abundant in olive tree leaves 11, led us to isolate active compounds by a dereplication method aimed specifically at seco-iridoids. In the search for new insulin mimetics from S. cochinchinensis, the 70% EtOH extract of the plant produced a moderate increase in glucose uptake in differentiated adipocyte cells. Bioassay-guided fractionation resulted in the isolation of ten new hydroxyoleoside-type compounds, including eight phenolic hydroxyoleosides, symplocochinsides A-H (1-6, 8 and 9), and two monoterpene-derivatized hydroxyoleosides, symplocochinsides I and J (10 and 11), along with four known compounds, including a megastigmane and triterpene glycosides (Fig. 1). The absolute configurations of the monoterpene attached to hydroxyoleoside (10) and the megastigmane (12) were assigned by chemical methods coupled with spectroscopic analysis. All isolates were evaluated for glucose uptake level, GLUT4 translocation, PTP1B activity and Akt phosphorylation. In this paper, we report the isolation, structural elucidation, determination of absolute configuration and anti-diabetic properties of these isolates.

Symplocochinside B (2) was purified as a brownish gum, and its molecular formula was established as C 26 . Its NMR data (Table 1) were similar to those of 1 except for the configuration of the ferulic acid double bond. The J values of H-7′ (δH 6.86, d, J = 13.0 Hz) and H-8′ (δH 5.78, d, J = 13.0 Hz) of compound 2 are indicative of its cis configuration. Whether the cis form of compound 2 is a genuine plant-derived compound was verified from the retention time and abundance when a partial extract of S. cochinchinensis was co-injected with 2 on LC/MS. Hence, the structure of 2 was characterized as 10-O-cis-feruloyl-10-hydroxyoleoside. Symplocochinside C (3) (Fig. 1) was isolated as a brownish gum, and its molecular formula was established as C25H28O14 from the HRESI mass spectrum, which showed an ion peak at m/z 575.1371 [M + Na]+ (calcd for C25H28NaO14, 575.1371). The distinct UV pattern of 3, indicating the presence of a cinnamic acid moiety, and the characteristic proton peaks of H-1 (δH 5.90, s) and H-3 (δH 7.46, br s) showed the common features of this seco-iridoid class. The 1H and 13C NMR spectra of 3 were similar to those of 1 except for the absence of the methoxy group. Since a trans-cinnamic acid derivative can be converted to the cis isomer through photoisomerization 18, the structure of compound 3 was determined after conversion to the trans form by reaction with iodine 19. Thus, the structure of 3 was determined as 10-O-trans-p-coumaroyl-10-hydroxyoleoside. Symplocochinside D (4) was obtained as a brownish gum, and its molecular formula was established as C25H28O14 from the HRESI mass spectrum with a peak at m/z 575.1381 [M + Na]+ (calcd for C25H28NaO14, 575.1371).
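As a quick arithmetic check of the calculated adduct masses quoted above, the following minimal Python sketch (not from the paper; the monoisotopic masses are standard values, and the electron mass is subtracted because [M + Na]+ is a cation) reproduces the calcd value of 575.1371 for C25H28NaO14:

```python
# Minimal sketch: monoisotopic m/z of a sodium-adduct cation [M + Na]+.
MONO = {"C": 12.0, "H": 1.00782503, "O": 15.99491462, "Na": 22.98976928}
ELECTRON = 0.00054858  # mass removed when the adduct carries a +1 charge

def mz_sodium_adduct(formula: dict) -> float:
    """m/z of [M + Na]+ for a neutral molecule given as {element: count}."""
    neutral_mass = sum(MONO[el] * n for el, n in formula.items())
    return neutral_mass + MONO["Na"] - ELECTRON

# Compounds 3 and 4 (C25H28O14):
print(round(mz_sodium_adduct({"C": 25, "H": 28, "O": 14}), 4))  # -> 575.1371
```

The measured peaks at m/z 575.1371 (for 3) and 575.1381 (for 4) agree with this value within ordinary Q-TOF mass accuracy.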
The 1D NMR data of 4 (Tables 1 and 2) showed almost the same patterns as those of 3 except for the coupling constants of H-7′ (δH 6.87, d, J = 11.2 Hz) and H-8′ (δH 5.77, d, J = 12.8 Hz), indicating that compound 4 is the cis isomer of compound 3. The pure form of the cis-configured compound 4 could be obtained. Hence, the structure of 4 was identified as 10-O-cis-p-coumaroyl-10-hydroxyoleoside.

Symplocochinside E (5) (Fig. 1) was isolated as a brownish gum, and its molecular formula was established as C27H32O15 from the HRESI mass spectrum, which showed a sodium adduct ion peak at m/z 619.1673 [M + Na]+ (calcd for C27H32NaO15, 619.1633). The NMR spectra of 5 are given in Tables 1 and 2.

The HMBC correlation (Supplementary Fig. S26) of H-10 with C-7′ at δC 167.8 showed that the benzoic acid group is connected to C-10. Thus, the chemical structure of 8 was assigned as 10-O-benzoyl-10-hydroxyoleoside. Symplocochinside H (9) (Fig. 1) was purified as a brownish gum, and its molecular formula was established as C 18 .

Symplocochinside I (10), a yellowish gum, had the molecular formula C 26 . Its carbon signals included three sp3 primary carbons, three sp3 methylene carbons and one sp3 quaternary carbon with a deshielded chemical shift (δC 70.1), implying the presence of one oxygenated quaternary carbon, one carbonyl group, and two olefinic carbons. The HMBC correlations of H-10 with C-1′, of H-9′ with C-2′/C-4′, of H-6′ with C-4′/C-5′, of H-5′ with C-3′ and of H-4′ with C-2′/C-5′ suggested the connectivity of the 10-hydroxyoleoside skeleton with 3-hydroxydimethyloctenoic acid 20,21, a monoterpene also known as 3-hydroxycitronellic acid. The isolation of 3-hydroxycitronellic acid from 10 by selective hydrolysis was not successful because of racemization (data not shown). Therefore, after reaction with dirhodium(II) tetrakis(trifluoroacetate), the empirical ECD method was employed to determine the absolute configuration at C-3′ 22. The acetate form of compound 10 was purified after a peracetylation reaction (Supplementary Table S1) and was subjected to complexation with [Rh2(OCOCF3)4] in CDCl3. According to the bulkiness rule, a negative Cotton effect at 350 nm (band E) was observed (Fig. 3B), which means that the absolute configuration at C-3′ is 3′R. Thus, compound 10 was elucidated as (3′R)-10-O-3′-hydroxycitronellyl-10-hydroxyoleoside.

Symplocochinside J (11) was obtained as a yellowish gum, and its molecular formula was established as C 26 . The HMBC correlation (Supplementary Fig. S38) of H-9′ with C-2′/C-4′, together with an HMBC pattern similar to that of 10, signified that 11 is an analogue of 10-hydroxyoleoside substituted with a different monoterpene, namely geranic acid, which was supported by comparison with previously reported data 23. Thus, the structure of compound 11 was elucidated as 10-O-geranyl-10-hydroxyoleoside.

Compound 12 (Fig. 1) was obtained as a brownish gum, and its molecular formula was established as C13H20O3 from the HRESI mass spectrum with a peak at m/z 225.1484 [M + H]+ (calcd for C13H21O3, 225.1485). Comparison with previously reported NMR data 24 showed that the compound has the same planar megastigmane structure (Supplementary Table S1). However, the NOESY spectrum (Supplementary Fig. S43) indicated possibly different configurations at C-8 and C-9 from those of the known compound 8,9-dihydromegastigmane-4,6-diene-3-one.
In the NOESY spectrum, the correlations between H-7 at δH 6.06 (d, J = 9.2 Hz)/H-13 at δH 2.15 (d, J = 0.8 Hz) and between H-8 at δH 4.68 (dd, J = 9.4, 5.3 Hz)/H-9 at δH 3.75 (m) indicated that the C-7 olefin has the E configuration and that the relative configuration at C-8 and C-9 is [8R*, 9S*]. The specific rotation of the known compound is +54.0 (c 1.52, MeOH), whereas that of 12 was −88.9 (c 0.2, MeOH). Since the planar structure of the known compound was reported without an absolute configuration, the absolute configuration of compound 12 was determined by the helicity rule using ECD measurement after derivatization with dimolybdenum tetraacetate 25,26. This method is applicable because compound 12 is an erythro-1,2-diol and there is a bulkiness difference between the two substituents around the hydroxyl groups. The CD measurement after complexation with [Mo2(OAc)4] showed a negative Cotton effect in the band II region (403 nm, −0.15 mdeg), which was used as the diagnostic band (Fig. 3C). Accordingly, the structure of 12 was determined as (8R,9S)-8,9-dihydromegastigmane-4,6-diene-3-one.

Measurement of insulin mimetic activity with 2-NBDG in differentiated 3T3-L1 adipocytes. Unlike insulin acting in the brain, insulin at the periphery acts as an anabolic factor and causes weight gain as a side effect if the patient's energy usage is not increased. Insulin mimetics share insulin's ability to reduce food intake and body weight in rats when administered intracerebroventricularly. Considering this difference, insulin mimetics appear to be more advantageous than insulin because of their potential to pass through the blood-brain barrier (BBB), which allows us to search natural resources for insulin mimetics with fewer side effects. To screen for insulin mimetics, the 2-NBDG assay, which uses a fluorescently tagged glucose analogue to monitor glucose uptake into cells, was introduced. All isolates (1-14) were evaluated for 2-NBDG uptake in differentiated 3T3-L1 adipocytes at a concentration of 40 μM (Fig. 4A and Supplementary Fig. S49). Most of the phenolic acid-derivatized seco-iridoids showed activity, whereas compound 9, bearing an acetyl group, and compound 10, bearing a monoterpene, showed only weak activity. Comparison of the activities of the trans- and cis-configured compounds showed that the trans isomers had stronger activity than the cis isomers. Among these compounds, 3, 7 and 8 showed the strongest activities. Thus, fluorescence microscopy was used to measure fluorescent signals and assess the transport of 2-NBDG into cells. Increased signal intensities were observed in cells treated with 3, 7 and 8 at 40 μM compared with those of the control group (DMSO, Fig. 4B). The selected compounds 3, 7 and 8 also increased 2-NBDG uptake in a dose-dependent manner (Fig. 5A,B). Taken together, these results suggest that derivatization of seco-iridoids with trans-configured phenolic acids is relevant to the activity. These results are consistent with the ethnopharmacological history of this plant as a diabetes remedy in Ayurvedic formulations.

Further analyses (Supplementary Figs S61 and S62) provide insight into how these molecules stimulate glucose uptake through up-regulation of the Akt pathway.
The present data strongly indicate that compound 3 promotes glucose uptake through GLUT4 translocation associated with an increased level of GLUT4 expression. Although the mechanism of 3 is still unclear and further studies are needed in the near future, we propose that this compound could regulate glucose metabolism and insulin sensitivity.

Protein tyrosine phosphatase 1B (PTP1B) inhibitory activity of the isolated compounds. Upon insulin binding, glucose uptake is increased by a series of signals, including phosphorylation of the insulin receptor (IR), transformation of phosphatidylinositol (4,5)-bisphosphate (PIP2) to phosphatidylinositol (3,4,5)-trisphosphate (PIP3) by phosphatidylinositol 3-kinase (PI3K), activation of Akt, and translocation of GLUT4. PTP1B exerts negative regulation of insulin and leptin receptor signalling by dephosphorylating activated IR and Janus kinase 2 31. Since some seco-iridoids increased glucose uptake, as shown by 2-NBDG uptake and GLUT4 translocation, the PTP1B inhibitory activity, which is important in the relevant signal transduction process, was also assessed 32. When all isolated compounds were subjected to the PTP1B assay, compound 3 was found to be a good candidate PTP1B inhibitor, whereas 7 and 8 showed moderate activity at 50 μM (Supplementary Fig. S53). The IC50 of 3 was 19.54 ± 0.76 μM, showing moderate activity compared with the positive control tested (Supplementary Fig. S54). Compound 3 was further examined in kinetic experiments and was shown to inhibit PTP1B in a non-competitive manner at different concentrations (10, 20 and 40 μM; Fig. 5D). Molecular docking of compound 3 into the active site of PTP1B (PDB ID 1Q6T) was performed according to the CDOCKER protocol in the CHARMm-based docking algorithm 33. As shown in Fig. 5E, the C-7 carboxylic acid group forms a conventional hydrogen bond and a carbon-hydrogen bond with Asp548 and Gly759, respectively. Moreover, the benzene ring shows affinity for Phe682 via a π-π interaction (Supplementary Fig. S55). These key residues have been proposed as active sites A and B of PTP1B. Additionally, the CDOCKER interaction energy was calculated to be −61.18 kcal/mol. Overall, these interactions appear to contribute to the inhibitory activity of compound 3 against PTP1B.

Conclusions

In this paper, eight new analogues of the phenolic acid-conjugated 10-hydroxyoleoside type (1-6, 8 and 9) and two new monoterpene-conjugated compounds (10 and 11), along with one known seco-iridoid (7), one megastigmane (12) and two triterpenoids (13 and 14), were isolated from S. cochinchinensis. The absolute configurations of these compounds were determined by ECD analysis after reaction with dirhodium(II) tetrakis(trifluoroacetate) or dimolybdenum(II) tetraacetate. Compounds 3, 7 and 8 exhibited 2-NBDG uptake-increasing activity in differentiated 3T3-L1 adipocytes through GLUT4 translocation, which was evident in Western blot analysis. Compound 3 increased the GLUT4 expression level and direct GLUT4 translocation through the PI3K/Akt pathway via PTP1B inhibition. These results also imply a structure-activity relationship to some degree, in that derivatization with a phenolic acid and a trans configuration contribute to the activity. In conclusion, the investigation of new seco-iridoids as anti-diabetic compounds enriches the chemical profile of S. cochinchinensis and provides evidence for the traditional ethnopharmacological uses of this plant.

Methods

General experimental procedures.
Optical rotations were recorded on a JASCO P-2000 polarimeter (JASCO International Co. Ltd., Tokyo, Japan). ECD spectra were measured using a Chirascan Plus instrument (Applied Photophysics Ltd., Surrey, United Kingdom). IR data were obtained using a Nicolet 6700 FT-IR spectrometer (Thermo Electron Corp., Waltham, MA, USA). The 1D and 2D NMR spectra were obtained in deuterated solvents using an AVANCE 800 MHz spectrometer (Bruker, Germany). HRESIMS values were obtained using an Agilent Technologies 6530 Q-TOF MS spectrometer (Agilent Technologies, Inc., Santa Clara, CA, USA). Regular column chromatography (CC) was carried out with silica gel (particle size 63-200 μm; Zeochem, Lake Zurich, Switzerland), RP-C18 (particle size 75 μm; Nacalai Tesque, Kyoto, Japan), and Sephadex LH-20 (GE Healthcare, Little Chalfont, UK). Silica gel 60 F254 and RP-18 F254S TLC plates were obtained from Merck (Darmstadt, Germany). A Gilson HPLC purification system was used at a flow rate of 2 mL/min with UV detection at 205, 254, and 300 nm, using an Optima Pak C18 column (10 × 250 mm, 5 μm particle size; RS Tech, Seoul, Korea) and a COSMOSIL 5C18-MS-II column (10 × 250 mm, 5 μm particle size; Nacalai Tesque, Kyoto, Japan). Analytical-grade solvents were used for extraction and isolation.

Extraction and isolation. The stems and leaves of S. cochinchinensis (4 kg) were extracted with 70% EtOH (4 × 11 L, 4 h each) at 60 °C. The combined extract was concentrated on an evaporator to yield a dried residue (752.6 g). The dried extract was suspended in H2O and then partitioned successively with n-hexane, EtOAc and n-BuOH. The EtOAc portion (57.3 g) was subjected to silica gel CC (8 × 50 cm) eluted with a gradient of n-hexane/acetone from 5:1 to 0:1 to yield five fractions (F.1-F.5). F.5 (12 g) was subjected to reversed-phase chromatography.

Determination of the absolute configuration of sugars. Compound 9 (1.0 mg) was hydrolysed with 0.5 M HCl (1.0 mL) at 90 °C for 1 hour 35. The solution was neutralized with Na2CO3 and concentrated in vacuo. L-Cysteine methyl ester hydrochloride in anhydrous pyridine (0.5 mg) was added to the resulting residue, followed by heating at 60 °C for 1 hour. Phenyl isothiocyanate (0.1 mL) was then added, and the mixture was heated at 60 °C for 1 hour. The solution was analyzed by reversed-phase HPLC under the following conditions: an INNO C18 column (120 Å, 4.6 × 250 mm, 5 μm); MeCN/H2O mobile phase (27:73, v/v); a diode array detector; a detection wavelength of 254 nm; and a flow rate of 0.6 mL/min. Comparison of the retention time of the derivative of compound 9 with that of the derivative of an authentic sample of D-glucose (retention time 23.18 min) proved the D-configuration of the glucose moiety in compound 9.

Absolute configuration of the tertiary alcohol moiety in 10. Compound 10 (13.2 mg) was kept in pyridine/Ac2O 1:1 (4 mL) at room temperature for 19 hours to give the peracetylated compound 10a. The mixture was neutralized with NaHCO3 and dried under vacuum. A colorless gum (10.6 mg, 80.3%) was obtained by extraction with EtOAc, and the product was further purified to over 99% purity by preparative HPLC. Compound 10a (0.5 mg) was then dissolved in a dry solution of [Rh2(OCOCF3)4] (1.0 mg) in CDCl3 (600 μL). The resulting mixture was used for CD measurements, and the obtained CD spectrum was compared with that of compound 10a alone for clarity. The Cotton effect at 350 nm (E band) was correlated with the absolute configuration of the tertiary alcohol 36.

Preparation of Mo2 complexes of compound 12.
The CD spectra were measured at room temperature in DMSO with 1.0 nm/step scans using a 2 mm cell over the range of 250-650 nm, according to Snatzke's method 25. To form the complexes, compound 12 (0.18 mg, 1.33 mM) was dissolved in a solution of [Mo2(OAc)4] (0.34 mg, 1.33 mM) in DMSO, giving a 1:1 ratio of the molybdenum complex to the diol.

Measurement of glucose uptake using 2-NBDG in differentiated 3T3-L1 adipocytes. To determine the level of glucose uptake into 3T3-L1 adipocytes, a fluorescent derivative of glucose (2-NBDG; Invitrogen, OR, USA) was used as previously described, with slight modifications 34,37. First, 3T3-L1 preadipocytes were differentiated in Dulbecco's Modified Eagle's Medium (DMEM; HyClone, IL, USA) containing 10% fetal bovine serum (FBS; Gibco, NY, USA), 1 μM dexamethasone (Sigma, MO, USA), 520 μM 3-isobutyl-1-methyl-xanthine
Preserving a Comprehensive Vegetation Knowledge Base – An Evaluation of Four Historical Soviet Vegetation Maps of the Western Pamirs (Tajikistan)

We edited, redrew, and evaluated four unpublished historical vegetation maps of the Western Pamirs (Tajikistan) by the Soviet geobotanist Okmir E. Agakhanjanz. These maps cover an area of 5,188 km2 and date from 1958 to 1960. The purpose of this article is to make the historical vegetation data available to the scientific community and thus preserve a hitherto unavailable and up to now neglected or forgotten data source with great potential for studies on vegetation and ecosystem response to global change. The original hand-drawn maps were scanned, georeferenced, and digitized, and the corresponding land cover class was assigned to each polygon. The partly differing legends were harmonized and plant names updated. Furthermore, a digital elevation model and generalized additive models were used to calculate response curves of the land cover classes and to explore vegetation-topography relationships quantitatively. In total, 2,216 polygons belonging to 13 major land cover classes were included, characterized by 252 different plant species. As such, the presented maps provide excellent comparison data for studies on vegetation and ecosystem change in an area that is deemed to be an important water tower in Central Asia.

Introduction

The Western Pamirs of Tajikistan constitute an area of high biodiversity with 1,500-2,000 vascular plant species, including 160 endemics, that perform important ecosystem functions and services for the region and the adjacent lowlands [1-3]. However, there is strong evidence that land cover and vegetation in the Pamirs are changing, with negative impacts on ecosystem properties [4-7]. Particularly after the dissolution of the Soviet Union, pressure on natural resources and human-induced land cover change strongly increased [8]. This is primarily associated with a demand for fuel and agricultural products, leading to deforestation and overgrazing. For example, the area of Juniper woods strongly decreased during the last decades [2], and the number of cattle has increased since 1990 [9]. Overstocking led to a reduced pasture potential, including the expansion of unpalatable and harmful plants [4]. In contrast to pastures, which cover vast areas of the slopes, arable land and associated villages are limited to narrow river terraces and alluvial fans. In this area, riparian Tugai forests play the dominant role in river discharge regulation and embankment stability, and hence in the protection of soils and infrastructure [2]. These forests are strongly degraded because the energy crisis after the Soviet breakdown forced the local population to use the Tugai for fuelwood, leading to an increased vulnerability of arable land [10]. This situation was intensified by the increased severity of weather conditions, such as torrential rains [5,11], and by a rapid decrease in the mass balance and extent of local glaciers, resulting in increased summer run-off of the rivers [11-16], trends which are considered to be linked to climate change. In summary, this led to increased erosive forces and decreased erosion control at the same time. Consistent warming within the next decades [17] will further accelerate this development. The temperature increase might also destabilize harvests and therefore intensify food scarcity [6,18], for example due to increased insect infestations on fruit trees [5].
Furthermore, it might affect high-altitude species [19,20], which encompass many endemics and medicinal plants [7,21]. These plants become threatened by the upward shift of more competitive species from below, which might cause their decline or even extinction because they are 'trapped' on the summits and thus lack an escape route [20,22,23]. These findings indicate that the Western Pamirs are a highly dynamic region where anthropogenic and climatic impacts have affected vegetation patterns and ecosystem properties in recent decades. Hence, this area provides an ideal field laboratory for detailed studies of the impact of vegetation and ecosystem change on ecosystem functions and services and on the livelihoods of the people. However, such studies require baseline comparison data from the past. Here, we edited, redrew, and evaluated four unpublished historical vegetation maps and the corresponding field notes of the Soviet geobotanist Okmir E. Agakhanjanz (see section 2 and Fig 1) that cover 5,188 km2 of the Western Pamirs' districts Jazgulom, Rushan, Shugnan and Roshtkala and date from 1958 to 1960. A few other maps are available but not yet evaluated. The purpose of this article is to make the historical vegetation data available to the scientific community and thus preserve a hitherto unavailable and up to now neglected or forgotten data source with great potential for studies on vegetation and ecosystem response to global change [24,25]. For some vegetation units, where feasible, we give an estimate of recent developments based on our own observations.

Okmir Agakhanjanz and the History of the Vegetation Maps

"I am a geobotanist. I investigate the plant cover of the Central Asian mountains and prepare vegetation maps. Attentively I study the plants. I am interested to know how they form communities among themselves and how they thrive in their mountainous environment. At which altitudes do they grow? What kind of slopes and soils do they colonize? And why specifically those?" ([26], p. 7-8). Prof. Dr. Okmir E. Agakhanjanz (* 5 January 1927 in St. Petersburg; † 28 October 2002 in Minsk; Fig 1) started his geobotanical career on the Taimyr Peninsula in 1946. From 1949 he lived in Dushanbe (Tajik SSR) and was a member of the Department of Ecology and Experimental Geobotany at the Academy of Sciences of the Tajik SSR. His main duty was geobotanical mapping in various parts of the Soviet Union during many self-organized expeditions. The difficulties and special circumstances of many of these expeditions during Soviet times are described in AGACHANJANZ [27]. The main goals of the geobotanical mapping expeditions were to establish sound data on the grazing potential and biomass production of the natural vegetation in Darwaz, the Fergana Valley, in southern Tajikistan and predominantly in the Pamirs. Several geobotanical maps were produced for the latter area. Vegetation types were characterized.

Study Area

The Western Pamirs are located in the east of Tajikistan, in the Gorno-Badakhshan Autonomous Oblast (GBAO). The four maps discussed in this article cover 5,188 km2 of the districts Jazgulom, Rushan, Shugnan and Roshtkala, approximately between 37°N/71°21'E and 38°22'N/72°E (see Fig 2). Elevations range from less than 1,600 m asl in deeply incised valleys up to 6,231 m asl (Peak Vudor). The climate is strongly continental and mainly characterized by the influence of the Westerlies, which bring precipitation in winter, whereas the summer is dry.
Monsoonal influences are assumed to be blocked by the mountain ranges of the Hindu Kush and Karakoram. Nevertheless, minor rainfall occurs in summer, which might be related to monsoonal dynamics [28,29]. WALTER and BRECKLE [30] determined the annual mean precipitation to lie within a range of 90 to 217 mm per year. However, the amount of precipitation shows great local differences that are mainly linked to elevation and aspect. It can reach more than 500 mm per year near the snow line at 4,000 m asl, or it can be below 100 mm per year in shielded valleys [11,30,31]. The annual mean temperature ranges between 0.2 and 1.6 °C [4,32,33].

Material and Methods

The original hand-drawn maps were scanned, georeferenced, and digitized. Then, the corresponding land cover class was assigned to each polygon. In order to test spatial accuracy and to eliminate allocation errors, we carried out GPS-based field spot checks for 60 polygons. The polygons were then used to extract pixel-based values of the variables elevation, slope, north-exposedness, and east-exposedness at a spatial resolution of 90 m, derived from the Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) [34]. Aspect, as a circular variable, was transformed to north- and east-exposedness (i.e. the cosine and sine of aspect, see [35]). For discussion and comparison of the values (particularly with the elevation values given in the original map legends and in AGACHANJANC [31]), the minimum, maximum, arithmetic mean, and median were calculated. Furthermore, we applied generalized additive models (GAMs, [36]) to calculate response curves of the land cover classes and to explore vegetation-topography relationships quantitatively. GAMs are an extension of generalized linear models (GLMs, [37]) that allow for more complex response shapes than a linear one and hence for ecologically more meaningful environmental gradients. We evaluated the results of the GAMs based on the D2 value (100 × (null deviance − deviance)/null deviance), which represents the percentage of deviance explained and is analogous to the R2 produced by simple linear regression [38]. The GAMs were fitted using the function gam from the mgcv R package [39] with logit as the link function, a binomial error distribution, and smoothed spline fits with two degrees of freedom. Plant species were named according to the original map legends. Subsequently, the names were checked for validity against the Vascular plants of Russia and adjacent states [40] and the Afghan checklist of vascular plants [41], and by the expert W. B. Dickoré. Outdated names were supplemented with the new accepted name given in square brackets behind the original name. A few names are still under dispute; in these cases we use both names without indicating which is the synonym.

Description and Discussion of the Mapped Land Cover Classes

In this section, we present and discuss the digitized vegetation maps, the associated descriptions, and information on altitudinal distribution, slope and aspect. Agakhanjanz mapped altogether 13 major land cover classes (Fig 3) for an area of 5,188 km2 consisting of 2,216 polygons (i.e. spatially coherent patches or biotopes). These classes were further divided into various subunits. In total, the descriptions of these subunits list 252 different plant species. The polygons were used to extract the information on the four topographic variables given in Table 1.
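To make the two quantitative steps described in the Methods concrete (the transformation of circular aspect into north- and east-exposedness, and the D2 percentage of deviance explained), here is a minimal sketch with synthetic data; a logistic GLM from statsmodels stands in for the mgcv GAM smoother that was actually used, so the numbers are illustrative only:

```python
# Minimal sketch (assumptions: synthetic data; GLM instead of a GAM smoother).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
aspect_deg = rng.uniform(0, 360, 500)        # aspect of each polygon/pixel
elevation = rng.uniform(1600, 6200, 500)     # m asl, range of the study area

# Circular aspect -> two linear predictors:
northness = np.cos(np.radians(aspect_deg))   # +1 north-facing, -1 south-facing
eastness = np.sin(np.radians(aspect_deg))    # +1 east-facing,  -1 west-facing

# Synthetic presence/absence of a land cover class driven by elevation
p = 1 / (1 + np.exp(-(elevation - 3600) / 300))
presence = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([elevation, northness, eastness]))
fit = sm.GLM(presence, X, family=sm.families.Binomial()).fit()  # logit link

# D^2 = 100 * (null deviance - residual deviance) / null deviance
d2 = 100 * (fit.null_deviance - fit.deviance) / fit.null_deviance
print(f"D^2 = {d2:.1f}% of deviance explained")
```

In the paper itself, the smoothed spline fits with two degrees of freedom allow the unimodal elevation responses that a plain linear logit cannot capture.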
We found significant relationships between elevation and 12 of the 13 land cover classes (D2 between 0.3 and 34.6%, p < 0.0001, see Table 1). Slope was important for the distribution of cultivated land (D2 = 9.0%). Furthermore, Juniper vegetation, cushion plant vegetation, and floodplain meadows showed D2 values of nearly 6%. Aspect, regardless of whether north- or east-exposedness, showed only a minor relation to the distribution of the land cover classes. Only nival areas responded to north-exposedness (D2 = 4.5%). Tall forb communities and Rosaceae scrub also show relatively high values; however, this result is influenced by spatial autocorrelation due to the low number of polygons representing these land cover classes. (Response curves were determined by smoothed spline fits between the predicted probability of land cover class presence (y-axis) and elevation (x-axis; the last two graphs refer to slope and north-exposedness), estimated by a GAM using a binomial distribution (logit link) for binary data; shaded areas indicate the 95% confidence bands.)

Mountain Tugai

Tugai is the local name of Central Asian alluvial scrub and forest (Fig 5). Various woody species can dominate and therefore form different 'complexes'. Most important are species of willow (Salix), birch (Betula), poplar (Populus), and sea buckthorn (Hippophaë). Tugai vegetation occurs, rather locally, in all investigated areas and covers 67.6 km2 (1.3%) of the mapped area. According to the map legends, this formation occurs up to 3,700 m asl. The statistical analysis shows elevations between 1,787 and 4,505 m asl, with an average of 3,036 m asl (median 3,059). The response curve indicates a high probability of occurrence from the lowest elevations in the study area up to 3,500 m asl and then drops sharply. Tugai forests degraded in both area and structure, especially after the Soviet breakdown, when they were used for firewood. Meanwhile, however, the situation has improved due to programs of communal forest management [10].

Willow Tugai. In the Jazgulom area (Fig 6) willow species form floodplain and gallery forests. Salix turanica frequently grows up to 15 m tall and covers 70 to 80%. Other Salix species form a second, lower layer. A third layer consists of scrubs (Ribes janczewskii, Lonicera species) and tall grasses.

Other Tugai. Tugai dominated by sea buckthorn (Hippophaë rhamnoides) associated with Tamarix ramosissima and open grass vegetation, on alluvial sands and screes in the Rushan area (Fig 7). Currant vegetation with Ribes janczewskii associated with Polygonum coriarium [Aconogonon coriarium] and Rosa fedtschenkoana, near springs and river inlets in the Roshtkala area (Fig 8).

Rosaceae scrub

Rosaceae scrubs cover only very small areas of the study area (3.3 km2; <0.1%) and consist of almond, rose, and cotoneaster vegetation. According to the map legends and AGACHANJANC [31], they occur from 2,500 up to 2,900 m asl; the topographic analysis shows occurrences between 2,087 and 3,535 m asl with a mean of 2,716 m asl (median 2,698). The response curve shows a very high probability of occurrence up to 4,000 m asl and, thereafter, a steep decline towards zero probability.

Almond vegetation. Scrubs dominated by shrubs or small trees of Amygdalus bucharica associated with tall herbs (Eremurus stenophyllus [E. ambigens], Incarvillea olgae, Silene

Cotoneaster vegetation. Composed of Cotoneaster uniflorus and C. multiflorus associated with Lonicera microphylla.
Occurs on alluvial fans with abundant amounts of snow.

Juniper vegetation

Juniper vegetation occurs on 132.5 km2 (2.6%) of the study area at elevations between 1,929 and 4,607 m asl (mean 3,254, median 3,265). The probability of occurrence is very high between 2,500 and 4,000 m asl and drops sharply for elevations below and above these values. According to the map legends, Juniper vegetation is abundant up to 3,800 m asl (the latter only in the Roshtkala area). A characteristic feature of this vegetation type is a distinct shrub layer with Rosa kokanica, Rosa maracandica, Rosa korshinskiana, Lonicera korolkowii and, near springs, Betula pamirica. It is furthermore associated with dwarf shrubs and herbs (e.g. Artemisia persica, Artemisia lehmanniana, Cousinia pannosa, Cousinia rubiginosa, and on screes Acantholimon korolkovii, Acantholimon parviflorum) and grasses (Stipa caucasica, Stipa bella [S. drobovii], Stipa kirghisorum, Poa relaxa). Small patches of this association are widespread at higher altitudes (up to 3,800 m asl), on conglomerate slopes and on rocks in the Roshtkala area (Fig 8).

Mountain deserts

Mountain deserts cover 350 km2 (6.8%) of the mapped area and occur between 1,553 and 4,367 m asl (mean 2,894, median 2,917). The response curve indicates a very high probability of occurrence up to 3,500 m asl and, thereafter, a sharp drop that reaches zero probability at 4,500 m asl. AGACHANJANC [31] states a range from 2,000 to 3,500 m asl; the map legends give values between 2,500 and 3,400 m asl. Mountain deserts can be divided into four main types.

Cushion plant vegetation

Cushion plant vegetation occurs on 342 km2 (6.6%) of the mapped area at elevations between 2,303 and 4,605 m asl (mean 3,580, median 3,582). In AGACHANJANC [31] and the map legends, 3,000 m asl is stated as the minimum and 4,000 m asl (partly 4,700) as the maximum elevation. The response curve displays a probability peak of 0.8 at 3,800 m asl. A meadow-like type of Acantholimon korolkovii vegetation with open layers of Prangos pabularia occurs at high elevations with abundant snow cover in the Rushan area (Fig 7).

Mountain steppes

Mountain steppes cover 206 km2 (4.0%) of the mapped area and occur between 1,992 and 4,691 m asl (mean 3,664, median 3,701), with a probability peak of 0.9 around 3,600 m asl. AGACHANJANC [31] states an altitudinal distribution between 3,100 and 4,000 m asl; in the map legends, a range from 3,200 to 4,400 m asl is given. Four different steppe types can be differentiated: grass steppes, herbaceous steppes, prickly herbaceous steppes, and wormwood steppes. However, according to the authors' perception, prickly Cousinia herbs have spread intensively since the completion of Agakhanjanz's work; therefore, a differentiation between the two types of herbaceous steppes is no longer reasonable.

Tall forb communities

With 3 km2 (<0.1%), tall forb communities cover only very small parts of the study area. Their altitudinal distribution in the mapped area ranges between 2,178 and 3,930 m asl (mean 2,754, median 2,631); the map legends list 3,900 m asl as the maximum elevation, and AGACHANJANC [31] states a range between 3,100 and 3,300 m asl. The response curve shows a maximum probability of occurrence up to 3,000 m asl, followed by a decrease that reaches zero probability at 4,500 m asl.
Mountain meadows

Mountain meadows occur on 12.8 km2 (0.3%) of the mapped area at elevations from 2,079 to 4,220 m asl (mean 3,276, median 3,335), with a flat probability peak between 3,000 and 3,600 m asl. In the map legends an altitudinal distribution between 1,700 and 4,100 m asl is given, and in AGACHANJANC [31] one between 4,000 and 4,500 m asl. The rather large variability of this vegetation type and the discrepancy between the different data sources should be verified by future field work.

Floodplain meadows

Floodplain meadows are widespread in all mapped areas (94.0 km2, 1.8%) at elevations between 1,729 and 4,616 m asl (mean 3,695, median 3,667). In the map legends, values between 2,700 and 4,000 m asl (sometimes 4,800 m asl) are given. The response curve shows a distinct probability peak of 0.85 at 3,700 m asl. These meadows are limited to riparian habitats under the influence of groundwater or melting snow. Several associations can be differentiated, of which the most important are dominated by sedges (Carex) and bog sedges (Kobresia); according to field observations by the authors, these show degraded conditions due to strong grazing impact.

Nival areas

Nival areas consist of glaciers and firn fields without higher plants and cover 394.3 km2 (7.6%) of the mapped area. They occur between 3,069 and 6,029 m asl (mean 4,679, median 4,673), while according to the map legends they are found above 4,600-4,800 m asl, and above 3,900 m asl in the moister north (Jazgulom area). AGACHANJANC [31] gives 4,500 m asl as the threshold for the occurrence of nival areas. This value is also displayed by the response curve, which increases above 3,000 m asl and reaches a probability of 1.0 at around 4,500 m asl. Furthermore, for this land cover class a relatively high relevance of north-exposedness could be verified by the data analysis: the response curve shows the lowest probability values for values below zero (i.e. south-exposedness) and a linear increase of probability from 0 to 1.0 (i.e. fully north-exposed). Compared with the 1960s, the extent of nival areas has decreased due to the strong warming in this region [17].

Cultivated land

This land cover class predominantly consists of settlements, agricultural land, clover meadows, gardens (see Fig 17), and, in the Jazgulom area (Fig 6), also of walnut plantations with Juglans regia. Cultivated land covers 133.5 km2 (2.6%) of the mapped area at elevations between 1,596 and 3,930 m asl (mean 2,560, median 2,570). The response curve indicates a maximum probability of occurrence up to 3,800 m asl, followed by a sharp drop towards zero probability. According to the map legends, cultivated land reaches up to 3,400 m asl (rarely 3,700 m asl). For this land cover class, slope was also identified as an important environmental variable. The response curve shows a nearly linear trend from a maximum probability at 0°, via a probability of 0.5 at just under 30°, towards zero probability at 70°.

Conclusion

The presented maps depict a detailed description of the distribution and status of vegetation in the study area in 1958-1960. However, they also have some weaknesses that need to be outlined. Ground checks revealed spatial inaccuracies of the polygons. This applies particularly to highly elevated polygons, e.g. of cryophytic and subnival vegetation, and to very narrow polygons depicting Mountain Tugai, floodplain meadows and cultivated land.
For example, the edges of narrow Mountain Tugai polygons reached into the valley slopes and hence into neighbouring vegetation classes such as Artemisia deserts. Another problem is the unclear botanical taxonomy. Many species names are outdated or need to be verified, and existing determinations are often doubtful or highly debated by taxonomists. In addition, the number of species listed in the map descriptions is far from complete. The estimated number of species for the entire Western Pamirian flora is between 1,500 [2] and 2,000 [3]. Obviously, only prominent and/or dominant species are listed in the map descriptions. This is mainly because mapping at that time had the main goal of providing data on the productivity of the various vegetation types. Therefore, further taxonomic efforts are necessary, including the collection of herbarium specimens. Nevertheless, we are confident that the maps presented in this article provide a sound basis for the study of environmental changes such as those that occurred widely after the breakdown of the Soviet Union, and more recently because of increasing temperatures and heavy rains that can lead to extreme events such as floods, debris flows, or glacial lake outbursts [5,42,43]. For example, on 7 August 2002 a glacial lake outburst destroyed the village of Dasht in the Shakhdara valley, killed 24 people and displaced the Shakhdara river bed by about 1 km. A similar event occurred in the Red valley (a tributary of the Bartang valley in the Rushan area), where heavy rains caused a debris flow in summer 2011 that destroyed the cultivated land and Tugai forests almost completely, causing the abandonment of three small villages. Finally, since vascular plants react with some delay to changed environmental conditions, they are considered to display long-term trends and thus are useful indicators for an ecological assessment of the impact of climate change [23,25]. High mountain areas in particular are ideal sites for comparative studies of cold habitats [25], and the maps presented here can serve as a comparison baseline.
Chimeric Antigen Receptor (CAR)-T Cell Immunotherapy Against Thoracic Malignancies: Challenges and Opportunities

Unlike surgery, chemotherapy, radiotherapy and targeted therapy, chimeric antigen receptor-modified T (CAR-T) cells, a novel adoptive immunotherapy strategy, have been used successfully against both hematological tumors and solid tumors. Although several problems have reduced the therapeutic outcomes of engineered CAR-T cells in clinical trials for the treatment of thoracic malignancies, including the lack of specific antigens, an immunosuppressive tumor microenvironment, a low level of CAR-T cell infiltration into tumor tissues, off-target toxicity, and other safety issues, CAR-T cell treatment still holds great promise. In this review, we outline the basic structure and characteristics of CAR-T cells across their different generations, summarize the common tumor-associated antigens in clinical trials of CAR-T cell therapy for thoracic malignancies, and point out the current challenges and new strategies, aiming to provide new ideas and approaches for preclinical experiments and clinical trials of CAR-T cell therapy for thoracic malignancies.

INTRODUCTION

With the continuous improvement of living standards, the incidence and mortality of tumors are increasing rapidly worldwide (1). Among them, thoracic malignancies are common thoracic surgical diseases with high morbidity and mortality, mainly including lung cancer, breast cancer, esophageal cancer, pleural mesothelioma, and thymic cancer (2). According to estimates from the Global Cancer Statistics 2020, there were an estimated 5,103,160 new cases of thoracic cancers and 3,051,494 related cancer deaths (3), accounting for 26.45% and 30.64% of new cancers and cancer deaths worldwide, respectively. Thus, thoracic cancer is the leading cause of cancer-related death and a significant obstacle to enhancing life expectancy worldwide. In recent decades, despite advancements in our knowledge of tumor progression and in treatment strategies (e.g., radical surgery, chemotherapy, and radiotherapy) that have contributed to prolonged survival times of patients with thoracic cancers, the prognosis of thoracic cancers has not improved because of tumor mutation and heterogeneity (4,5). Moreover, many thoracic cancers are diagnosed at an advanced stage, which often misses the optimal treatment window, and are prone to recurrence after surgery (6-8). Thus, it is imperative to seek novel methods to stop tumor progression and prolong the survival time of patients with thoracic malignancies. In the past decade, numerous studies have used immunotherapy with checkpoint inhibitors, especially monoclonal antibody-targeted drugs, for the treatment of malignancies (such as solid tumors and hematological malignancies), but their application in preclinical and clinical studies still has some limitations (9). Moreover, cytotoxic T cells have been reported to act as important immune mediators in controlling tumor progression (10). Additionally, beneficial effects have been reported in patients with melanoma, lung cancer, and breast cancer treated with adoptive T cell therapy and genetically engineered T cells (11), indicating that T cells have the potential to eliminate malignant tumors under appropriate conditions. In some cases, thoracic cancers are already being controlled by T cell therapy, as in esophageal cancer (12), lung cancer (13), and breast cancer (14).
Of note, chimeric antigen receptor (CAR)-T cells, a modified T cell therapy, have attracted growing interest for malignant tumors in recent years (15) and are also considered a safe and reliable immunotherapy for malignant tumors (16). Currently, CAR-T cell immunotherapy has been highly successful in hematologic malignancies, with overall remission rates of more than 80% (17). For example, CAR-T cells targeting CD19 have a long-term remission effect on drug-resistant B cell malignancies, with a cure rate of approximately 85% in patients with relapsed and refractory acute B-lymphocytic leukemia and non-Hodgkin lymphoma (18,19). Currently, five types of CD19-targeted CAR-T cells have been approved by the US Food and Drug Administration (FDA) for the treatment of hematologic malignancies (20), opening up new directions for tumor immunotherapy and antitumor treatment. Simultaneously, a range of tumor-associated antigens (TAAs) targeted by CAR-T cells in solid tumors have been identified and are in early clinical trials (21,22). Moreover, several studies have focused on CAR-T cell immunotherapy for the treatment of thoracic cancers and have made good progress in clinical trials (22,23). These findings suggest that CAR-T cell immunotherapy may be a novel strategy for the treatment of thoracic tumors. In this review, we summarize recent research advances in CAR-T cell immunotherapy for thoracic malignancies, including the structure and generations of CAR-T cells and their clinical applications. Moreover, we focus on the main challenges and future prospects of CAR-T cell immunotherapy against thoracic cancers, aiming to provide new ideas for the design of clinical trials and the immunotherapeutic treatment of thoracic malignancies.

THE STRUCTURE AND GENERATION OF CAR-T CELLS

The Structure of CAR-T Cells

CAR-T cells are produced by isolating a patient's T cells, engineering them to express a CAR, and reinfusing them into the body, where they bind specifically to cancer cells (24). CARs are mainly composed of an extracellular antigen recognition domain, a hinge and transmembrane domain, and an intracellular signal transduction domain (Figure 1). The single-chain variable fragment (scFv) of the target antigen antibody, consisting of heavy-chain and light-chain variable regions, is specific to the TAA. The hinge and transmembrane domains connect the extracellular and intracellular domains and thereby transmit the signal that leads to CAR-T cell activation (25). Meanwhile, the length and flexibility of the transmembrane domain can also affect CAR function (26). The intracellular signal transduction domain mainly consists of the stimulatory CD3ζ chain and is often combined with other costimulatory molecules to activate T cell function (27).

Generations of CAR-T Cells

CAR-T cells are currently classified into five generations based on their intracellular signaling domains, with the main differences between generations being the specific costimulatory molecules (Figure 2). The first generation of CARs was so concise that it included only CD3ζ as the intracellular signaling domain (28). Lacking costimulatory molecules, first-generation CAR-T cells cannot provide prolonged T cell activation and therefore have a limited antitumor effect. In second-generation CARs, costimulatory molecules and inducible costimulators were added to enhance T cell proliferation (29).
Based on the observations that CD28-CAR-T cells are more potent in killing cancer cells while 4-1BB-CAR-T cells exhibit lower exhaustion rates and longer-lasting killing effects (30), third-generation CARs added both CD28 and OX-40/4-1BB (31). As cytokine secretion by third-generation CAR-T cells is upregulated and their inhibition of cancer cell proliferation is greatly enhanced (32,33), fourth-generation CARs, also known as T cells redirected for universal cytokine-mediated killing (TRUCKs) (34), add cytokine-encoding genes to enhance the cancer-killing effect through the secretion of inflammatory cytokines. Promisingly, fifth-generation CARs, in which OX-40/CD27 is replaced by the IL-2 receptor β chain, have shown potential by activating the Janus kinase and signal transducer and activator of transcription-3/5 (JAK/STAT3/5) pathway in tumors (35,36). However, both the safety and efficacy of the fifth generation still need to be investigated, and the possibly impaired transduction efficiency of these CAR-T cells should also be considered (37).

TARGET ANTIGENS FOR CAR-T CELL THERAPY IN CLINICAL TRIALS FOR THORACIC MALIGNANCIES

In recent decades, the difficulty of CAR-T cell immunotherapy in thoracic malignancies has mainly been due to the lack of ideal targets. The ideal TAA for CAR-T cell immunotherapy is expressed exclusively on all or most tumor cells but is not expressed, or is expressed at very low levels, on normal tissues (38), enabling CAR-T cells to trigger cancer-specific immune responses and thus spare healthy tissues (39). However, it is difficult to obtain an ideal TAA for CAR-T cell immunotherapy in thoracic malignancies comparable to CD19 in hematologic malignancies (40,41). Based on previous studies, we summarize a series of TAAs that could serve as antigenic targets for CAR-T cells in patients with thoracic tumors in Figure 3 and Table 1.

B7-H3

B7-H3 (CD276), a member of the B7 immunoglobulin superfamily that is highly expressed in many malignant tumors, serves as a molecular target for cancer immunotherapy (42). Numerous studies have demonstrated that B7-H3 facilitates tumor development and progression by promoting the malignant biological behavior of cancer cells (43,44), such as cell proliferation, migration, invasion, apoptosis, and metabolism. Moreover, overexpression of B7-H3 inhibits the activation of T cells and effectively suppresses the proliferation and cytotoxic functions of activated T cells. For example, inhibition of B7-H3 promoted the viability of cytotoxic T lymphocytes (CTLs) and natural killer (NK) cells and reduced the number of tumor-associated macrophages and the tumor load (45). Of note, B7-H3 is overexpressed in tissues of patients with thoracic malignancies (46-48), and antibody immunotherapy targeting B7-H3 did not lead to toxicity in vital organs (49). Scribner et al. (50) reported that the antibody drug MGC018 targeting B7-H3 possessed antitumor activity in patient-derived xenograft models of breast cancer and lung cancer. These studies indicate that B7-H3 may be an ideal TAA for cancer immunotherapy. Recently, several clinical studies showed that B7-H3-targeted CAR-T cells exhibited effective antitumor activity in hematologic tumors (e.g., acute myeloid leukemia) (51) and solid tumors (e.g., brain tumors, ovarian cancer, prostate cancer, melanoma) (52-54). Meanwhile, several clinical trials have been designed to test the safety, tolerability, and feasibility of B7-H3-targeted CAR-T cells against thoracic tumors, including NCT05341492, NCT04864821, and NCT03198052.
Overall, B7-H3-targeted CAR-T cells may be a novel curative approach for B7-H3-positive patients with thoracic tumors.

CEA (Carcinoembryonic Antigen)

CEA is a glycoprotein belonging to the immunoglobulin superfamily, and its expression is positively correlated with tumor incidence (55). Meanwhile, analysis of the TCGA database revealed that CEA is highly expressed in thoracic tumors (e.g., lung, breast, and esophageal), and patients with high CEA expression showed a worse prognosis. Previous studies have also proven that CEA serves as an ideal target for the treatment of gastrointestinal tumors (56,57). Preclinical data have confirmed that serum CEA concentrations in patients with advanced non-small-cell lung cancer (NSCLC) were correlated with the occurrence of brain metastases (58), and high CEA expression was associated with clinicopathological characteristics in lung cancer patients, including lymph node metastasis and vascular infiltration (59). Recent studies confirmed that CEA-targeted CAR-T cells inhibited tumor growth and enhanced the overall survival time of tumor-bearing mice (60,61). Importantly, CEA-specific CAR-T cells exhibited an antitumor effect in patients with CEA-positive solid tumors without causing cytokine release syndrome (62).

EGFR (Epidermal Growth Factor Receptor)

EGFR, which is highly expressed on the membrane surface of many solid tumor cells and is involved in nearly all aspects of malignant cancer, belongs to the ErbB family of growth factor receptor tyrosine kinases (63,64). Previous studies have shown that EGFR expression is upregulated in the tissues of patients with thoracic malignancies (65-67), indicating that it can be an effective biomarker for the diagnosis and treatment of thoracic tumors (68). The results of a clinical trial in EGFR-positive relapsed/refractory (R/R) NSCLC (NCT01869166) showed that none of the patients experienced significant toxic side effects after anti-EGFR CAR-T cell therapy; two patients achieved partial remission, and five patients had stable disease for 2-8 months. Xia et al. (69) reported that third-generation EGFR-targeted CAR-T cells exerted potent and specific suppression of triple-negative breast cancer (TNBC) cell growth in vitro and in vivo by activating the Fas/FADD/caspase pathway. These studies suggest that EGFR-targeted CAR-T cell therapy could be utilized in the treatment of patients with EGFR-positive thoracic malignancies in the future, although additional clinical studies are needed to confirm these results.

EpCAM (Epithelial Cell Adhesion Molecule)

EpCAM is a transmembrane glycoprotein also known as CD326. Previous studies have demonstrated that overexpression of EpCAM is associated with poor prognosis in patients with esophageal squamous cell carcinoma (70), lung cancer (71), and breast cancer (72), and it can be used as a marker for circulating tumor cells involved in cancer cell metastasis (73). Meanwhile, EpCAM plays a key role in tumorigenesis and metastasis (74). Hiraga et al. (75) showed that high expression of EpCAM was closely associated with bone metastasis in breast cancer. Importantly, EpCAM is an excellent target for various therapeutic approaches, including immunotherapy, because it is uniformly expressed on the surface of tumor cells (76,77). As expected, a clinical trial also confirmed that EpCAM-targeted CAR-T cells are safe and effective in the treatment of EpCAM-positive gastric cancer (78).
Taken together, EpCAM may be a promising target for CAR-T cell therapy in thoracic malignancies. FAP (Fibroblast Activation Protein) FAP is a marker expressed on cancer-associated fibroblasts in human solid tumors (79). Previous studies have found that overexpression of FAP facilitates cancer cell proliferation, invasion, and angiogenesis (80), and that FAP serves as a novel target for various cancer therapies (81). In addition, FAP has been reported to be an excellent target for immunotherapy in glioblastoma (82). HER2 (Human Epidermal Growth Factor Receptor 2) HER2 is a transmembrane glycoprotein that has become more widely studied as a target for tumor therapy in recent years. Previous studies have confirmed that HER2 is highly expressed in thoracic malignancies (85) and facilitates the proliferation, invasion, and angiogenesis of cancer cells (86). Of note, HER2 serves as a promising biomarker for the diagnosis and treatment of solid tumors (87,88), which has attracted many scholars to focus on HER2 as a novel target for cancer immunotherapy. For example, HER2-targeted CAR-T cells inhibited xenograft growth in esophageal cancer mouse models and reduced proinflammatory cytokine secretion (89). Another study demonstrated that third-generation HER2-targeted CAR-T cells exhibited an antitumor effect on HER2-positive and trastuzumab-resistant breast cancer in vivo (90). The above studies suggest that HER2 may be clinically effective as a target for CAR-T cell immunotherapy in the treatment of thoracic malignancies. Mesothelin (MSLN) MSLN is a cell adhesion glycoprotein, and its overexpression is positively correlated with high tumor aggressiveness and poor prognosis in patients with thoracic malignancies (91-94). Importantly, MSLN has been reported to be one of the more desirable TAAs for CAR-T cell therapy in solid tumors (95). MUC1 (Mucin 1) MUC1 is a transmembrane protein that facilitates cancer cell adhesion and metastasis (102). Previous studies have confirmed that MUC1 is aberrantly overexpressed in thoracic malignancies, including lung cancer (103), breast cancer (104), and esophageal cancer (105), and serves as an oncogene in the tumorigenesis of various human adenocarcinomas. Of note, MUC1 has been reported to be a reliable target for immunotherapy of solid malignancies (106). Wei et al. (107) showed that CAR-T cells targeting prostate stem cell antigen (PSCA) and MUC1 significantly eliminated tumor cells positive for both PSCA and MUC1 in NSCLC. Another study reported that MUC1-targeted CAR-T cells reduced the proliferation capability of esophageal cancer cells by activating the JAK/STAT pathway and inhibited tumor growth in transplantation models and patient-derived tumor xenograft (PDX) models of esophageal cancer in vivo (108). In addition, six clinical trials are currently evaluating the safety and efficacy of anti-MUC1 CAR-T cell therapy in thoracic malignancies (NCT03179007, NCT02587689, NCT03198052, NCT03706326, NCT03525782, and NCT05239143). Programmed Death-Ligand 1 (PD-L1) Targeting the programmed death-1 (PD-1)/PD-L1 signaling pathway has made substantial progress in the immunotherapy of thoracic malignancies in recent years (109). Numerous studies have confirmed that PD-L1 serves as an important immune checkpoint that is upregulated in various malignant tumors, including thoracic tumors (110,111).
Previous studies have demonstrated that PD-L1 can inhibit T cell proliferation and activation by binding to PD-1 on T cells, ultimately leading to the immune escape of tumor cells (112,113). Meanwhile, the treatment of malignancies with PD-L1 antibodies has shown safe and exciting results in preclinical studies and clinical trials (114). Of note, preclinical studies demonstrated that PD-L1-targeted CAR-T cells possessed potent cytotoxic effects against NSCLC (115) and breast cancer (116). Qin et al. (117) reported that CAR-T cells targeting PD-L1 significantly inhibited the growth of multiple types of solid tumors in PDX mouse models. Another study proved that PD-L1-targeted CAR-T cells exhibited antigen-specific activation, cytokine production, and cytotoxic activity against PD-L1-high NSCLC cells and xenograft tumors, and that the addition of a subtherapeutic dose of local radiotherapy improved the efficacy of PD-L1 CAR-T cells against PD-L1-low NSCLC cells and xenograft tumors (115). Moreover, inactivation of the PD-1/PD-L1 pathway enhanced the toxicity of CAR-T cells against tumor cells (118). Currently, several clinical trials are investigating the safety and efficacy of PD-L1-targeted CAR-T cells in thoracic malignancies (NCT03060343, NCT04556669, NCT04684459). However, a pilot phase I study of anti-PD-L1 CAR-T cell immunotherapy for advanced lung cancer was terminated due to serious adverse events (NCT03330834). Therefore, further evaluation of the potential applications of anti-PD-L1 CAR-T cell therapy in clinical trials is needed. ROR1 (Receptor Tyrosine Kinase-Like Orphan Receptor 1) ROR1, a tyrosine kinase-like orphan receptor, is upregulated in both lung cancer and breast cancer but has very low expression in normal tissues (119). Zheng et al. (120) demonstrated that ROR1 is an independent prognostic biomarker for overall survival. Importantly, the antitumor activity of anti-ROR1 CAR-T cells was equivalent to that of CD19 CAR-T cells in human mantle cell lymphoma (121). In both breast and lung cancer models, ROR1-targeted CAR-T cells significantly restricted tumor growth and prolonged survival (122). A recent study demonstrated that treatment with anti-ROR1 CAR-T cells could effectively kill NSCLC and TNBC cells in a three-dimensional tumor model (123). Thus, targeting ROR1 may be an effective strategy to improve CAR-T cell efficacy in the clinical treatment of thoracic malignancies. CURRENT CHALLENGES AND STRATEGIES OF CAR-T CELL THERAPY IN THORACIC MALIGNANCIES CAR-T cell immunotherapy in solid tumors, and especially in thoracic malignancies, still faces many obstacles compared with the various types of malignant hematological tumors. The following aspects need to be taken into consideration for CAR-T cell immunotherapy in thoracic malignancies (Figure 4): (1) on-target/off-tumor toxicity; (2) tumor antigen escape; (3) neurological toxicity; (4) the immunosuppressive microenvironment; and (5) CAR-T cell trafficking and tumor infiltration. Overcoming these challenges is currently a major focus of CAR-T cell therapy in thoracic malignancies. On-Target/Off-Tumor Toxicity The most critical problem with CAR-T cell therapy for solid tumors is the lack of an ideal TAA. The degree of on-target/off-tumor toxicity is a key determinant of the success of candidate TAAs for CAR-T cells (130). ERBB2 expression is relatively low in normal lung tissue; however, Morgan et al.
(131) reported that infusion of anti-ERBB2 CAR-T cells caused a colon cancer patient to develop respiratory distress within 15 minutes; the patient eventually died after 5 days. Meanwhile, the off-tumor toxicity of CAR-T cells may cause normal organ dysfunction (132). The screening and discovery of novel tumor antigens (133), dual-CAR systems (134) and suicide genes (135) may help avoid these risks. Recently, many novel tumor antigens [e.g., intercellular adhesion molecule-1 (ICAM1) (136), NKG2D (137), VEGFR2 (138), MUC4 (139), and cluster of differentiation (CD)70 (140)] were reported to be effective targets for CAR-T cell therapy of solid tumors. Wang et al. (141) showed that CAR-T cells using chlorotoxin as the targeting domain exhibited anti-glioblastoma (GBM) activity and induced tumor regression in orthotopic xenograft GBM models, with the potential to reduce antigen escape during CAR-T cell therapy. Moreover, a newer technology, single-cell RNA sequencing, may provide a more accurate target antigen expression profile for TAA selection, allowing better prediction of the efficacy and toxicity of novel CAR-T cell therapies in tumors (142). Choi et al. (143) demonstrated an elegant approach to overcoming EGFRvIII antigen loss, using EGFRvIII-targeting CAR-T cells that secrete a bispecific T cell engager (BiTE) against wild-type EGFR; in contrast to EGFR-specific CAR-T cells, these CAR-T-BiTE cells did not cause toxicity against human skin grafts in vivo. Furthermore, designing CAR-T cells that target multiple antigens in combination may also be an effective strategy to enhance tumor eradication (144). For example, Roybal et al. (145) found that anti-GFP and anti-CD19 dual-specific CAR-T cells significantly inhibited K562 cell proliferation and xenograft tumor growth. Meanwhile, preclinical studies showed that GD2/B7-H3 (146) or ROR1/B7-H3 (147) SynNotch CAR-T cells killed tumor cells with high specificity and efficacy, without toxicity to normal cells expressing only a single target antigen. Neurological Toxicity Neurotoxicity is characterized by various neurological symptoms, including headache, aphasia and delirium, and even cerebral hemorrhage, seizures, and death (148). During this process, the systemic inflammatory response associated with cytokine release syndrome (CRS) may contribute to the risk of neurotoxic complications (149,150). The activation of endothelial cells may facilitate the occurrence of neurotoxicity (151), which has been supported by autopsy findings of endothelial dysfunction and blood-brain barrier disruption (152). Importantly, neurotoxicity is largely reversible and can resolve completely after treatment with tocilizumab and dexamethasone, although recovery is slower after tocilizumab treatment in neurotoxicity patients with endothelial cell activation (153). Cytokine Release Syndrome (CRS) CRS is induced by T cell activation and commonly presents with fever, chills, muscle pain, generalized weakness, and systemic organ failure (154). Activated CAR-T cells are the leading cause of CRS and can result in a significant increase in the secretion of proinflammatory factors by immune cells (155).
To avoid this disadvantage, controlled gene "devices", such as herpes simplex virus thymidine kinase (HSV-TK), human inducible caspase 9 (iCasp9), mutant human thymidylate kinase (mTMPK), and human CD20, have been applied to CAR-T cells and shown to be effective in reducing proinflammatory cytokine secretion and clearing CAR-T cells from the body in a timely manner in cases of acute toxicity (156-159). Apart from that, dasatinib can also act as a CAR-T cell "switch" to control the biological function of CAR-T cells after their entry into the body and to protect mice from CRS (160). Moreover, optimizing CAR gene transfection can regulate the in vivo lifespan and kinetics of CAR-T cells (161), and the use of nanoparticles can reduce or avoid CRS (162). Overall, avoiding CRS damage after CAR-T cell immunotherapy will be a key issue in the treatment of thoracic malignancies in the future. Immunosuppressive Microenvironment The immunosuppressive TME is characterized by hypoxia, oxidative stress, and tumor-derived cytokine suppression, which greatly restricts CAR-T cell therapy (22). Suppressive immune cells, including regulatory T cells, myeloid-derived suppressor cells, and tumor-associated macrophages, can be activated by a variety of immunosuppressive factors released by tumor cells (163). Of note, preclinical studies have extensively shown that the TME is hostile to T cells (164,165). All these studies suggest that counteracting the immunosuppressive effects of the TME may enhance the anticancer effects of CAR-T cells. Some groups have demonstrated that PD-1-blocking scFv-secreting CAR-T cells significantly prolonged the survival time of tumor-bearing mice (166), and that CAR-T cells overexpressing a PD-1 dominant-negative receptor can act as a "decoy receptor" to bind and block PD-L1/2 inhibitory signals (167). In addition, IL-7/IL-5 exhibited antitumor activity by promoting CAR-T cell proliferation, reducing CAR-T cell apoptosis, and reshaping the immunosuppressive TME (168). Therefore, CAR-T cells co-expressing immune-related factors may be an effective solution for the clinical treatment of thoracic malignancies. CAR-T Cell Trafficking and Tumor Infiltration In the treatment of hematologic malignancies, CAR-T cells can effectively exert their antitumor effects through direct contact with tumor cells. However, when treating thoracic malignancies, the ability of CAR-T cells to infiltrate solid tumors is restricted by physical barriers in the tumor tissue [e.g., cancer-associated fibroblasts (CAFs) and dense extracellular matrix (ECM)] (15), which results in reduced antitumor effects. In addition, the immunosuppressive TME also limits the penetration and movement of CAR-T cells within solid tumors (169). Thus, improving the ability of CAR-T cells to specifically degrade the ECM in stroma-rich solid tumors without compromising their cytotoxicity (170) might be an effective strategy to alleviate the above limitations. For example, Caruana et al. (171) reported that engineered CAR-T cells expressing heparanase, which degrades heparan sulfate proteoglycans, the main components of the ECM, showed enhanced T cell infiltration into the tumor and improved antitumor activity. Wang et al. (83) showed that FAP-targeted CAR-T cells possessed an antitumor effect on solid tumors by reducing tumor fibroblasts and enhancing host immunity, without severe toxicity in xenograft models.
Recent studies have confirmed that engineered CAR-T cells expressing chemokine receptors (e.g., CXCR1, CXCR2, CXCR4) contribute to enhanced CAR-T cell trafficking and tumor infiltration (172,173), as well as improved antitumor activity. Overall, further studies are needed to develop new delivery strategies that improve the penetration of CAR-T cells into tumor tissues, which will enhance the efficacy of CAR-T cells in thoracic malignancies. Tumor Antigen Escape Other factors currently limiting the antitumor effect of CAR-T cells in malignancies may be related to antigen escape. For example, anti-CD19 CAR-T cell therapy caused the loss of the CD19 target antigen in R/R B cell acute lymphoblastic leukemia (B-ALL) patients (174). In addition, target antigen escape is a major cause of R/R cancer and a key factor behind the failure, or the stronger side effects, of expanding the use of CAR-T cells toward solid cancers with multiple surface antigens (175). The construction of CAR-T cells containing dual targets may be an effective strategy to address this problem. For example, in a phase I clinical trial (NCT03330691) for the treatment of R/R B-ALL, the therapeutic effect of CAR-T cells with dual CD19 and CD22 targets was better than that of single-target CD19 or CD22 CAR-T cells, avoiding the target antigen escape that occurs with single targets. Moreover, anti-CD19/BAFF-R CAR-T cell therapy showed prolonged in vivo persistence and exhibited antigen-specific cytokine release, degranulation, and cytotoxicity against both CD19-negative and BAFF-R-negative variant human ALL cells in vitro (176). Another study showed that CAR-T cells targeting BAFF-R could overcome CD19 antigen loss in B cell malignancies (177). These findings are important for developing approaches to overcome the risk of tumor antigen escape in CAR-T cell immunotherapy for thoracic tumors. OPPORTUNITIES TO IMPROVE CAR-T CELL SAFETY AND EFFICACY Previous studies have shown that uncontrolled CAR-T cell proliferation in patients with malignancies treated with CAR-T cells can cause severe toxicity (178,179). Numerous studies have since developed methods to improve the safety and efficacy of CAR-T cell therapy in solid tumors, as described below. Removal of Residual CAR-T Cells The integration of "suicide genes" into T cells serves as an inducible safety switch that allows transduced CAR-T cells to kill themselves in the case of adverse events (180). Preliminary studies have shown that different suicide genes, such as HSV-TK, iCasp9, mTMPK, and human CD20, can be expressed in donor T cells (158,159) and have shown promising, safe suicidal effects in early-phase clinical trials of CAR-T cell therapy. Functionally, activation of HSV-TK, iCasp9 and CD20 eventually resulted in effective T cell destruction; however, whereas iCasp9 and CD20 induced immediate cell death, HSV-TK-expressing T cells required 3 days of exposure to ganciclovir, and mTMPK-transduced cells showed the poorest killing rates (181). Klopp et al. (182) showed that depletion of T cells via iCasp9 increased the safety of adoptive T cell therapy against chronic hepatitis B. Another study showed that the HSV-TK suicide gene could enhance the safety of anti-CD44v6 CAR-T cell therapy in lung cancer (128). To date, only two suicide genes (HSV-TK and iCasp9) have demonstrated an excellent safety profile in clinical trials (NCT00423124; ChiCTR-OOC-16007779).
ON/OFF-Switch for CAR Engineered CAR-T cells, as autonomous "living drugs" for cancer treatment, currently lack precise control and may cause toxicity, suggesting that equipping CARs with small-molecule ON/OFF switches may address the above limitations (183,184). For example, Wu et al. (157) designed ON-switch CARs that enable small-molecule (e.g., AP21967) control over T cell therapeutic functions while still retaining antigen specificity. Similarly, another study established a new CAR structure with an integrated ON-switch system that controls the function of CAR-T cells; CAR-T cells with this integrated, controllable transient behavior exhibited antitumor activity over multiple cytotoxic cycles using small-molecule drugs, without severe toxicity (156). Jan et al. (185) constructed an ON-switch CAR (a lenalidomide ON-switch split CAR) and an OFF-switch CAR (a lenalidomide OFF-switch degradable CAR). Importantly, treatment with lenalidomide restricts only the short-term toxicity of CAR-T cell immunotherapy and does not affect the long-term antitumor effects of CAR-T cells. Moreover, Frankel et al. (186) proposed that bifunctional molecules could act as a bridge, binding CD3 molecules on cytotoxic T cells on one side and associated antigens on the surface of tumor cells on the other, thus activating T cells through a double switch and effectively destroying the target cells. Improving Trafficking Currently, the delivery of CAR-T cells for solid tumors can be performed via surgically placed devices (e.g., for central nervous system tumors), by intra-arterial delivery, or by direct intratumoral injection. For example, Brown et al. (187) reported that the inhibition of tumor growth and the upregulation of immune cytokine levels achieved by intracranial infusion of CAR-T cells targeting IL13Rα2 were not associated with toxic effects. Tchou et al. (188) showed that intratumoral injection of anti-cMET CAR-T cells halted tumor growth in patients with metastatic breast cancer and evoked an inflammatory response within tumors, and none of the patients had study drug-related adverse effects greater than grade 1. In addition, prompting CAR-T cells to express chemokine receptors may also be an effective strategy to accelerate CAR-T cell trafficking to tumors. One study (189) demonstrated that CCR4 can serve as a novel target antigen for the treatment of T cell malignancies with CAR-T cells. However, there is controversy about the optimal chemokine receptor for improving CAR-T cell trafficking (190). Furthermore, many chemokines are used as target antigens for CAR-T cells in solid tumor treatment (172,191-193). Improving CAR-T Cell Manufacturing Autologous CAR-T cells are patient-derived personalized products that can achieve long-term antitumor activity but still have many drawbacks, such as treatment delays (2 to 4 weeks), complex manufacturing procedures, and increased costs (194). Importantly, the development of universal CAR-T cells could simplify the manufacturing process and expand production, facilitating the immediate delivery of immunotherapy at a lower cost (195). For example, Choi et al. (196) created universal EGFRvIII CAR-T cells using the CRISPR-Cas9 system and showed significant antitumor activity in preclinical glioma models and prolonged survival in mice bearing intracranial tumors. In addition, phase I clinical trials of universal CAR-T cells targeting MSLN (NCT03545815) and NKG2D (NCT03692429) are underway to seek safe and effective therapeutic methods.
FUTURE PERSPECTIVES FOR CAR-T CELL THERAPY IN THORACIC MALIGNANCIES The success of CAR-T cell therapy in hematologic malignancies has inspired its application to thoracic malignancies, a field that has entered a phase of rapid development (36). Future studies on CAR-T cells may include, but are not limited to: (1) searching for more specific target antigens; (2) reforming the CAR structure to enhance the efficacy, specificity, and survival time of CAR-T cells; (3) decreasing the toxicity of CAR-T cells; (4) constructing CAR-T cells that target the TME of thoracic malignancies; (5) exploring combination therapies; and (6) establishing natural ligand-receptor-based CAR-T cells. Importantly, these modified CARs are being studied in animal models and clinical trials in an attempt to mitigate tumor antigen heterogeneity and may eventually form the next generation of CAR-T cells (197). In conclusion, the above efforts will provide safer and more effective clinical applications of CAR-T cell immunotherapy for thoracic malignancies. CONCLUSION We have summarized the structure and history of CAR-T cells and the common and uncommon TAAs used in CAR-T cell therapy against thoracic malignancies, and have pointed out current challenges and possible effective strategies. Thoracic malignancies, including lung cancer, breast cancer, mesenchymal malignancies, and esophageal cancer, account for nearly one third of new cancers and deaths worldwide. Thus, thoracic cancer is the leading cause of cancer-related death and a significant obstacle to improving life expectancy worldwide. Unlike chemotherapy, radiotherapy and targeted therapy, CAR-T cell immunotherapy against thoracic malignancies represents a brand-new treatment choice. Although there are some limitations, the beneficial results of preliminary trials suggest a promising future for its application in the subsequent clinical treatment of thoracic malignancies. On-target/off-tumor toxicity, tumor antigen escape, CAR-T cell-associated toxicities, the immunosuppressive microenvironment, and limited CAR-T cell trafficking and infiltration are the major disadvantages. However, via the screening of specific target antigens, improved trafficking and improved CAR-T cell manufacturing, CAR-T cell therapy may improve on its current status in the near future. CAR-T cells have achieved great success in the field of hematological tumors, stimulating many researchers to study the application of CAR-T cells to thoracic malignancies. Fortunately, both experimental and clinical trials of CAR-T cells for thoracic malignancies are underway, which will greatly promote the clinical application of CAR-T cell treatment. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors. AUTHOR CONTRIBUTIONS LC, FC and HN designed the study and wrote this manuscript. JL, YP, CY, and YW compiled and analyzed the literature. KL, YL and YH proposed the study, revised, and re-organized the manuscript. All authors read and approved the final manuscript.
2022-07-14T13:22:09.841Z
2022-07-14T00:00:00.000
{ "year": 2022, "sha1": "d767264786d39457f481ff25024224fcff8d3937", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "d767264786d39457f481ff25024224fcff8d3937", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
756822
pes2o/s2orc
v3-fos-license
End-tidal carbon dioxide monitoring during bag valve ventilation: the use of a new portable device Background For healthcare providers in the prehospital setting, bag-valve mask (BVM) ventilation could be as efficacious and safe as endotracheal intubation. To facilitate the evaluation of efficacious ventilation, capnographs have been further developed into small and convenient devices able to provide end-tidal carbon dioxide (ETCO2). The aim of this study was to investigate whether a new portable device (EMMA™) attached to a ventilation mask would provide ETCO2 values accurate enough to confirm proper BVM ventilation. Methods A prospective observational trial was conducted in a single level-2 centre. Twenty-two patients under general anaesthesia were manually ventilated. ETCO2 was measured every five minutes with the study device, and venous PCO2 (PvCO2) was simultaneously measured for comparison. Bland-Altman plots were used to compare ETCO2 and PvCO2. Results The patients were all hemodynamically stable, with stable respiration, during anaesthesia. End-tidal carbon dioxide values corresponded to venous blood gases during BVM ventilation under optimal conditions. The bias, i.e., the mean of the differences between the two methods (device versus venous blood gases), ranged from -1.37 to -1.62 for time points 1-4. Conclusion The portable device EMMA™ is suitable for determining carbon dioxide in expired air (kPa) as compared with simultaneous samples of PvCO2. It could therefore be a supportive tool to assess BVM ventilation in the demanding prehospital and emergency setting. Background In a prehospital setting, it is necessary that airway management can be easily attempted and maintained [1]. Endotracheal intubation (ETI) is regarded as the gold standard for airway management in advanced life support, but the procedure requires training and experience [2][3][4]. Prehospital ETI increases neither the survival rate nor the neurologic outcome of trauma patients [5]. Therefore, bag-valve mask (BVM) ventilation should be the preferred technique, as it is equally efficacious and safe, particularly if healthcare providers are inexperienced [1,3,4,6]. On the other hand, it is most important to provide successful airway management using BVM [7]. Guidelines from the European Resuscitation Council (ERC) state that all healthcare providers should be trained to use BVM for ventilation during cardiopulmonary resuscitation [8]. BVM, however, is dependent on provider technique, and to facilitate the evaluation of this it could be beneficial to use a small capnography device (EMMA™). The aim of this study was to investigate whether a new portable device attached to a ventilation mask can give end-tidal carbon dioxide (ETCO2) values corresponding to carbon dioxide measurements from venous blood gases (PvCO2). Methods This was a prospective observational study. The study was approved by the Ethical Board of Stockholm County, Stockholm, Sweden (2009/652-31/3). Twenty-two women undergoing breast surgery were included after they had given their written informed consent to participate. The surgeries consisted of mastectomies with or without evacuation of the axilla, as well as other breast reconstruction work. The median patient age was 56 years (range 40-77), and all were classified as ASA I or II according to the American Society of Anesthesiologists. The procedure was as follows: the patients were brought to the operating room, where venous cannulas for blood sampling were inserted antecubitally.
They were monitored by ECG, pulse oximetry, non-invasive blood pressure (AISYS, Datex Ohmeda, WI, USA) and mainstream-technology capnography (EMMA™ Emergency Capnometer, PHASEIN AB, Danderyd, Sweden) attached to a bag-valve apparatus. Before the patients were anaesthetized, vital signs were recorded; the patients were all hemodynamically stable, with stable respiration, prior to anaesthesia. The values are shown in Table 1. The patients were anaesthetized with a dose of fentanyl (1.4 micrograms/kg) followed by propofol for induction (2 mg/kg). After induction, the patients were put on an infusion of propofol (0.1-0.2 milligrams/kg/min) according to hospital practice. To establish an adequate level of anaesthesia, a clinical assessment (unconsciousness, cessation of spontaneous ventilation, absence of eyelash and bulb reflexes) was made by the attending anaesthesiologist to confirm that the patient was properly anaesthetized. The patient was ventilated by bag-valve mask during the whole study period. The total time of bag-valve ventilation for evaluation of the new device lasted at least 20 minutes. After the study period ended, a laryngeal mask was inserted and the breast surgery was performed. The same anaesthesiologist was the sole provider of bag-valve ventilation for all twenty-two patients. Every 5 minutes during the study period, blood was sampled for PvCO2 readings together with simultaneous readings from the EMMA™ device (time points 1-4, with 5 minutes in between). The blood samples for venous blood gases and the vital signs were collected by the same nurse. All blood gases (PvCO2) were analysed on a nearby analyzer (Radiometer ABL 520, Copenhagen). See the flowchart of the study procedure (Fig 1). Statistics Bland-Altman plots were used to investigate the differences between the EMMA™ device and venous blood gases at time points 1, 2, 3 and 4, where most of the differences between the two methods (95%) were expected to lie within the limits of agreement. The assumption of normality was investigated with QQ-plots and the Shapiro-Wilk W test. The Bland-Altman plots were produced using R version 2.9.2. All descriptive statistics used to illustrate the hemodynamic profile of the women undergoing breast surgery during bag-valve ventilation were calculated using Microsoft Excel. Results There were no missing data concerning measurements of vital signs and ETCO2 during the study. Regarding PvCO2, there were three missing observations (blood samples two, three and four), all from the same patient. The patients were all hemodynamically stable, with stable respiration, during anaesthesia. The hemodynamic and respiratory values are shown in Table 2. Bland-Altman plots are displayed for time points 1 and 3 (Fig 2). The bias, limits of agreement (LoA), and the associated confidence intervals are displayed in Table 3. A violation of the distributional assumption of normality was detected for time point 2. For interpretability and comparability across the time points, however, no transformation was performed, and the results for this time point should therefore be considered with some caution. The bias, i.e., the mean of the differences between the two methods (device versus venous blood gases), ranged from -1.37 to -1.62 for time points 1-4. The associated limits of agreement were similar for all time points and ranged from -3.17 (lower) to 0.25 (higher).
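The bias, limits of agreement and their confidence intervals reported in Table 3 follow directly from the paired differences. The sketch below illustrates that calculation in Python with made-up example values; it is not the study's R code, and the approximate CI formulas follow Bland and Altman's original method.

```python
# Illustrative Bland-Altman calculation for paired ETCO2 (device) and PvCO2
# (blood gas) readings at one time point. All values are hypothetical.
import numpy as np

etco2 = np.array([4.8, 5.1, 4.6, 5.0, 4.7])  # device readings, kPa (example)
pvco2 = np.array([6.2, 6.5, 6.3, 6.4, 6.1])  # venous PCO2, kPa (example)

diff = etco2 - pvco2            # device minus blood gas
bias = diff.mean()              # mean difference = bias
sd = diff.std(ddof=1)           # sample SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

n = len(diff)
se_bias = sd / np.sqrt(n)        # standard error of the bias
se_loa = sd * np.sqrt(3.0 / n)   # approximate SE of each limit of agreement

print(f"bias = {bias:.2f} kPa, 95% CI +/- {1.96 * se_bias:.2f}")
print(f"LoA  = [{loa[0]:.2f}, {loa[1]:.2f}] kPa, 95% CI +/- {1.96 * se_loa:.2f}")
```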
Discussion The aim of this study was to evaluate the efficacy of a new portable device, EMMA™, for measuring carbon dioxide in expired air, compared with carbon dioxide levels in venous blood. The point was to see whether this device could be used as an auxiliary tool for evaluating the accuracy of bag-valve mask ventilation. The main conclusion is that when patients are well under anaesthesia, hemodynamically stable and adequately ventilated by a trained provider, the device gives acceptable values for exhaled carbon dioxide as compared with venous blood gases. However, our results may not necessarily be transferable to less experienced BVM providers and to patients in prehospital settings. Further studies should include patients and healthcare providers from the prehospital setting. In an emergency setting, patients are not normally well monitored. Furthermore, many untrained personnel are involved, and adequate airway management is sometimes difficult to evaluate. Conventionally, for unconscious patients, ETI is regarded as the gold standard for airway management in ALS, even if adequate airway management can be easily achieved with BVM [5]. In particular, there are disadvantages to using ETI in prehospital settings when the procedure is performed by less experienced paramedics, or when the tube cannot be inserted due to a lack of experience with the necessary anaesthetic drugs. BVM ventilation is the basic technique for all healthcare providers [1], and ERC guidelines state that all healthcare providers should be familiar with BVM ventilation during cardiopulmonary resuscitation [8]. There is increasing interest in the use of end-tidal carbon dioxide measurement in emergency care, and previous studies have, for instance, described how nasal end-tidal carbon dioxide measurement could be used to assess patients' acute respiratory problems in prehospital settings [9,10]. In this study, we evaluated the EMMA™ device during BVM ventilation under ideal conditions, with a trained provider and healthy patients. Capnography is a non-invasive infrared spectroscopy technology for continuous measurement of carbon dioxide (CO2) content throughout the respiratory cycle. When capnograms are used to evaluate the end-tidal concentration of carbon dioxide, they must be interpreted in conjunction with other clinical findings, such as the work of breathing, CO2 transport and elimination, as well as changes in cardiac output during volume resuscitation [11]. Normally, when the partial pressure of carbon dioxide is measured invasively, there is a slight discrepancy between blood values and expired carbon dioxide due to the dead space of the lung and bronchial tree. This gradient is low, usually around 0.66 kPa at lower ETCO2 levels. The gradient can, however, increase with patient age [12]. This was not adjusted for in this study. The results of this study underline that when the patients are comfortably anaesthetized, there is acceptable agreement between the ETCO2 values from the device and simultaneously collected PvCO2 blood samples. The Bland-Altman plots (Fig 2) show agreement between ETCO2 and PvCO2 within 2 SD. The limits of agreement are wide, reflecting the large variation, but are considered clinically acceptable in view of the normal difficulties of providing an adequate airway using BVM and the spread of patient ages. The strength of the study is that the same experienced anaesthesiologist was the sole provider of ventilation for all the patients.
This can also be a limitation, as the provider is able to influence the measurements from the device during BVM. The study did not start until the patients were fully anaesthetized and hemodynamically stable. The patients chosen were all ASA I and II, and their airways were therefore easily maintained. A weakness could be the difficulty of keeping an adequate airway by BVM, which is highly dependent on provider skill and technique. Furthermore, we used venous blood gases for simplicity and because of the lack of an arterial line. Mixed venous blood gases reflect desaturated blood, which should more easily take up CO2 due to the Haldane effect [11,13]. However, a recent study indicates that peripheral venous blood correlates reasonably well with arterial values, at least for pH, bicarbonate and PCO2 [14]. Conclusions We conclude that the portable device EMMA™ is suitable for determining carbon dioxide in expired air (kPa) as compared with simultaneous samples of PvCO2. It could therefore, when the patient has inadequate respiration, be a supportive tool to assess BVM ventilation, provided there is adequate circulation.
2015-03-27T18:11:09.000Z
2010-09-14T00:00:00.000
{ "year": 2010, "sha1": "abf8a4eab9490bac48d0bf5d0412122421c644c9", "oa_license": "CCBY", "oa_url": "https://sjtrem.biomedcentral.com/track/pdf/10.1186/1757-7241-18-49", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "abf8a4eab9490bac48d0bf5d0412122421c644c9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52912068
pes2o/s2orc
v3-fos-license
New N-(oxazolylmethyl)-thiazolidinedione Active against Candida albicans Biofilm: Potential Als Proteins Inhibitors C. albicans is the most frequently occurring fungal pathogen, and is becoming an increasing public health problem, especially in the context of increased microbial resistance. This opportunistic pathogen is characterized by a versatility explained mainly by its ability to form complex biofilm structures that lead to enhanced virulence and antibiotic resistance. In this context, a review of the known C. albicans biofilm formation inhibitors was performed and a new N-(oxazolylmethyl)-thiazolidinedione scaffold was constructed. Sixteen new compounds were synthesized and characterized in order to confirm their proposed structures. A general antimicrobial screening against Gram-positive and Gram-negative bacteria, as well as fungi, was performed and revealed that the compounds do not have direct antimicrobial activity. The anti-biofilm activity evaluation confirmed that the compounds act as selective inhibitors of C. albicans biofilm formation. In an effort to substantiate this biologic profile, we used in silico investigations, which suggest that the compounds could act by binding, and thus obstructing the functions of, the C. albicans Als surface proteins, especially Als1, Als3, Als5 and Als6. Considering the well-documented role of Als1 and Als3 in biofilm formation, our new class of compounds targeting these proteins could represent a new approach in C. albicans infection prevention and management. Introduction Candida spp. are normally commensals found in the gastrointestinal tract, genitourinary tract or oropharyngeal tract of healthy people, but they can become opportunistic pathogens that cause superficial infections (oral or vaginal candidiasis), deep-seated infections or systemic infections. Candidiasis diagnoses have increased recently due to the disproportionate use of broad-spectrum antibiotics, among other factors. Because of the increase in the prevalence of C. albicans infections, as well as the increase in antifungal drug resistance, anti-biofilm therapeutic strategies have become sorely needed [11,26]. The search for efficient inhibitors of Candida biofilm has identified a series of natural compounds that could interfere with various stages of the process, including: caffeic acid derivatives [27], usnic acid (a lichen secondary metabolite) [28], various lichen extracts [29], plant essential oils [30,31], probiotic cell supernatant products [32], 5-hydroxymethyl-2-furaldehyde from a marine bacterium [33], magnolol [34,35], dracorhodin from the exudates of the fruit of Daemonorops draco [36], shearinines D and E obtained from a Penicillium sp. isolate [37], and other phytocompounds [38]. However, most of these inhibitors are either mixtures of natural compounds or highly complex structures that are not easily obtainable in the laboratory. The series of small molecules with anti-biofilm activity includes alizarin and chrysazin [39], miltefosine [40], filastatin [41], aliskiren [42], various phenylthiazole derivatives [43,44] and thiazole Schiff bases [45]. The most interesting compounds were those identified by screenings of large libraries of compounds, such as 9029936 and 7977044, discovered by Romo et al. [46] in a library of more than 30,000 compounds. A screening of more than 20,000 compounds performed by Pierce et al. [47] led to the identification of a diazaspiro-decane scaffold (compounds 61894700, 80527891, 95143226, 17159859).
Based on the structure of known Candida biofilm inhibitors, our research efforts focused on obtaining a new scaffold that encompasses various structural moieties contained in different active molecules, as shown in Figure 1. Subsequently, we evaluated the general antimicrobial potential, as well as the general anti-biofilm activity. Our compounds proved to be selectively active against Candida biofilm formation, with no effects on microbial cell viability or on the biofilm formation of other microbes. In order to propose a possible mechanism for this biological activity, we also conducted a series of in silico determinations, which suggest that our compounds act as Als inhibitors. The 4-(chloromethyl)-2-phenyloxazoles (intermediate compounds 2a-d) were obtained by the cyclisation of an amide with an α-haloketone to an oxazole, as shown in Figure 2. This method is based on the Blümlein-Lewy reaction, later reported by Bredereck and co-workers as "formamide synthesis" [50,51]. These intermediates were previously reported using a different synthetic protocol, a closed vial in a hot oil bath [52,53]. Our modification of the technique confers the advantage of using the conventional reflux method under a condenser, at atmospheric pressure, without catalysis. However, we acknowledge the relatively low yields obtained (yield range = 23-35%) and the difficulty of product isolation. The MS spectra of intermediates 2a-d revealed the molecular ions, with a specific isotopic pattern due to the 35Cl and 37Cl isotopes. The IR spectra showed the lack of a strong νC=O signal, characteristic of a primary amide, which confirms the successful cyclisation of the primary amide to the oxazole. Other specific signals that confirm the formation of the oxazole ring are: a sharp signal of medium intensity between 3091 and 3148 cm−1 (corresponding to the stretching of νC5-H) and the endocyclic νC=N bond, with a medium-strong signal between 1586 and 1593 cm−1. The aliphatic νC-Cl bond gives a strong signal between 690 and 702 cm−1. Signals specific to intermediate compound 2d are due to the nitro moiety, which gave two characteristic signals caused by the asymmetric and symmetric stretching of the νN=O bond, at 1522 and 1327 cm−1, respectively.
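As an aside on why the molecular ions of 2a-d are easy to recognise: chlorine's two stable isotopes occur in a roughly 3:1 natural-abundance ratio, so a species with one chlorine shows an M+2 peak at about one third of the intensity of M. A minimal sketch of this binomial pattern (standard natural-abundance values, not fitted to the recorded spectra):

```python
# Expected Cl isotope pattern (M, M+2, ...) from the natural abundances of
# 35Cl and 37Cl; purely illustrative, not derived from the measured spectra.
from math import comb

P35, P37 = 0.7577, 0.2423  # natural abundances of 35Cl and 37Cl

def cl_isotope_pattern(n_cl):
    """Relative intensities (base peak = 100) for a species with n_cl chlorines."""
    peaks = [comb(n_cl, k) * P35 ** (n_cl - k) * P37 ** k for k in range(n_cl + 1)]
    base = max(peaks)
    return [round(100 * p / base, 1) for p in peaks]

print(cl_isotope_pattern(1))  # one Cl, as in 2a-d: [100.0, 32.0] -> M : M+2 ~ 3 : 1
```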
The intermediates 2a-d were used in alkylation reactions with the previously described [48,49] thiazolidine-2,4-dione intermediates (3a-d), as can be observed in Figure 3. It is important to note that the alkylation reaction could have led to O-alkylation (as in the case of using chloroacetamide derivatives [54]) or N-alkylation (when non-amide substituents such as Ph-CH2-Cl are used [55]). Our spectral data for the final compounds are consistent with the data expected for N-alkylation structures. The MS spectra of all final compounds confirmed the presence of the molecular ions. By analyzing the IR spectra, we were able to identify a phenolic νO-H stretching, as a broad band between 3369-3523 cm−1, which led us to confirm that an N-alkylation took place, and not an O-alkylation. The presence of the oxazole ring was confirmed by a sharp signal between 3106-3166 cm−1 (νC5-H) and a strong signal between 1515-1521 cm−1 (νC=N). The thiazolidinedione was characterized by two νC=O signals, in two groups, appearing as strong signals at 1755-1717 cm−1 and 1664-1693 cm−1.
Specific signals were also identified for the NO2-containing compounds (two bands due to the asymmetric and symmetric νN=O stretching, between 1521-1512 cm−1 and 1342-1336 cm−1, respectively) and for the vanillin ether derivatives (the νC-O-C ether bond appeared as a strong signal between 1239-1284 cm−1). The N-alkylation is also supported by the 1H-NMR data. As such, we can observe the lack of a broad signal from a very deshielded proton (the thiazolidinedione N-H) at >12 ppm, and the presence of a broad signal corresponding to the OH proton between 9.90-10.67 ppm. The 13C-NMR data are also consistent with the proposed structures, showing two groups of C=O signals between 167.11-167.91 ppm and 165.87-165.06 ppm (corresponding to the thiazolidinedione), three oxazole carbon atoms (C2: 160.01-160.86 ppm, C4: 138.54-137.98 ppm, C5: 136.61-135.79 ppm), and a -CH2- bridge between 37.72-37.53 ppm. Antimicrobial Activity-Initial In Vitro Qualitative Screening Study Antimicrobial activity was evaluated using a series of Gram-positive strains, Gram-negative strains and a fungal strain. This determination aimed at establishing the biological profile of the newly synthesized compounds, namely their antimicrobial potential. Results of the initial in vitro qualitative screening are shown in Table 1. An overall analysis reveals that all compounds have a degree of antimicrobial activity, but it is very low compared with the standard antimicrobial agents used as controls. The best antimicrobial activity was obtained against C. albicans by compounds 4a, 4c, 6a and 7a, which had growth inhibition zone diameters of 16 mm. However, this activity is mediocre compared with the fluconazole standard (24 mm). Considering antibacterial activity, the compounds seem to be more active against Gram-negative strains than against Gram-positive strains. The most significantly active compounds were 7c and 7d, against the E. coli strain. A mediocre effect was also observed against E. faecalis, while most compounds had negligible activity against S. aureus. Antimicrobial Activity-In Vitro Quantitative Assay The quantitative assay was performed in order to evaluate the direct antimicrobial effects more precisely, as the initial screening indicated a potentially moderate antimicrobial effect. The results of these investigations, shown in Table 2, clearly demonstrate that the newly synthesized compounds do not have a relevant direct antimicrobial effect at the low concentrations that can be achieved at the cellular level during antibiotic therapy. Being deprived of direct antimicrobial effects, these compounds do not promote bacterial resistance by increasing the selection pressure via bacteriostatic or bactericidal effects.
This could be viewed as a positive feature, provided these compounds possess anti-biofilm effects, as initially assumed when designing the scaffold. Also, by specifically targeting C. albicans biofilm, the new agents could be used without risk of causing imbalances in the commensal flora. Anti-Biofilm Activity Assay The crystal violet staining method provides a total quantification of biofilm biomass, as it captures both cells and the extracellular matrix [7]. Anti-biofilm activity can be independent of direct antimicrobial activity; as a consequence, new compounds must be evaluated for both properties in parallel. Our anti-biofilm screening results, presented in Table 3, indicate that the tested compounds were mainly active against C. albicans biofilm formation. Out of a total of 16 compounds, 14 were active against biofilm formation at minimal biofilm eradication concentrations (MBEC) lower than that of the standard used (berberine). The most active compound was 5d, which maintained anti-biofilm effects even at concentrations as low as 0.038 mg/mL. Ten of the compounds were active at concentrations (0.078 mg/mL) four times lower than the standard. Activity against the biofilms of Gram-negative strains appears to be manifested only at high concentrations, or to be absent. Concerning Gram-positive strains, some of the compounds (6b, 6c and 7c) seem to be moderately active against E. faecalis biofilm formation, with MBEC values equal to that of the standard. An optimal anti-biofilm agent has to be active at low concentrations, without exerting positive selection pressure via a direct antimicrobial effect. At the same time, considering the fact that biofilms in vivo are usually polymicrobial, it could be beneficial for a new drug-candidate molecule to act on multiple microbial strains. However, our results clearly indicate that the tested compounds are active predominantly against C. albicans biofilm, which is in agreement with our research hypothesis of obtaining a new scaffold of molecules that target this fungal strain. In Silico Studies Given the biological evaluation results, we aimed at identifying a potential mechanism of action for our compounds. Due to the specificity against C. albicans biofilm formation, we investigated the affinity that our molecules could have for the Als family proteins, which are known to be key elements in Candida spp. adhesion, biofilm formation and virulence. Direct binding of the investigated compounds to the Als surface proteins could render these proteins unavailable for the key interactions that mediate their biological effects. Molecular Docking Study The Als proteins are structurally related, all having a basic structure formed by: an N-terminal signal peptide, a 300-amino-acid immunoglobulin-like domain, a threonine-rich domain, a central domain made up of a variable number of 36-amino-acid tandem repeats, a heavily glycosylated serine- and threonine-rich domain, and a glycosylphosphatidylinositol anchorage sequence that is cleaved in order to ensure the covalent binding of the protein to the cell wall [20]. The hydrophobic central domain seems to be involved in adherence by binding to substrates such as polystyrene [56]. Als3, which is a well-documented invasin, seems to be able to interact with specific receptors on the surface of host cells: E-cadherin on epithelial cells and N-cadherin on endothelial cells. These interactions presumably take place via the immunoglobulin-like domain of Als3 [3,20,24].
The tested compounds were docked into the binding sites of the Als surface proteins of C. albicans. The predicted best binding affinity of the conformation of each compound in the binding site of each surface protein is presented in Table 4. In order to better assess the influence of the different substituents located on the phenyl-oxazole vs. those located on the benzylidene moiety, we compared the averages of the binding energies and calculated the standard deviations, as shown in Table 5. When interpreting these results, we considered that a higher standard deviation of the binding affinity shows that varying the substituent induces bigger differences in binding mode, while a reduced standard deviation translates into a decreased influence of the substituent on the binding affinity. As such, we can observe that the substituents on the 2-phenyloxazole residue (H, 4-CH3, 4-Cl, 4-NO2) tend to have a lesser impact on binding than those on the benzylidene moiety. The position of the OH group on the benzylidene seems to have the highest influence on binding affinity: optimal affinity is achieved by inserting the OH in the 3rd or 4th position, while the insertion of an extra methoxy group is unfavorable. By analyzing the predicted inhibition constants, shown in Table 6, together with the binding affinities, it is apparent that all tested compounds tend to have a better binding potential than the berberine sulphate standard. All compounds, except 6d, have a good inhibition potential against Als1. This could explain the biological activity, as it is well documented that Als1 is key for C. albicans adherence and controls the initial "seeding" step leading to biofilm production. Also, together with Als3, Als1 modulates the initiation and maturation steps of biofilm development. Another noticeable feature is the very good inhibition potential of all compounds against Als5 and Als6. Although the roles of Als6 are not yet fully understood, Als5 has been proven to be a key adhesin together with Als1 and Als3. When considering the potential to inhibit Als3, the most significant Als target, the results showed that all compounds are significantly superior to the berberine standard, and 5 compounds have a Ki < 20 nM. Candida biofilm formation is a complex process that involves several regulators that act as transcription factors (Bcr1, Efg1), as well as multiple adhesins (Als, Hwp1, Eap1, PGA10) and other factors. It is important to note that these molecules perform complementary functions and work together for biofilm formation [3,15,21]. As a result, direct inhibition of one of these factors does not necessarily translate into good biologic activity, as compensatory mechanisms can be activated. From this perspective, compounds that act simultaneously as inhibitors of more targets are preferred. This could also explain the lack of a direct causality between the potential inhibition of Als and the results obtained in the direct biological anti-biofilm determinations. As such, the most interesting compounds from our series seem to be 4b and 5d, which have good inhibition potential against 6 of the 9 Als targets, and especially against Als3, which is believed to be the most important Als protein for biofilm development and a key element of overall Candida albicans virulence. Docking Mode of the Compounds to the Binding Site of Als The docking pose of one of the most promising new compounds, 5d, against Als1 and Als3 is shown in Figures 4 and 5.
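Before examining the poses in detail, note that predicted inhibition constants such as those in Table 6 follow from the docking binding free energies through the thermodynamic relation ΔG = RT ln(Ki), as is common in AutoDock-type scoring. A minimal conversion sketch (the example energy is hypothetical, not a value taken from Table 4):

```python
# Convert a predicted binding free energy (kcal/mol) into an estimated Ki at
# 298.15 K; the input value below is illustrative only.
import math

R = 1.98720425e-3  # gas constant in kcal/(mol*K)
T = 298.15         # temperature in K

def predicted_ki(delta_g):
    """Ki in mol/L from delta_G in kcal/mol; more negative dG -> smaller Ki."""
    return math.exp(delta_g / (R * T))

dg = -10.5  # hypothetical docking score, kcal/mol
print(f"Ki ~ {predicted_ki(dg) * 1e9:.0f} nM")  # ~ 20 nM
```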
The binding pocket of Als1 is a rather flattened surface, shaped like two adjacent Ys. In other words, each of its distal and proximal extremities presents two sub-pockets formed by protruding moieties that subdivide the central binding site. The distal extremity is characterized by a protruding moiety made up of the vicinal joint of Tyr243 and Ala186. As a result, two distal sub-pockets are formed: a small pocket with very limited use (non-druggable) and a larger, predominantly lipophilic one, which accommodates the phenol ring of compound 5d. The gateway to the large distal sub-pocket is an Arg168 residue that interacts with the π electrons of the phenol aromatic ring via its guanidine moiety. The thiazolidinedione ring of compound 5d is situated in the central part of the binding pocket, away from residues capable of interaction, with the notable exception of a hydrogen bond formed between the Tyr243 phenol OH group and a C=O group of the thiazolidinedione. The proximal extremity of the Als1 binding site also presents a protruding moiety, formed by the spatial proximity of the C-terminal end (Gly314-Tyr315) and a loop (Ala35-Ala36-Asn37). The two resulting proximal sub-pockets are mostly lipophilic, but each has at least one hydrophilic residue: the small sub-pocket has Tyr35, while the more voluminous sub-pocket has Asn37. The oxazolyl-phenyl structure of 5d acts primarily as a spacer, with the two rings being co-planar and forming a large entity that cannot fit in the small proximal sub-pocket. However, this makes it capable of favoring the spatial orientation of the nitro group in the vicinity of the peptide bond Ala36-Asn37, found in the larger proximal sub-pocket. Also, the methylene moiety of 5d acts as a hinge and helps to direct the nitro-phenyl residue towards the more voluminous sub-pocket. As a result, polar interactions take place between the NO of the nitro group and the Ala36-Asn37 peptide bond. In the case of Als3, the predicted binding site is located in the core of the protein. It is shaped like a channel running across the protein. Access to this area is controlled by two entities: a beta-strand formed by Leu293-Arg294-Trp295-Thr296 and then by Gly297-Phe298-Arg299, which borders and lines the pocket, and an omega loop, Val268-Asn269-Ser270, that partially obstructs the binding site entrance. The OH phenol group of compound 5d is conveniently oriented towards Ser170, with which it forms a hydrogen bond, but is also near Tyr271, thus allowing for a supplementary interaction.
However, this makes it capable of favoring the spatial orientation of the nitro group in the vicinity of the peptide bond Ala36-Asn37, found in the larger proximal sub-pocket. Also, the methylene moiety of 5d acts as a hinge and helps direct the nitro-phenyl residue towards the more voluminous sub-pocket. As a result, polar interactions take place between the NO of the nitro group and the peptide bond Ala36-Asn37. In the case of Als3, the predicted binding site is located in the core of the protein. It is shaped like a channel running across the protein. Access to this area is controlled by 2 entities: a beta-strand formed by Leu293-Arg294-Trp295-Thr296 and then by Gly297-Phe298-Arg299, which borders and lines the pocket, and an omega loop Val268-Asn269-Ser270 that partially obstructs the binding site entrance.
The phenol OH group of compound 5d is conveniently oriented towards Ser170, with which it forms a hydrogen bond, but also near Tyr271, thus allowing for a supplementary interaction. The thiazolidinedione ring of 5d sits in a polar region and interacts via a hydrogen bond formed between a C=O group and the Arg171 residue. The p-NO2-phenyl-oxazole of 5d lingers towards the exit of the binding pocket, and, because of the widening of the binding site, the nitro group does not manage to form reliable interactions. Analysis of Als Proteins Structure A phylogenetic analysis of the Als proteins reveals that these surface proteins are related, which was to be expected, as they are part of the same superfamily [18]. However, significant differences could be found between them. Figure 6 shows the sequence alignment for the Als targets, with amino acids found near the binding site marked in bold. After comparing the amino acid sequences, for every amino acid position in the studied Als proteins, the "normal" amino acid was considered to be the one that was identical in most proteins at that particular position. If the amino acid was mutated and different, it was depicted in red. If a mutation of an amino acid at a specific position was found in more than one protein, the letter corresponding to that mutation was highlighted with the same color in all chains where it appeared. The degree of similarity was assessed by calculating a similarity matrix, presented in Table 7. Results showed a high degree of similarity between Als1, Als3 and Als5 (>80%). To better understand the degree of similarity, a phylogenetic tree was generated using phylogeny.fr [57], and is depicted in Figure 7. The length of the branches (blue) is proportional to the number of substitutions per site, while branch support (red) indicates the degree of similarity, as it characterizes common evolutionary background. Both the similarity matrix and the phylogenetic tree indicate that, despite the fact that the studied Als sequences have a common ancestry, Als7 has followed a different evolutionary path from the other Als proteins. Also, Als6 seems to have evolved separately, whereas Als9 and Als1-5 share a common evolutionary ancestor. As in the case of the similarity matrix, the phylogenetic tree underlines the close connection between Als1, Als3 and Als5, which is also supported by the known biologic roles of these proteins (they all share a common adhesin function).
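The percent-identity computation behind a Table 7-style similarity matrix can be sketched as follows; the aligned fragments are invented stand-ins, not the actual Als sequences:

```python
# Pairwise percent identity over an alignment (gaps excluded), as used to
# build a similarity matrix. Sequences below are toy examples.

aligned = {
    "Als1": "KGVFDSLT-AYNP",
    "Als3": "KGVFDSLTSAYNP",
    "Als5": "KGIFDSLT-AFNP",
}

def percent_identity(a: str, b: str) -> float:
    """Identity over aligned columns where neither sequence has a gap."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

names = list(aligned)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        print(f"{n1} vs {n2}: {percent_identity(aligned[n1], aligned[n2]):.1f}%")
```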
A comparative analysis of the active sites' binding pockets of the different Als, shown in Table 8, revealed that Als1, Als3 and Als5 are the proteins with the biggest volume and widest surface of the series considered. This suggests that they would be able to better accommodate larger ligands in their active sites. Also, these proteins have the lowest hydrophobicity ratios, which could indicate their tendency to form polar interactions at the level of the binding pockets. Als1, Als3, Als5 and Als6 have the highest percentage of polar amino acids, which could account for their ability to form polar interactions with various ligands, including our N-(oxazolylmethyl)-thiazolidinediones (which contain various polar substituents: NO2, OH, C=O). General Information All chemicals were of analytical reagent grade purity and were purchased from Merck (Darmstadt, Germany) or Sigma-Aldrich (Taufkirchen, Germany). The uncorrected melting points were obtained by the open glass capillary method, using an MPM-H1 melting point apparatus (Schorpp Gerätetechnik, Überlingen, Germany). MS spectra were obtained using an Agilent 1100 series system with an Agilent Ion Trap SL mass spectrometer (70 eV), in positive ionization mode (Agilent Technologies, Santa Clara, CA, USA). IR spectra were recorded after compression of the samples in KBr pellets, under vacuum, using an FT/IR 6100 spectrometer (Jasco, Cremella, Italy). The device was controlled using the computer interface software Spectra Manager. Assignment of IR signals was made using Know It All 7.8 by Bio-Rad Laboratories (Hercules, CA, USA). The 1H-NMR and 13C-NMR spectra were recorded on an Avance NMR spectrometer (Bruker, Karlsruhe, Germany) using DMSO-d6 as the solvent. Chemical shift values are reported in δ units, relative to TMS as the internal standard. All spectral data were in accordance with the proposed chemical structures. Elemental analysis was performed with a Vario El CHNS analyzer (Hanau, Germany). The results obtained for all synthesized compounds were in agreement with the calculated values. Chemistry General Procedure for the synthesis of the 4-(chloromethyl)-2-aryloxazoles (2a-d). 10 mmol of benzamides 1a-d and 10 mmol of 1,3-dichloroacetone were mixed well in a round-bottom flask with 5 mL of propylene glycol and 0.5 mL of dimethylsulfoxide. Reactions were performed in an open vessel, under a condenser, with vigorous magnetic stirring.
Between the reaction flask and the condenser, a valve was constructed from a conical glass adaptor of small inner diameter, equipped with a small glass ball that could move freely in the vertical direction, in order to reduce the volatilization of dichloroacetone. The mixture was refluxed for one hour. Upon completion of the reaction, the mixture was cooled to room temperature, 5 mL of methanol was added, and the mixture was stirred well. Water was then added carefully, dropwise, in order to obtain a precipitate. The resulting solid was filtered under vacuum. General Procedure for the Synthesis of the Intermediate Compounds 3a-d (Z isomers). The synthesis was based on a Knoevenagel condensation in an alkaline medium provided by anhydrous sodium acetate. The condensation between the corresponding phenolic aldehydes and thiazolidine-2,4-dione was carried out under microwave irradiation in acetic acid. The synthetic protocol and the characterization of the intermediate compounds 3a-d were previously reported [48,49]. General Procedure for the Synthesis of the Final Compounds (4a-d, 5a-d, 6a-d and 7a-d). The 5-(hydroxybenzylidene)-thiazolidine-2,4-dione intermediates (3a-d) were selectively alkylated on the nitrogen atom of the thiazolidine-2,4-dione ring, in an alkaline medium, using a modified protocol [58]. To 1.05 mmol of intermediate compound 3a-d and 1 mmol of intermediate compound 2a-d, dimethylformamide (DMF) was added dropwise until their dissolution, in order to obtain the highest possible concentration of the intermediate compounds and thereby ensure a high reaction rate. After that, 2 mmol of anhydrous potassium carbonate and 1 mmol of anhydrous potassium iodide were added to this solution. The mixture was stirred overnight at room temperature. Upon completion of the reaction, the mixture was poured over ice-cold saturated brine. A 10% sulfuric acid solution was added dropwise until complete precipitation of the product had occurred. The resulting solid was filtered under vacuum and dried. The remaining residue was crystallized twice from acetone, giving the pure final compounds 4a-d, 5a-d, 6a-d and 7a-d. Biological Assays The biologic activity of the final compounds 4a-d, 5a-d, 6a-d and 7a-d was assessed using 3 distinct approaches. The antimicrobial potential was determined for all compounds via an initial in vitro qualitative screening study, followed by an in vitro quantitative assay. Furthermore, we investigated the anti-biofilm, and thus antipathogenic, potential of the new compounds. Our aim was to investigate the specificity of the compounds in their predicted biologic activity as anti-Candida biofilm agents. In order to prove the lack of a direct antibacterial or antifungal effect, we selected an array of 2 Gram-positive strains (Staphylococcus aureus ATCC 25923, Enterococcus faecalis ATCC 29212), 2 Gram-negative strains (Pseudomonas aeruginosa ATCC 27853, Escherichia coli ATCC 25922) and 1 fungal strain (Candida albicans ATCC 10231). These were reference strains, and their identity was confirmed using the VITEK 1 automatic system. Anti-biofilm tests were performed for all compounds investigated, regardless of their activity as direct antimicrobial agents, as this paper's aim was to obtain new molecules that selectively inhibit Candida biofilm formation, and to prove that they neither affect other microbial biofilms nor have direct antimicrobial action.
Antimicrobial Activity-Initial In Vitro Qualitative Screening Study This initial screening was performed using an adapted disk diffusion technique, previously reported [49,59-61]. All tested compounds and standards were solubilized in dimethylsulfoxide (DMSO) to a concentration of 1 mg/mL. Microbial inocula (saline suspensions of 0.5 McFarland density), obtained from microbial cultures grown on solid media for 15-18 h, were seeded on solid Mueller-Hinton medium. The solutions were then applied directly on the solid medium, and the resulting plates were incubated for 24 h at 37 °C for the bacterial strains and 48 h at 28 °C for the fungal strain. Antimicrobial activity was assessed as the diameter of the growth inhibition area, measured in mm. Antimicrobial Activity-In Vitro Quantitative Assay The quantitative assay was performed using 96-well plates containing liquid Mueller-Hinton medium seeded with 20 µL of microbial inoculum. The stock solutions of the tested compounds were prepared at concentrations of 5 mg/mL in DMSO. They were applied as two-fold serial dilutions ranging from 2500 to 2 µg mL−1. The total broth volume was adjusted to 200 µL. Standard antimicrobial agents were used (norfloxacin, fluconazole), together with positive culture controls and blank DMSO dilutions. The plates were incubated for 24 h at 37 °C for the bacterial strains and 48 h at 28 °C for the fungal strain. The minimal inhibitory concentration (MIC) values were determined as the lowest concentration of the investigated compound that inhibited the growth of the microbial cultures compared with the positive control, as established by a decreased value of the absorbance at 600 nm (Apollo LB 911 ELISA Absorbance Reader, Berthold Technologies, Bad Wildbad, Germany) [60,62-64]. Anti-Biofilm Activity Assay The microtiter plate method, previously reported [60,65], was used to ascertain the level of anti-biofilm activity of the tested compounds. In order to determine the ability to colonize an inert substratum, the plates previously used for the MIC determination were emptied, rinsed 3 times with phosphate-buffered saline, and then fixed with cold 80% methanol for 5 min. The biofilm was stained with crystal violet for 30 min, then washed multiple times with water and finally resuspended using a glacial acetic acid solution. Cell density was measured by evaluating the optical density of the colored solution at 490 nm. The lowest concentration of a compound that inhibited the development of biofilm on the plate wells was considered the minimal biofilm eradication concentration (MBEC).
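As an illustration of the MIC read-out logic (the MBEC read-out from the stained plates is analogous), the decision rule can be sketched as below; the growth threshold and absorbance readings are assumptions for the example, not data from this assay:

```python
# MIC as the lowest concentration whose OD600 stays below a growth cutoff
# defined relative to the positive (drug-free) control.

def read_mic(od_by_conc: dict[float, float], od_control: float,
             growth_fraction: float = 0.5) -> float | None:
    """Return the lowest concentration with OD below the growth cutoff."""
    cutoff = growth_fraction * od_control
    inhibitory = [conc for conc, od in od_by_conc.items() if od < cutoff]
    return min(inhibitory) if inhibitory else None

# Two-fold dilutions (ug/mL) mapped to hypothetical OD600 readings.
wells = {2500: 0.04, 1250: 0.05, 625: 0.06, 312.5: 0.09,
         156.2: 0.35, 78.1: 0.52, 39.0: 0.55, 19.5: 0.58}
print(read_mic(wells, od_control=0.60))  # -> 312.5
```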
Molecular Docking Study The tested compounds were docked into the binding sites of the most important adhesion proteins of C. albicans, in order to understand the differences between the compounds in terms of their interactions with the microorganism's adhesion proteins. Our in silico study focused on finding differences in the interactions between our compounds and the target proteins, both from the point of view of the various substitutions and isomerism in our molecules and from the point of view of the differences between the macromolecular targets. For Als3 and Als9, 3D structures obtained by X-ray diffraction have been deposited in the Protein Data Bank (PDB), as presented in Table 9. Because no structure could be found for the other members of the Candida Als superfamily, they were built by homology modeling. For this purpose, the FASTA primary amino acid sequences of the target proteins were taken from UniProt [66]. Using Swiss-Model [67], the new structures were generated based on proposed PDB template structures with high coverage and percent identity relative to the primary amino acid sequence. The FASTA amino acid sequences used, the structures used as templates, and the degrees of identity are presented in Table 10. The files of the ligands (final compounds 4a-d, 5a-d, 6a-d and 7a-d) and of the macromolecular targets were prepared as reported [60], using AutoDock Tools 1.5.6 [68]. In all structures, the polar hydrogens were added, the non-polar hydrogens were merged, and the partial charges were added. Amide bonds were configured as rigid. The Cartesian coordinates of the search space centers for all Als proteins are presented in Tables 9 and 10. The search space was defined as x = y = z = 74 Å for all targets, in order to provide equal experimental conditions for all interaction predictions. The Cartesian coordinates of the search space center were configured so as to fit the entire active pocket of each Als surface protein. The molecular docking study was performed using AutoDock 4.2 [68]; 30 conformations were searched for every ligand-protein complex. The inhibition constant (Ki) was calculated from the computed binding affinity energy (ΔG, in kcal/mol) using the formula Ki = e^(ΔG×1000/(R×T)), where R is the gas (Regnault) constant, 1.98719 cal·K−1·mol−1, and T = 298.15 K. Alignment of the primary structures and the similarity of the tested Als proteins were assessed using Clustal Omega [69]. Analysis of the binding pockets of the Als proteins was performed using DoGSiteScorer [70,71]. Conclusions In an effort to obtain new agents that target C. albicans biofilm development, following an extensive review of the literature, we proposed a new molecular scaffold: N-(oxazolylmethyl)-thiazolidinedione. A series of 16 new compounds bearing this moiety were synthesized, and their structures were confirmed using physicochemical parameters and spectral data. A general antimicrobial activity screening was performed using both qualitative and quantitative methods against Gram-positive and Gram-negative bacteria, as well as fungi. The results showed that the compounds do not possess significant direct antimicrobial activity and are thus not expected to exert selection pressure or to affect the non-pathogenic commensal flora. The biologic anti-biofilm evaluation demonstrated that, as hypothesized when constructing this scaffold, the compounds are selectively and highly active against C. albicans biofilm formation. In order to provide a possible mechanism of action, we performed a docking study, which showed that these compounds have a very good binding potential against most of the Als surface proteins of C. albicans. All compounds seem able to bind to Als1, Als5 and Als6, while some are also capable of good interactions with Als3. Considering the well-documented role of Als1, Als3 and Als5 as adhesins and key agents in biofilm formation, we postulate that these compounds selectively inhibit C. albicans biofilm formation, most likely by interfering with the Als proteins.
2018-10-21T20:22:52.215Z
2018-10-01T00:00:00.000
{ "year": 2018, "sha1": "1f79c87b38326b441a3027e991bde981b5c5be14", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/23/10/2522/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1f79c87b38326b441a3027e991bde981b5c5be14", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
258367915
pes2o/s2orc
v3-fos-license
Natamycin Has an Inhibitory Effect on Neofusicoccum parvum, the Pathogen of Chestnuts This research aimed to investigate natamycin's antifungal effect and its mechanism against the chestnut pathogen Neofusicoccum parvum. Natamycin's inhibitory effects on N. parvum were investigated using a drug-containing plate culture method and an in vivo assay in chestnuts and shell buckets. The antifungal mechanism of action of natamycin on N. parvum was investigated by conducting staining experiments of the fungal cell wall and cell membrane. Natamycin had a minimum inhibitory concentration (MIC) of 100 μg/mL and a minimum fungicidal concentration (MFC) of 200 μg/mL against N. parvum. At five times the MFC, natamycin had a strong antifungal effect on chestnuts in vivo, and it effectively reduced morbidity and extended the storage period. The cell membrane was the primary target of natamycin action against N. parvum. Natamycin inhibits ergosterol synthesis, disrupts cell membranes, and causes leakage of intracellular proteins, nucleic acids, and other macromolecules. Furthermore, natamycin can cause oxidative damage to the fungus, as evidenced by decreased superoxide dismutase and catalase enzyme activity. Natamycin exerts a strong antifungal effect on the pathogenic fungus N. parvum from chestnuts, mainly through the disruption of fungal cell membranes. Introduction Chestnut (Castanea mollissima Blume) belongs to the genus Castanea and is one of the most important cash crops in China. "Chestnut" can denote both the chestnut tree and the chestnut fruit; in this study, chestnut refers to the fruit. Chestnut fruits have high edible value, and their abundant nutritional elements have long been established [1]. Chestnut starch has a high concentration of resistant starches, which increase satiety and energy maintenance by extending the time spent in the digestive tract [2]. Furthermore, chestnut fruits are rich in protein, vitamins, fiber, essential fatty acids, and minerals, as well as polyphenols with antioxidant activity [3]. On the other hand, chestnut fruits exhibit medicinal value because of their multiple biological activities. According to Rodrigues et al. [4], mice whose diets include chestnut fruits accumulate less belly fat and have lower serum cholesterol levels. Chestnut starch has probiotic activity and can be used to improve human gastrointestinal health by synthesizing isomalto-oligosaccharides through various pathways, thereby promoting Lactobacillus proliferation [5]. Chestnuts have an inhibitory effect on E. coli, whether they are digested in vitro or not [6]. Furthermore, extracts of chestnut pellicle can reduce scab disease caused by Streptomyces scabies [7]. A water-soluble polysaccharide derived from chestnut fruit can also cause human cervical cancer cells to undergo apoptosis via the mitochondrial route [8]. The oxidative stress caused by H2O2 and dextran sulfate sodium salt on IPEC-J2 cells was greatly reduced, and the cellular activity was increased, by pretreatment with a chestnut-Quebracho mixture for 3 h [9]. However, chestnut fruits are prone to rot and deterioration during the harvesting and storage periods because of their high water content, thus seriously affecting their quality and production value [10]. Fungal infestation is a major cause of chestnut fruit decay during storage, and Waqas et al. [11] first reported Neofusicoccum parvum as the pathogenic fungus that causes nut rot in Italian hazelnuts. The Italian scholars Seddaiu et al.
[12] have isolated and identified N. parvum from rotting chestnut fruits. N. parvum is a plant pathogenic fungus belonging to the Botryosphaeriaceae [13,14]. N. parvum is distributed worldwide and is a major source of infestation in various plants. N. parvum is the primary pathogen of Chinese gallnut brown spot; sequoia canker and dieback; Scaevola taccada leaf spot; and avocado branch cankers [15-18]. N. parvum acts as an endophytic fungus that can invade plant tissues and cells through wounds caused by cutting branches, thus weakening the tree and, in severe cases, leading to plant death [19,20]. However, the mechanism by which N. parvum infects plant hosts is not clear; some studies suggest that the pathogenicity of N. parvum may be related to its ability to colonize plant tissues, the production of toxins, and the production of extracellular proteins that have toxic effects on plants [19]. Accordingly, pathogenic infestation needs to be suppressed to reduce plant diseases and increase agricultural product yield. Natamycin (NM) is a natural antifungal polyene macrolide that is internationally licensed as a microbially derived preservative [21]. This type of preservative has the advantages of being environmentally friendly, safe, and effective, and it is widely used in food preservation [22]. The national standard of the People's Republic of China (GB 2760-2014) clearly stipulates that the maximum use level of natamycin in food is 0.3 g/kg and that the maximum surface residue must not exceed 10 mg/kg [23]. Galotyri cheese may be effectively protected from fungal growth during storage using natamycin, thus extending the cheese's shelf life [24]. Guo et al. [25] discovered that, by regulating the antioxidant enzyme activity of green-skinned walnuts, natamycin could effectively inhibit the increase in the mold rot rate. Zhou et al. [26] applied a film treatment to Red Globe grapes and discovered that natamycin could effectively reduce the fruit's respiratory intensity and decay rate while significantly extending the fruit's storage period. Natamycin's antifungal mechanism is related to its molecular structure, in which the hydrophobic macrolide double bonds can bind to sterol molecules in the fungal cytoplasmic membrane, thus increasing cytoplasmic membrane permeability. The hydrophilic macrocyclic lactone polyol fraction, on the other hand, enhances membrane permeability by creating water pores in the membrane, allowing macromolecules such as proteins and nucleic acids to flow out of fungal cells [27]. Accordingly, this research investigated the antifungal activity of natamycin against N. parvum by conducting in vivo and in vitro experiments on chestnut fruits, and further explored its antifungal mechanism of action to provide a scientific basis for the application of natamycin in chestnut preservation. Results and Analysis 2.1. Growth Curve of N. parvum N. parvum mycelium grows very quickly, and it took only 4 days to occupy a 9 cm-diameter plate of PDA medium at a constant temperature of 28 °C. At the beginning of growth, the mycelium was white. As the incubation time increased, the mycelium gradually changed to gray-green and finally to gray-black, from inside to outside. Natamycin treatment remarkably inhibited mycelial growth, as shown in Figure 1, and the inhibitory effect increased with increasing concentration.
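A sketch of how such inhibition rates are typically computed from colony diameters is given below; the correction for the 5 mm inoculum plug is the commonly used convention rather than a formula stated explicitly in this paper, and the diameters are hypothetical:

```python
# Percent inhibition of radial mycelial growth relative to the drug-free
# control, correcting both diameters for the 5 mm inoculated plug.

PLUG_MM = 5.0  # diameter of the mycelial plug placed on each plate

def inhibition_rate(control_mm: float, treated_mm: float) -> float:
    """Growth inhibition (%) from control and treated colony diameters."""
    return 100.0 * (control_mm - treated_mm) / (control_mm - PLUG_MM)

print(f"{inhibition_rate(control_mm=90.0, treated_mm=25.0):.2f}%")  # 76.47%
```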
At a concentration of 200 µg/mL, the inhibition rate reached 100% on each day, indicating that N. parvum growth was completely inhibited at this concentration. Therefore, 200 µg/mL natamycin can be considered the minimum fungicidal concentration (MFC) for N. parvum. N. parvum mycelial growth was also inhibited by 10 and 50 µg/mL natamycin, with inhibition rates ranging from 47.66% ± 0.04% to 86.34% ± 0.01%. At a concentration of 100 µg/mL, the inhibition rate initially reached 100% but decreased as the observation time was extended, indicating that this concentration was the MIC. CFW can specifically bind chitin in the fungal cell wall and produce bright blue fluorescence under excitation light; accordingly, damage to the fungal cell wall was assessed based on the intensity of fluorescence [28]. AKP is present between the fungal cell membrane and the cell wall and can leak outside the cell only when the cell wall is disrupted. However, as shown in Figure 2, our results did not reveal an effect of natamycin on the cell wall of N. parvum. After CFW staining, the control group mycelium showed bright blue fluorescence, while the 6.25, 12.5, and 25 µg/mL natamycin-treated groups showed the same bright fluorescence. In comparison with the AKP activity of the control group, no significant change was observed in the natamycin groups at 1/2 MIC and subinhibitory concentrations, indicating that natamycin did not cause AKP leakage from N. parvum. Effect of Natamycin on the Cell Membrane of N. parvum Evans blue is a non-membrane-permeable dye that can be used to detect cell viability: when the plasma membrane is damaged, the dye can enter the cytoplasm and nucleus, staining them blue. As shown in Figure 3A, the mycelium of the natamycin-treated group was stained with Evans blue and was visible under the microscope in a distinct blue color, while the mycelium of the control group showed no change. The staining deepened with increasing natamycin concentration. Therefore, natamycin caused damage to the cell membrane of N. parvum, and the degree of damage was positively correlated with the drug concentration. The synthesis of ergosterol in the cell membrane of N. parvum was inhibited after treatment with natamycin, and the resulting statistics are shown in Figure 3B. The results show that 50 µg/mL natamycin inhibited ergosterol synthesis by 21.42% ± 0.02%, and, when the concentration was increased to 80 µg/mL, the inhibition rate reached 59.57% ± 0.06%. Therefore, different concentrations of natamycin can substantially inhibit ergosterol synthesis compared with the control group, and the inhibition rate is dose-dependent. Effect of Natamycin on the Leakage of the Cellular Contents of N. parvum As shown in Figure 4, natamycin can cause intracellular nucleic acid and protein leakage in N. parvum, and the ratio of leakage increases with the administered concentration. Effects of Natamycin on the Oxidative Stress Response of N. parvum The changes in the SOD and CAT activities of N. parvum after treatment with different concentrations of natamycin are shown in Figure 5. Natamycin has a strong inhibitory effect on SOD and CAT activity, and both parameters showed a significant decreasing trend as the natamycin concentration increased. The results show that natamycin could exert antifungal effects through oxidative damage.
In Vivo Antifungal Efficacy of Natamycin The experiment was carried out until all chestnut kernels in the model group were infested with N. parvum (the predefined cutoff point), and Figure 6A depicts the degree of infestation in each group on the last day of the experiment. The control group remained consistent with its initial state, the degree of infestation by N. parvum in the model group of chestnut kernels was 100%, and the degree of decay of the chestnut kernels varied among the different natamycin dose groups. The incidence rates of the chestnut kernels in each group were counted, and the results are shown in Figure 6B. All of the chestnut kernels in the model group rotted, which was highly significant compared with the control group. The incidence rates were 40% and 27% in the low- and medium-dose groups, respectively, while natamycin in the high-dose group completely inhibited the infestation of chestnut kernels by N. parvum. In contrast to the model group, the natamycin emulsion at low, medium, and high doses showed highly significant inhibition of N. parvum infestation, and the inhibition rate increased with increasing concentration. Figure 6. Effect of natamycin on the infestation of chestnut kernels by N. parvum. Representative photograph of natamycin inhibiting the growth of N. parvum in chestnut kernels (A), and the effects of natamycin on the incidence of decay in chestnut kernels (B). All chestnut kernel pieces were incubated for 4 days. "CK" stands for the control group; "Model" represents the group of chestnut kernel pieces infected by N. parvum. "Incidence rates" represent the rotting rates of N. parvum-infested chestnut kernel pieces in each treatment group. ### p < 0.001 compared with the CK, *** p < 0.001 compared with the model group. Effect of Natamycin Emulsion on Postharvest Ripening Infection of the Chestnut Shell Bucket The above experiments confirmed the effective antifungal effect of natamycin on chestnut kernels, and the experiments were further expanded to study the antifungal effect of natamycin on chestnut shell buckets after harvesting. As shown in Figure 7, the decay rates of the chestnuts were counted based on the number and weight of decayed fruits. Both doses of natamycin emulsion inhibited the decay of post-ripe chestnuts during storage, while the decay rate in the five times MFC group was lower than that in the MFC group, indicating that the preservative effect of the natamycin emulsion was positively correlated with its dose. The natamycin emulsion could effectively reduce the decay rate of chestnuts during storage, improve the quality of the chestnuts, and prolong the storage period. Discussion Natamycin is the only antifungal additive approved by the Food and Drug Administration in the United States [29] and is widely used in the food preservation industry, for example in the preservation of dairy products, meat and meat products, fruits and vegetables, and bakery products [30]. This paper reports the determination of natamycin's antifungal activity against N. parvum, the causative agent of chestnut fruit rot, and of its antifungal mechanism. Based on the growth curve of N. parvum, the MIC and MFC of natamycin were 100 and 200 µg/mL, respectively. On this basis, infestation experiments were carried out by inoculating N. parvum onto chestnuts, and the results show that natamycin could completely inhibit the growth of N. parvum at five times the MFC. Next, the scope of the experiment was expanded beyond the laboratory, and the natamycin emulsion was sprayed onto postharvest mature chestnut shell buckets.
The results show that five times the MFC of natamycin could prevent chestnuts from rotting. However, experimental studies on the preservation of post-ripe chestnuts have some limitations. Considering that chestnuts carry a large number of microorganisms by themselves, and given the inability to guarantee the sterility of the experimental process, the chestnuts in the control group also showed a high rate of decay. Moreover, in addition to fungal infestation, the ripening stage of chestnut fruit is susceptible to many factors, such as temperature, humidity, and shell bucket maturity, resulting in many uncertainties. According to the aforementioned findings, natamycin has a potent inhibitory effect on N. parvum. However, compared with the in vitro application, the effective concentration applied to chestnuts in vivo was much higher, possibly because the antifungal effect of natamycin in chestnuts is influenced by many factors, and the nutritional conditions of the chestnuts themselves are more suitable for fungal growth than a PDA medium, causing some interference with natamycin. Panagou et al. [31] found that 50 mg/L of natamycin completely inhibited the growth of surface fungi on chestnut fruits. A comparison of experimental methods revealed that Panagou et al. used immersion administration, while we sprayed natamycin on the surface of the chestnut shell buckets. The difference in application methods most likely led to the difference in results. In the in vivo study on chestnut kernels, the highest concentration of natamycin was 1000 µg/mL, the mass of the chestnut kernel pieces was 1 g, the volume of application was 5%, and the final calculated dose for the chestnut kernel pieces was 50 mg/kg. In the in vivo study on chestnut shell buckets, the highest concentration of natamycin was 1000 µg/mL, the volume of application was 2% per kg of chestnut shell buckets, and the final calculated dose for the chestnut shell buckets was 20 mg/kg. It can be seen that the amount of natamycin applied to either the chestnut kernels or the chestnut shell buckets does not exceed the maximum use level specified in the national standard. Moreover, natamycin is only sprayed on the surface of the chestnut shell bucket, which is removed before eating and will not affect the inner chestnut fruit. Washing before eating, as well as depletion during storage, will further reduce natamycin residues on the surface of the chestnuts. We will specifically design experimental protocols in subsequent experiments to determine the actual residues of natamycin on the surface of chestnuts.
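The dose arithmetic quoted above can be checked directly; the short function below is a sketch with variable names of our choosing, using only the concentrations and application fractions stated in the text:

```python
# Surface dose when the sprayed volume is a fixed fraction (v/w) of the
# sample mass: ug/mL * mL/g = ug/g, which is numerically equal to mg/kg.

def surface_dose_mg_per_kg(conc_ug_per_ml: float, ml_per_g: float) -> float:
    """Dose in mg/kg; ml_per_g is the applied volume per gram (5% -> 0.05)."""
    return conc_ug_per_ml * ml_per_g

print(surface_dose_mg_per_kg(1000, 0.05))  # chestnut kernels: 50.0 mg/kg
print(surface_dose_mg_per_kg(1000, 0.02))  # shell buckets:    20.0 mg/kg
```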
On this basis, this experiment provides a preliminary exploration of the antifungal mechanisms of natamycin. Most of the current studies on antifungal mechanisms of action have focused on the fungal cell membrane, cell wall, and energy metabolism. Liu et al. [32] showed that dill seed essential oil (DSEO) could disrupt the cell wall of N. parvum, which then emitted only weak blue fluorescence under CFW staining. However, in the present study, the mycelium could still emit bright blue fluorescence under CFW staining after natamycin treatment, which did not differ from the blank mycelium without natamycin action. Similarly, natamycin did not change the AKP activity of N. parvum. Therefore, natamycin has no destructive effect on the cell wall of N. parvum, and the cell wall is not its target of action. The different effects on the fungal cell wall may be related to the different types of antifungal drugs. Ergosterol is an important component of the eukaryotic cell membrane. Natamycin has a strong affinity for ergosterol on fungal cell membranes and can irreversibly bind to it to form polyene-sterol complexes [30]. Our experimental results showed that natamycin treatment remarkably decreased the ergosterol content but substantially increased the leakage of cell contents. Moreover, the mycelium of the natamycin group could be stained blue by Evans blue, and the higher the concentration, the darker the blue color, a clear difference from the control group. Therefore, one of the targets of natamycin's action on N. parvum is the cell membrane: natamycin disrupts the fungal cell membrane by binding specifically to ergosterol, further causing vital fungal substances to leak out and the fungus to die. This result is consistent with the findings of Aparicio et al. [33]. The effect of natamycin on the oxidation reactions of N. parvum was also investigated. Various biological processes in organisms result in reactive oxygen species (ROS), which cause oxidative stress. In response to such oxidative stress, organisms can deploy SOD and CAT to scavenge ROS in order to protect cellular homeostasis [34]. In the present study, natamycin treatment remarkably reduced the SOD and CAT enzyme activities in N. parvum, indicating that natamycin can cause oxidative damage in N. parvum. The antifungal targets of natamycin in N. parvum thus mainly include the cell membrane and the antioxidant enzymes SOD and CAT, while the cell wall is not a target. However, the effect on the energy metabolism of the fungus has not been determined, which can be the next research direction. Experimental Materials After surface disinfection of the rotten chestnut samples with alcohol, the samples were cut with a sterile blade, and a portion of the rotten pulp was transferred to a blank PDA plate and incubated at 28 °C. The pathogen was isolated from the chestnuts, and the mycelium at the edge of the colony was picked and transferred to a new PDA plate three times to obtain a single strain, which was recorded with a number and stored on slant medium at 4 °C. The strain was further identified by Zhou [32] as N. parvum, No. 20221022-6-1, based on ITS sequencing combined with mycelial morphology and other characteristics. N. parvum was activated using a PDA medium, incubated at 28 °C, and stored at 4 °C. Chestnut shell buckets were harvested from Yutoushan Village, Yantianhe Town, Macheng City, Hubei Province. Nearly ripe chestnut shell buckets were harvested from the orchard, selected, and then stacked at room temperature for storage, and the chestnut fruits were separated and counted after 10 days. The collected chestnut fruits were stored at room temperature, and the process of rot development during storage was recorded. Chestnut kernels were commercial products originating from Luotian County, Hubei Province. Chestnut kernels that were intact and free from infestation damage were selected for the experiment, and the unused kernels were stored at 4 °C and used within their shelf life. Natamycin (food-grade) was obtained from Yino Biotechnology Co., Ltd., Ningbo, Zhejiang, China. Natamycin solution was prepared by dissolving the requisite amount in Tween-20 (0.1%, v/v). Evans blue (CAS: 314-13-6) was obtained from BioFroxx, Calcofluor White (CAS: 4193-55-9) was obtained from Sigma, and all other chemicals were of analytical grade. PDA medium was prepared by boiling 200 g of potato, 18 g of agar, and 20 g of glucose in 1000 mL of distilled water.
Unlike the PDA medium, the PDB medium does not require the addition of agar. Effect of Natamycin on the Growth Diameter of N. parvum The effect of natamycin on the mycelial growth of N. parvum was tested in vitro using the agar dilution method [35]. Briefly, the natamycin stock solution was added to sterilized PDA medium to give final concentrations of 10, 50, 100, and 200 µg/mL. The medium containing each concentration of natamycin (30 mL) was distributed equally among three plates and allowed to solidify. PDA medium without added natamycin was used as the control group. A 5 mm sterile punch was used to punch plugs at the edges of actively growing colonies, and the mycelium was placed at the center of the PDA plates containing the different concentrations of natamycin, with the mycelium facing downward. Each treatment was performed in triplicate, and the plates were sealed with sealant and incubated upside down for 3 days in a biochemical incubator set at a constant temperature of 28 ± 2 °C. The growth of N. parvum was monitored every 24 h. The crossover method was used to measure the mycelial growth diameter for each group of plates and to plot the N. parvum growth curve. Calcofluor white (CFW) staining of N. parvum was carried out as described by Ouyang et al. [28]. An amount of 100 µL of the fungal suspension, at a concentration of 1 mg/mL, was added to a shake flask containing 25 mL of PDB medium and incubated in a shaker at 28 ± 2 °C and 200 rpm for 12 h. Various volumes of the 25 mg/mL natamycin emulsion stock solution were pipetted into the conical flask to achieve final concentrations of 6.25, 12.5, and 25 µg/mL, corresponding to 1/16, 1/8, and 1/4 of the minimum inhibitory concentration (MIC), respectively. No natamycin emulsion was added to the control group. After administration, incubation was continued for 3 h, the mycelium was collected, and 10 µL of CFW and 10 µL of KOH (10%) were added dropwise; staining proceeded under dark conditions for 5 min. Excess dye was removed, and the samples were observed via confocal laser scanning microscopy (Leica TCS SP8 CARS). Fungal blocks were punched from PDA plates, placed in PDB medium containing natamycin emulsion at concentrations of 0, 50, and 80 µg/mL, and incubated in a shaker at 28 ± 2 °C and 200 rpm for 48 h. The supernatant was collected after centrifugation at 10,000 rpm for 10 min, the alkaline phosphatase (AKP) activity was measured strictly according to the instructions of the AKP kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China), and the experiment was repeated three times for each treatment. Effect of Natamycin on the Cell Membrane of N. parvum Fungal cell membrane damage was observed by Evans blue staining according to the method of Di Ciaccio et al. [36]. An amount of 100 µL of mycelial suspension at a concentration of 1 mg/mL was added to a conical flask containing 25 mL of PDB and incubated in a shaker at 28 ± 2 °C and 200 rpm for 12 h. Various volumes of the 25 mg/mL natamycin emulsion stock solution were pipetted into the conical flask to achieve final concentrations of 6.25, 12.5, and 25 µg/mL, corresponding to 1/16, 1/8, and 1/4 of the MIC, respectively. No natamycin emulsion was added to the control group. Incubation was continued for 3 h with shaking under the same conditions as before. The mycelia were placed on glass slides and stained dropwise with 0.5% Evans blue dye solution for 5 min.
The excess dye solution was washed off with PBS, and the staining was observed under an optical microscope (Olympus CX23, Beijing, China). Effect of Natamycin on Ergosterol Synthesis in N. parvum The content of ergosterol was determined by UV spectrophotometry, according to the method of Abhishek et al. [37]. Mycelial blocks were punched at the edge of the colony with a puncher, placed in a conical flask containing 25 mL of PDB, and incubated in a shaker at 28 ± 2 °C and 200 rpm for 48 h. The mycelium was homogenized with a homogenizer, and the culture was then expanded to a sufficient amount. The cultured mycelial suspension was divided equally into 50 mL centrifuge tubes, with 25 mL of sample per tube. Natamycin emulsion stock solution (25 mg/mL) was pipetted into the centrifuge tubes to give final natamycin concentrations of 50 and 80 µg/mL (i.e., 1/2 MIC and a subinhibitory concentration), respectively, and the samples were incubated on a shaker for 3 h. Then, the samples were centrifuged at 4000 rpm for 10 min. Approximately 1 g of mycelium was weighed, combined with 5 mL of 25% ethanolic potassium hydroxide solution, shaken vigorously for 10 min, and placed in a water bath at 85 °C for 4 h. A mixture of 4 mL of water and n-heptane at a ratio of 1:3 was added, shaken for 10 min, and allowed to stand at room temperature for stratification. The n-heptane layer was transferred to a 5 mL EP tube and stored at −20 °C for 24 h. A UV spectrophotometer was used to scan over the full 230-300 nm wavelength range. Then, the ergosterol content was calculated according to the following equations: %(ergosterol + 24(28)-dehydroergosterol) = [(A282/290)/w] × 100 and %24(28)-dehydroergosterol = [(A230/518)/w] × 100, with the ergosterol percentage obtained as the difference of the two. In these equations, 518 and 290 are constants, and w is the mycelium wet weight. Effect of Natamycin on the Leakage of the Cellular Contents of N. parvum The release of cell constituents into the supernatants was measured using a previously described method [38] with minor modifications. Mycelial blocks were punched at the edge of the colony with a puncher, placed in a conical flask containing 25 mL of PDB, and incubated in a shaker at 28 ± 2 °C and 200 rpm for 48 h. Then, various amounts of natamycin emulsion were added to achieve final concentrations of 50 and 80 µg/mL (i.e., 1/2 MIC and a subinhibitory concentration), and incubation was continued for 3 h. The leakage of macromolecular compounds from N. parvum was ascertained by collecting the supernatant after centrifugation at 10,000 rpm for 10 min. The absorbance values were determined at 260 and 280 nm. Each parameter was tested in triplicate. Effect of Natamycin on Oxidative Stress in N. parvum The fungus was cultured using the method described under Section 4.3.4. Superoxide dismutase (SOD) activity and catalase (CAT) activity were measured using specific kits (Beyotime Biotechnology, Haimen, China).
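Assuming the two-wavelength ergosterol equations quoted above (consistent with the constants 290 and 518 given in the text), the calculation can be sketched as follows; the absorbance and weight values are placeholders:

```python
# Ergosterol (%) from UV absorbances, assuming the standard two-wavelength
# method: total sterols from A282, 24(28)-DHE from A230, ergosterol as the
# difference. w is the mycelium wet weight in grams.

def ergosterol_percent(a282: float, a230: float, w: float) -> float:
    total_sterols = (a282 / 290.0) / w * 100.0  # ergosterol + 24(28)-DHE
    dhe = (a230 / 518.0) / w * 100.0            # 24(28)-DHE alone
    return total_sterols - dhe

print(f"{ergosterol_percent(a282=0.60, a230=0.25, w=1.0):.4f} %")
```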
In Vivo Antifungal Efficacy of Natamycin The chestnut kernels were washed in sterile water and cut into small pieces of 1 g mass using a safety razor blade. Each chestnut kernel piece in the test group was inoculated with 50 µL of mycelial suspension at a concentration of 1 mg/mL, while the control group received no treatment. Each chestnut kernel piece in the low-, medium-, and high-dose groups was given 50 µL of natamycin at concentrations of 100, 200, and 1000 µg/mL, respectively, corresponding to the MIC, the MFC, and 5 times the MFC of the natamycin emulsion against N. parvum. Each group comprised three plates with five chestnut kernel pieces on each plate. The samples were placed at room temperature, after being sealed with sealant, to observe the infection of the chestnut kernel pieces. The rate of chestnut kernel pieces rotting after N. parvum infestation was expressed as the incidence rate. Effect of Natamycin Emulsion on the Postharvest Ripening Infection of the Chestnut Shell Bucket The natamycin emulsion consisted of 2% natamycin, 1% Tween 80, and 97% distilled water. Natamycin and Tween 80 were dissolved in distilled water and, after complete dissolution, were thoroughly mixed via ultrasonication. Untreated chestnut shell buckets were used as the control group. The prepared natamycin emulsion was stored at room temperature for 30 days without any delamination, indicating its good stability. An amount of 5 mg of N. parvum mycelium was picked off a PDA medium with a sterile inoculating needle on an ultraclean bench, added to 5 mL of PDB medium, and homogenized to a 1 mg/mL mycelial suspension. Initially, the 1 mg/mL suspension of N. parvum was sprayed onto the chestnut shell buckets to cause chestnut pathogenesis, and this setup was considered the model group. The model group of chestnut shell buckets was then sprayed with different concentrations of natamycin emulsion, and the decay was observed. Natamycin emulsions diluted to the MFC and to 5 times the MFC were administered separately. The volume of mycelial suspension and natamycin emulsion sprayed was 2% of the weight of the chestnut shell buckets. The control group received no treatment. Each treatment group was assigned 30 chestnut shell buckets, and 3 replicates were set up for each treatment group. When the green color of a chestnut shell bucket had completely disappeared, the bucket had naturally cracked, and the shell of the chestnut fruit had completely hardened and changed color, the chestnut shell bucket was considered to have reached the ripe state. When the chestnuts ripened, the shell buckets were removed, and the chestnuts were observed every 4 days until the decay stabilized. The criteria for chestnut fruit decay were color change, loss of hardness, visible mycelial growth on the surface of some chestnut fruits, and a distinct smell of decay. Statistical Analysis Each group of experiments was repeated three times, and the results are reported as means ± SD. Differences between groups were evaluated via one-way analysis of variance with Duncan's post hoc test using SPSS 25.0 (SPSS, Chicago, IL, USA). Differences with p values less than 0.05 were considered significant. Conclusions In this paper, the antifungal effect of natamycin on N. parvum was investigated, and its mechanism of action was preliminarily explored. The results showed that natamycin could disrupt the fungal cell membrane and cause its contents to leak. The action of natamycin can also cause oxidative damage to the fungus, resulting in impaired cell function. In vivo experiments showed that natamycin emulsion could effectively prolong the storage period of chestnuts and could be used as a new environmentally friendly preservative to prevent postharvest chestnut fruit rot.
2023-04-28T15:15:39.227Z
2023-04-25T00:00:00.000
{ "year": 2023, "sha1": "78955de2ea1da37b1fc4b66d30ffa2c63089be1f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/9/3707/pdf?version=1682423289", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "056c79faaacb1de76f1b2d3e00f048f602f40bd7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
119667185
pes2o/s2orc
v3-fos-license
Quasi-Primary Spectrum of a Commutative Ring and a Sheaf of Rings In this work, the set of quasi-primary ideals of a commutative ring with identity is equipped with a topology, and the resulting space is called the quasi-primary spectrum. Some topological properties of this space are examined. Further, a sheaf of rings on the quasi-primary spectrum is constructed, and it is shown that this sheaf is the direct image sheaf with respect to the inclusion map from the prime spectrum of a ring to the quasi-primary spectrum of the same ring. Introduction The set of all prime ideals of a commutative ring R, called the prime spectrum of R and denoted by Spec(R), is a well-known concept in commutative algebra. This set is equipped with the famous Zariski topology, whose closed sets are defined as V(I) = {P ∈ Spec(R) : I ⊆ P} for any ideal I of R. Topological properties of Spec(R) have been widely examined over the years and can be found in many of the standard commutative algebra and algebraic geometry references. Besides, there is a famous sheaf construction on Spec(R), named the structure sheaf, which is a very useful tool connecting algebraic geometry and commutative algebra. For details of the structure sheaf, the reader may consult [6], [1] and [2]. In [5], the authors generalized the Zariski topology on Spec(R) to the set of primary ideals of a commutative ring R, denoted by Prim(R), and they called it the primary spectrum of R. They defined the closed sets as V_rad(I) = {Q ∈ Prim(R) : I ⊆ √Q} for any ideal I of R, where √Q denotes the radical of Q. They showed that these closed sets satisfy the axioms of a topology on Prim(R). They investigated some topological properties of this space and compared them with the well-known properties of Spec(R). We note that, since any prime ideal is primary and equal to its radical, the space Spec(R) is in fact a subspace of Prim(R). When [5] is examined in detail, it can be seen that the given topological construction depends only on the fact that the radical of a primary ideal is prime. So this topology is in fact valid on a much larger set, the set of ideals whose radicals are prime. This type of ideal was first introduced by L. Fuchs in [3], who named them quasi-primary ideals. We aim to investigate the set of quasi-primary ideals of a commutative ring R, equipped with a topology similar to the one defined in [5], and to construct a sheaf of rings on this topological space. In Section 2, after giving some general topological properties of the quasi-primary spectrum, we examine irreducibility and the irreducible components of this space. Besides, we investigate disconnectedness of the space and finally show that the dimension of the quasi-primary spectrum of a Noetherian local ring is finite. In Section 3, we construct a sheaf of rings on the quasi-primary spectrum of a ring and prove that this sheaf is the direct image sheaf under the inclusion map from the prime spectrum to the quasi-primary spectrum. Further, we conclude that the resulting ringed space is in fact a scheme. The Quasi-Primary Spectrum of a Ring Throughout this paper, all rings are commutative with identity. In this section, we define a topology on the set of all quasi-primary ideals of a ring and examine some properties of this topological space. First, we give some (known) properties of quasi-primary ideals that we need in the rest of the paper. Let R be a ring. Following [3], an ideal I of R is called quasi-primary if the radical of I, denoted by √I, is prime.
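For ease of reference, the three families of closed sets that appear in this work, on Spec(R), on Prim(R), and (introduced in the next section) on QPrim(R), can be displayed side by side, restating the definitions from the text:

```latex
% Closed sets of the three spectra compared in this paper.
\begin{align*}
  V(I)       &= \{\, P \in \operatorname{Spec}(R)   : I \subseteq P \,\},\\
  V_{rad}(I) &= \{\, Q \in \operatorname{Prim}(R)   : I \subseteq \sqrt{Q} \,\},\\
  V_{q}(I)   &= \{\, Q \in \operatorname{QPrim}(R)  : I \subseteq \sqrt{Q} \,\}.
\end{align*}
```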
In [5], the authors defined a topology on the set Prim(R) of primary ideals of a commutative ring using the sets V_rad(I) = {Q ∈ Prim(R) : I ⊆ √Q} as the closed sets. In this construction, they only used the property that a primary ideal has a prime radical. So the topology axioms for closed sets are in fact satisfied by the sets V_q(I) = {Q ∈ QPrim(R) : I ⊆ √Q}, where I is any ideal of R. Thus, QPrim(R) is a topological space with closed sets V_q(I), where I is any ideal of R. Since any primary ideal is quasi-primary, we have Prim(R) as a subspace of QPrim(R). For the sake of completeness, we note some properties (from Proposition 2.2 to Corollary 2.6) of V_q(I) without proofs. For details, see [5]. Proposition 2.2. Let I, J be ideals of R and {I_λ}_{λ∈Λ} a family of ideals of R. Then the following hold: This topology is called the Zariski topology on QPrim(R), and the space QPrim(R) is called the quasi-primary spectrum of R. We note that any open set in QPrim(R) is of the form QPrim(R) \ V_q(S) for some subset S of R. Set U_a = QPrim(R) \ V_q(a) for any a ∈ R. Theorem 2.4. Let R be a ring. The family {U_a}_{a∈R} is a base for the Zariski topology on QPrim(R). Note that U_0 = ∅ and U_r = QPrim(R) for every unit r ∈ R. Theorem 2.5. Let R be a ring and a, b ∈ R. The following hold: Quasi-primary ideals were first introduced and examined thoroughly in [3]. They are generally studied in rings satisfying the maximal condition; in other words, rings in which every ascending chain of ideals is finite. It is also noted that the quasi-primary ideals in rings satisfying the maximal condition can be characterized as follows: a quasi-primary ideal is either a power of a prime ideal or an intermediate ideal between two powers of one and the same prime ideal. In view of this fact, the following theorem is given for rings satisfying the maximal condition. Theorem 2.7. [3, Theorem 4] If Q_1 and Q_2 are quasi-primary ideals having the radicals P_1 and P_2, respectively, and P_1 ⊆ P_2, then Q_1 Q_2 is also quasi-primary, having the radical P_1. Theorem 2.8. Let R be a ring satisfying the maximal condition and Q_1 and Q_2 quasi-primary ideals of R. Proof. It follows from Theorem 2.7. It is known that Theorem 2.7 has no analogue in primary ideal theory. Similarly, Theorem 2.8 does not hold for the primary spectrum, as can be seen in the following example. Example 1. Consider the residue class ring R = K[X_1, X_2, X_3]/(X_1 X_3 − X_2^2), where K is a field. It is clear that R satisfies the maximal condition. Let x_i denote the natural image of X_i in R for each i = 1, 2, 3. Then the ideal P = (x_1, x_2) is a prime ideal of R, but P^2 is not primary [7, Example 4.12]. It is trivial that P^2 is a quasi-primary ideal of R. Now take Q_1 = Q_2 = P. Then we can see that P ∈ V_q(P) ∩ V_rad(P); however, P^2 ∈ V_q(P) \ V_rad(P). Now, let us determine the closure of a point Q ∈ QPrim(R): the closure of Q is Cl(Q) = V_q(Q). The space QPrim(R) is irreducible if and only if the nilradical N(R) is a quasi-primary ideal of R. Indeed, suppose first that N(R) ∈ QPrim(R), and let U and V be nonempty open subsets of QPrim(R). Since √N(R) = N(R) is contained in √Q for every Q ∈ QPrim(R), the point N(R) lies in every nonempty open subset; then, we obtain N(R) ∈ U. In a similar way, we get N(R) ∈ V. Consequently, U ∩ V ≠ ∅. For the other part, let QPrim(R) be an irreducible space, and assume that N(R) is not quasi-primary. Then N(R) is not prime, so there exist a, b ∈ R such that a, b ∉ N(R) but ab ∈ N(R). Since a ∉ N(R), V_q(a) ≠ QPrim(R), and similarly V_q(b) ≠ QPrim(R); that is, V_q(a) and V_q(b) are proper closed subsets whose union is QPrim(R), because ab ∈ N(R) ⊆ √Q and √Q is prime for every Q ∈ QPrim(R). This contradicts the irreducibility of QPrim(R). There is a one-to-one correspondence between points of QPrim(R) and irreducible closed subsets of QPrim(R). The next theorem gives that correspondence. Theorem 2.11. A subset Y of QPrim(R) is an irreducible closed subset if and only if Y = V_q(Q) for some Q ∈ QPrim(R). Proof. Let Y = V_q(Q) for any Q ∈ QPrim(R). Since V_q(Q) = Cl(Q) and Cl(Q) is irreducible, Y is an irreducible closed subset of QPrim(R).
Conversely, let Y be an irreducible closed subset of QPrim(R). Then Y = V_q(I) for some ideal I of R. Now suppose that I ∉ QPrim(R). Then √I is not prime, so there are elements a, b ∈ R such that ab ∈ √I but a, b ∉ √I. For any Q ∈ Y we have √I ⊆ √Q, hence ab ∈ √Q, and since √Q is prime, a ∈ √Q or b ∈ √Q. Thus Y = (Y ∩ V_q(a)) ∪ (Y ∩ V_q(b)). Moreover, since a ∉ √I, there is a prime ideal P ∈ Y with a ∉ P, so Y ∩ V_q(a) is a proper closed subset of Y; similarly for b. So we obtain that Y is reducible, which contradicts our assumption.

Let I be an ideal of a ring satisfying the maximal condition. Then, by [3, Theorem 5], the ideal I is the intersection of a finite number of quasi-primary ideals, say Q_1, . . . , Q_n, with radicals P_1, . . . , P_n, respectively. Hence, every prime ideal containing I contains one of the P_i, i = 1, . . . , n. Then, for any ideal I in a ring satisfying the maximal condition, every closed subset V_q(I) can be written as a finite union of irreducible closed sets, that is, V_q(I) = V_q(P_1) ∪ · · · ∪ V_q(P_n), by Proposition 2.2 (iii) and (v).

Let V be a closed subset of a topological space X. A dense point of V, that is, a point whose closure is all of V, is called a generic point. By the above theorem, we conclude that every irreducible closed subset of QPrim(R) has a generic point.

The maximal irreducible subsets of a topological space X are called irreducible components. The irreducible components of QPrim(R) are the sets V_q(P), where P is a minimal prime ideal of R.

Proof. By Theorem 2.11, any irreducible closed subset of QPrim(R) can be written in the form V_q(Q) for some quasi-primary ideal Q of R. Since V_q(Q) = V_q(√Q) ⊆ V_q(P) for any minimal prime ideal P contained in √Q, the maximal irreducible closed subsets are exactly the sets V_q(P) with P a minimal prime ideal of R.

The following lemma is easy to prove, so we leave it as an exercise.

Theorem 2.14. Let R be a ring. The following are equivalent:
(i) QPrim(R) is disconnected;
(ii) R ≅ R_1 × R_2 for some nonzero rings R_1 and R_2;
(iii) R contains a nontrivial idempotent.

Proof. (i) =⇒ (ii) Suppose QPrim(R) is the disjoint union of nonempty closed sets V_q(I) and V_q(J) for proper ideals I and J. Then we have I + J = R and I ∩ J = IJ. So we get R ≅ R/I × R/J. (ii) =⇒ (iii) Assume that R ≅ R_1 × R_2, where R_1 and R_2 are nonzero rings, via an isomorphism φ. Then φ^{−1}(1, 0) is a nontrivial idempotent of R.

The dimension of a topological space X is the number n such that X has a chain of irreducible closed sets V_1 ⊂ V_2 ⊂ · · · ⊂ V_n and no such chain has more than n terms.

Theorem 2.15. Let R be a Noetherian local ring. Then the dimension of QPrim(R) is finite.

Proof. Let V_1 ⊂ V_2 ⊂ · · · ⊂ V_n ⊂ · · · be a chain of irreducible closed subsets of QPrim(R). This chain can be written as V_q(Q_1) ⊂ V_q(Q_2) ⊂ · · · ⊂ V_q(Q_n) ⊂ · · ·, where Q_i ∈ QPrim(R). Let P_i = √Q_i for each i. Then we have · · · ⊂ P_n ⊂ · · · ⊂ P_2 ⊂ P_1. By [4, Corollary 11.11], the dimension of R is finite. So the above chain of prime ideals must terminate. Therefore the dimension of QPrim(R) is finite, and in fact equal to the dimension of R.

A Sheaf of Rings on the Quasi-Primary Spectrum

In this section we define a sheaf of rings on the quasi-primary spectrum. Let φ : R → R′ be a ring homomorphism. For any Q ∈ QPrim(R′), it is easy to show that φ^{−1}(Q) ∈ QPrim(R). So φ induces the map φ^a : QPrim(R′) → QPrim(R), Q ↦ φ^{−1}(Q), which is called the associated map of φ. For any A ⊆ R we have (φ^a)^{−1}(V_q(A)) = V_q(φ(A)), so the map φ^a is continuous.

Let S ⊆ R be a multiplicative subset of R, and let φ : R → R_S be the canonical homomorphism. Since √(IR_S) = (√I)R_S for any ideal I of R, the map φ^a is an inclusion. The set U_S = φ^a(QPrim(R_S)) is equal to the set of quasi-primary ideals of R whose radicals are disjoint from S. There is a one-to-one correspondence between quasi-primary ideals of R_S and quasi-primary ideals of R whose radicals are disjoint from S. So the space QPrim(R_S) is homeomorphic to the subspace U_S of QPrim(R).

Lemma 3.1. Let a, b ∈ R. Then U_a ⊆ U_b if and only if a ∈ √(b).

Proof. Assume that U_a ⊆ U_b for some a, b ∈ R. Then, for any Q ∈ QPrim(R), b ∈ √Q implies a ∈ √Q (the contrapositive of U_a ⊆ U_b). Since QPrim(R) contains the prime ideals, applying this observation to the prime ideals containing b yields a ∈ √(b). Conversely, assume that a ∈ √(b) for some a, b ∈ R, and let Q ∈ U_a, so that a ∉ √Q. Since a is contained in the intersection of all prime ideals that contain b, we would obtain a ∈ √Q if b ∈ √Q; hence b ∉ √Q, and therefore Q ∈ U_b.

Our aim is to construct a sheaf of rings on QPrim(R).
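Before doing so, we record a concrete instance of Lemma 3.1 (our own computation, taking R = the integers). Since

\[
2 \in \sqrt{(4)} = (2) \quad\text{and}\quad 4 \in \sqrt{(2)} = (2),
\]

the lemma gives U_2 ⊆ U_4 and U_4 ⊆ U_2, so U_2 = U_4 in QPrim(Z). Distinct ring elements can thus determine the same basic open set, exactly as for the principal open sets of Spec(R).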
We assign to each open set U_a the ring F(U_a) := R_a, the ring of quotients with respect to the multiplicative subset {1, a, a², . . .}, and define the restriction maps res_{U_b,U_a} : R_b → R_a whenever U_a ⊆ U_b. Since, by Lemma 3.1, we have U_a ⊆ U_b if and only if a ∈ √(b), that is, if and only if a^n = tb for some positive integer n and some t ∈ R, the map res_{U_b,U_a} is well-defined. For an arbitrary open set U of QPrim(R), let F(U) be the projective limit of the rings F(U_a), where the projective limit is taken over all U_a ⊆ U relative to the system of homomorphisms res_{U_b,U_a}. With this construction, F turns out to be a sheaf of rings on QPrim(R). In fact, this sheaf is the direct image sheaf under the inclusion map ι from Spec(R) into QPrim(R), as we now show.

Proof. The inclusion map ι is continuous. For any open set U of QPrim(R), the direct image sheaf ι_*O is defined by (ι_*O)(U) = O(ι^{−1}(U)), where O denotes the structure sheaf on Spec(R). For a ∈ R, we have

ι^{−1}(U_a) = {P ∈ Spec(R) : ι(P) ∈ U_a} = {P ∈ Spec(R) : P ∈ U_a} = {P ∈ Spec(R) : a ∉ √P = P}.

The final set is a principal open set of Spec(R), and the corresponding ring for this set is R_a. So we get (ι_*O)(U_a) = R_a = F(U_a). For U_a ⊆ U_b, we have res_{U_b,U_a} = ρ_{X_b X_a}, where ρ_{X_b X_a} is the restriction map from the principal open set X_b to X_a of Spec(R) with respect to the structure sheaf O. Thus the sheaves F and ι_*O are the same.

Similar to the structure sheaf O on Spec(R), the stalk F_Q of F at a point Q ∈ QPrim(R) is R_{√Q}. Therefore, we conclude that (QPrim(R), F) is a locally ringed space. Finally, since F is the direct image sheaf of O under the inclusion map ι : Spec(R) → QPrim(R), it is easily seen that (QPrim(R), F) is a scheme.
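To make the stalk description concrete, here is a small computation of our own for R = the integers. Take the quasi-primary point Q = (4), whose radical is (2). Then

\[
F_{(4)} \;=\; \varinjlim_{(4) \in U_a} R_a \;=\; \mathbb{Z}_{(2)} \;=\; \Bigl\{\, \tfrac{m}{n} : m, n \in \mathbb{Z},\ 2 \nmid n \,\Bigr\},
\]

since (4) ∈ U_a exactly when a ∉ √(4) = (2), that is, when a is odd. The stalk at the non-prime point (4) therefore coincides with the stalk of the structure sheaf of Spec(Z) at the prime (2), in line with the description F_Q = R_{√Q} above.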
Palliative care, homelessness, and restricted or uncertain immigration status

Background: People experiencing homelessness have limited access to palliative care support despite high levels of ill health and premature mortality. Most research exploring these challenges in the United Kingdom has focused on people living in hostels or temporary accommodation. People with uncertain or restricted immigration status are often unable to access this accommodation due to lack of entitlement to benefits. There is little research about the experiences of those in the United Kingdom who cannot access hostels or temporary accommodation due to restricted or uncertain immigration status with regards to palliative and end-of-life care access.

Aim: To explore the barriers to palliative and end-of-life care access for people with uncertain or restricted immigration status, who are experiencing homelessness and have advanced ill health, and the experiences of UK hospices of supporting people in this situation.

Design: A multi-method cross-sectional study.

Setting/participants: An online survey for hospice staff, followed by online focus groups with staff from inclusion health, homelessness and palliative care services and charities, and interviews with people experiencing homelessness.

Results: Fifty hospice staff responded to the online survey and 17 people participated in focus groups and interviews (focus groups: n = 10; interviews: n = 7). The survey demonstrated that hospices are not currently supporting many people with restricted or uncertain immigration status who are homeless and that hospice staff have received limited training around eligibility for entitlements or National Health Service (NHS) care. Interview and focus group data demonstrated high levels of unmet need. Reasons for this included a lack of consistency around eligibility for support from local authorities, issues relating to NHS charging, and mistrust and limited knowledge of the UK health and social care system. These barriers leave many people unable to access care toward the end of their lives.

Conclusion: To advocate for and provide compassionate palliative and end-of-life care for people with uncertain immigration status, there is a need for more legal literacy, with training around people's entitlement to care and support, as well as easier access to specialist legal advice.
Introduction

Health care from the National Health Service (NHS) is free for people who are ordinarily resident in the United Kingdom. For some non-UK nationals, access to some aspects of NHS care (such as some secondary care) is chargeable.[3][4] Overseas visitors could be charged up to 150% of the cost of NHS services, and additional powers came into play for Hospital Trusts to make and recover charges from chargeable patients. In 2017, these rules were updated, placing a statutory duty on Trusts to charge patients upfront for non-urgent care and to record patients' eligibility for free treatment. Despite this, safeguards do exist, and for non-UK nationals who are not entitled to free NHS care, treatment can be given without payment in advance if it is deemed 'urgent' or 'immediately necessary', although patients may still be charged or billed retrospectively. Only clinicians can make an assessment as to whether treatment is immediately necessary, urgent or non-urgent. 'Urgent' treatment is that which, although not immediately necessary, cannot wait until the person can be reasonably expected to leave the United Kingdom. If the person is unlikely to leave the United Kingdom for some time (which will be the case for some undocumented migrants), treatment which clinicians might otherwise consider non-urgent (e.g. certain types of elective surgery) is more likely to be considered by them as urgent. With regards to access to palliative care, the interpretation of whether this could be considered urgent or necessary is complicated. Care provided by hospices is not chargeable, as it is usually only partially funded by the NHS, but palliative care in a secondary NHS care setting may be.

In this paper, 'restricted immigration status' includes people who have no legal status, such as undocumented migrants (e.g. visa overstayers and people whose asylum applications have been rejected, as well as some EU nationals without settled status). None of these groups have access to public funds or benefits. We refer to people with 'uncertain or restricted' immigration status because often, without thorough legal investigation, it may be unclear what someone's actual immigration status is. Some people have leave to remain in the United Kingdom but do not have entitlement to public funds (no recourse to public funds, NRPF).
People with restricted or uncertain immigration status might be particularly vulnerable to homelessness, as their immigration status means they frequently lack entitlement to benefits (have NRPF) and are not owed a statutory housing duty by the local authority.[7] Furthermore, they may have been exposed to traumatic experiences, domestic violence, war or torture in their homeland or when fleeing to the United Kingdom.8,9 In 2020, a large proportion of people sleeping on the streets in England were found not to be UK nationals, or their nationality was unknown.10 There is growing evidence that the scale and the support needs of this population are increasing.5 Doctors of the World completed an audit of people accessing their specialist casework support and legal advice due to being refused NHS care based on their immigration status. Almost everyone included in the audit (93%) was destitute, meaning that they did not have adequate accommodation or funds to meet their basic needs.11 Even though people with restricted immigration status may have NRPF, support through adult social care, provided under the Care Act, is not a public fund. This means adult social care may have a duty to provide support (housing and basic financial support) for someone if they are destitute and are deemed to have care and support needs (eligible under the Care Act 2014). The Care Act is seen by the Home Office as a justification for the NRPF status as, in principle, it should ensure the most vulnerable are protected. In practice, this is not always the case.

Homelessness is associated with high rates of long-term health conditions compared with the housed population.12 In England in 2022, the mean age at death of people who were street homeless or in emergency shelters was 45.9 years for men and 41.6 years for women.13 People experiencing homelessness rarely have access to palliative and end-of-life care, suggesting that their symptoms may be unmanaged and their deaths unsupported.14,15 Previous work has identified a range of factors fuelling inequity of access to palliative and end-of-life care support between people experiencing homelessness and the housed population.
14,16 As healthcare is often only accessed in emergency situations, opportunities for planned support and the identification of palliative care need in this population are few. In addition, people experiencing homelessness often do not fit the profile of a 'typical' referral to palliative care.17 Within many community or homelessness services there is also a lack of knowledge and understanding about palliative care and how it could support people with advanced ill health.18 The majority of research exploring this in the United Kingdom has focused on the experiences of people living in homeless hostels.14,16,18,19,[21][22] The primary aim of hostels is not to facilitate access to health and social care, yet it is often through a connection with the hostel that people experiencing homelessness are able to access basic health and social care support.

Most non-UK nationals cannot stay in hostels, as they are not entitled to the benefits needed to pay for this. Therefore, further challenges to accessing support are anticipated for people with restricted or uncertain eligibility for benefits who are homeless and very unwell. This project sought to explore the challenges to palliative and end-of-life care access for people with uncertain or restricted immigration status who are experiencing homelessness.

Methods

A multi-method cross-sectional study was conducted, involving a survey for hospice staff; interviews with people with uncertain or restricted immigration status who had advanced ill health and were experiencing homelessness; and online focus groups with staff from a range of professional groups who support them.

Sample

The survey was open to any clinical staff working in a UK-based hospice.

Recruitment

The survey for hospice staff was circulated via newsletters aimed at palliative care professionals, through contacting individual hospices, via social media, and via conference presentations given by the authors and attended by palliative care professionals.

Data collection

The online survey was developed using Microsoft Forms. Survey questions were developed by the research team after consultation with organizations specializing in supporting people with restricted or uncertain immigration status. Respondents were asked short demographic questions about their gender, ethnicity, job role, the hospice in which they worked and the services their hospice offers. They were then asked closed and open-ended questions about the following:

• their experiences of supporting people with uncertain or restricted eligibility for benefits who were unwell and/or experiencing homelessness
• their views on barriers to supporting people in this situation
• knowledge about NHS charging and its exemptions and about potential support under the Care Act
• training received about supporting people with uncertain or restricted eligibility for benefits
• awareness of where to find support for people in this situation

The survey was open between January and May 2022.

Data analysis

Quantitative data from the survey were analyzed using descriptive statistics within Microsoft Excel by BH. Qualitative open-text responses were analyzed using thematic analysis by BH and discussed with the research team to create the final themes in the report. Data from the survey were analyzed separately from data gathered in the focus groups and interviews.
Sample

Purposive sampling was used to select individuals who had experience of supporting people with uncertain or restricted immigration status who had advanced ill health and were experiencing homelessness. They could come from any of the following professional backgrounds: health, homelessness, social care, palliative care, local authorities.

Recruitment

Professionals were recruited for the focus groups via the research team's existing professional connections, via the Faculty for Homeless and Inclusion Health newsletter, and via social media.

Data collection

Three focus groups and follow-ups were held online between January and May 2022. A topic guide was developed by the study team to explore professionals' experiences of supporting people with restricted or uncertain immigration status, who had advanced ill health and were experiencing homelessness, to access health care services or support. Two focus groups were led by CS and a third by AB; all were supported by BH. Reflexive notes and peer debriefing were used following each focus group to mitigate the risk of bias. Focus groups lasted up to 1 h and were conducted and recorded on Microsoft Teams. Some professionals attended interviews rather than focus groups due to scheduling conflicts and challenges.

Sample

To be eligible for the interviews, a person would need to have lived experience of homelessness, uncertain or restricted immigration status, and poor health.

Recruitment

People with uncertain or restricted immigration status, whose health was poor and who had experience of homelessness, were recruited via the attendees of the focus groups for professionals. Focus group attendees were provided with an information sheet that they could discuss or share with people who they felt may be interested in participating. They also supported a connection with the research team, so that the researchers could discuss the study further with the potential participant and collect informed consent for participation.

Data collection

Interviews with people with current experience of uncertain or restricted immigration status, poor health, and homelessness were also conducted online and, if requested, were attended by the professional who had identified the person as a potential participant in the research. A topic guide was developed and used to explore experiences of accessing services that could support their health needs. Interviews were conducted by CS.

Data analysis

Focus group and interview recordings were transcribed verbatim, anonymized and entered into NVivo for analysis using reflexive thematic analysis.23 This involved familiarization with the data by reading the transcripts to gain a sense of the experiences shared. Line-by-line coding of each transcript was undertaken to identify initial codes in the data, which were then grouped into initial themes. These themes were reviewed, revised, and combined where necessary to provide the final themes reported in this paper, which were refined through critical dialogue with all authors. Transcripts were independently analyzed by MY and BH, both female researchers experienced in qualitative research. Data were reported in line with the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines.24
Ethical considerations

The research team is experienced in conducting research with people experiencing homelessness; three are clinicians (two specializing in inclusion health), and the fourth is a health psychologist. The importance of trust and anonymity was paramount in this research, and individuals with lived experience were recruited via professionals who were known to them and who had themselves participated in the research. All participants provided written consent prior to data collection. Participants were reminded that participation was voluntary and that all data would remain anonymous.

Patient and public involvement

We have consulted organizations that support people with restricted or uncertain immigration status throughout the course of this project. Their support has been invaluable in shaping the questionnaire and the topic guide, and we are very grateful for their input and the changes made to this study as a result.

Experiences of supporting people experiencing homelessness who may have NRPF

Just over a quarter of hospice staff reported supporting at least one person experiencing homelessness who was sleeping rough in the past 2 years (n = 13, 26%). A greater number had supported someone experiencing homelessness who was living in a hostel or temporary accommodation (n = 36, 72%) or sofa surfing (n = 36, 72%). A smaller percentage of hospice staff reported supporting people who they believed had NRPF (n = 4, 8%). Half of the sample responded that they did not know whether any of the patients they supported had NRPF.

Discussions of immigration status in hospice settings

Over half of hospice staff (n = 26, 52%) did not routinely discuss immigration status with service users. Almost a third rarely explored this (n = 15, 30%); one respondent (2%) always explored this, and eight (16%) did not know whether this was routinely explored. Similar patterns can be seen around exploring entitlement to NHS care. More hospice staff asked patients about their housing situation and entitlements to certain benefits (see Table 1). It was more common to enquire about a patient's housing status, although over a quarter of respondents (n = 14, 28%) reported rarely or never asking about this.

Access to training about eligibility for benefits or immigration status for hospice staff

Ninety-two percent of hospice staff (n = 46) reported never having received any training about entitlements or the responsibilities of adult social care services for people with uncertain or restricted immigration status. The majority (n = 41, 82%) did not feel confident in accessing advice relating to restricted or uncertain immigration status for people who were homeless and very unwell. Only 6 (13%) correctly identified that it is a hospital clinician's decision whether care can be provided without payment in advance to someone without eligibility for free NHS care. Over three quarters of respondents (n = 34, 76%) selected 'don't know' in response to this question.

Challenges to supporting people with restricted or uncertain immigration status

Hospice staff described practical barriers and receiving a low number of referrals, if any, to support people with restricted or uncertain immigration status. Limited referrals to hospices were thought to be linked to limited awareness, among both people with lived experience and those involved in their care, regarding the potential role that hospices could play.
Where referrals were received, practical issues encountered included language barriers, access issues linked to digital exclusion, and transportation issues linked to limited personal funds. Despite the challenges, examples of how hospices have provided support for people in this situation were also described and are outlined in Table 2.

Participants

Seventeen people with a range of experiences were recruited to focus groups and interviews (focus groups: n = 10; interviews: n = 7). Among participants, the following professional groups were represented: frontline homelessness staff (n = 3), inclusion health staff (n = 7; mental health, social worker, GP, and nurses), government employees involved in immigration and advice (n = 3), legal immigration advisor (n = 1), people with lived experience (n = 2), hospice staff (n = 1).

Participants described how inflexible, complicated, and fragmented services and systems blocked the provision of high-quality care and support for people with restricted or uncertain immigration status who were homeless and had advanced ill health. The complexity of legislation and entitlements to support from local authorities (including accommodation and subsistence) caused barriers to treatment and support for people with palliative and end-of-life care needs whose health was deteriorating. Some people have leave to remain in the United Kingdom but do not have entitlement to public funds. One inclusion health GP described the complexities within the system and the discretionary nature of some decisions:

There's no quick fix unless you can do a 'change of condition' [i.e. removal of their no recourse to public funds status], but they need to be on a particular route [within the immigration system] in the first place . . . You can make an application on human rights and compassionate grounds and then it will be the Home Office's discretion whether that will be granted. You would need a good immigration advisor . . . it takes time. For people with severe care needs . . . the Home Office do have discretion to grant leave . . . (Inclusion health GP)

As indicated in the previous quote, challenges around interpreting different pieces of legislation were common, with many staff feeling that they needed specialist advice to navigate processes. This process was often lengthy, and for people with advanced ill health, access to care and support was often time sensitive. However, access to specialist legal advice, for both the services supporting people in this situation and the individuals themselves, was limited.

Lengthy assessment periods. Participants described lengthy assessment processes regarding eligibility for local authority support. Without swift decisions, people with uncertain eligibility were either waiting in hospital beds longer than medically necessary, at risk of being discharged without the necessary support being in place, or not able to receive necessary treatment because of the lack of a suitable discharge destination. In some cases, this meant that necessary treatment was withheld. The quote below describes the experiences of one man in his 40s who had NRPF, was destitute and had bowel cancer. He had just had an operation to remove a tumor but had been deemed by the local authority not to meet the threshold for support with accommodation due to not having care and support needs.
Without follow-up radiotherapy there was a 50% chance of the cancer recurring. They said they couldn't start the treatment because he didn't have an address and it was outpatient treatment. But then he was never discharged from hospital because there was nowhere for him to go. In the end he'd stayed in the hospital so long he could have had the whole cycle of treatment. It was stupid. He needed lots of medical supplies and stuff like that, so to me, even though he didn't need anyone to help him, that's a care need, isn't it? It was awful. (Inclusion health social worker)

Clearly the absence of suitable accommodation can have a major impact upon a person's ability to adhere to treatment and their overall health.

We had one individual who needed a lot of fluid and special nutritional fluid that he has to feed through a tube into his stomach. He wouldn't be able to store that on the street. Managing their condition, which otherwise would dramatically deteriorate, is hard on the street. I think even that wasn't sufficient to trigger that duty [by the local authority]. I think the provision just isn't there. (Inclusion health GP)

Issues relating to NHS care

In addition to challenges stemming from uncertainty about eligibility for support from local authorities, barriers related to challenges in accessing NHS support were also reported.

Pressured and limited resources. Demands on the NHS are growing, and pressure is building to reduce lengthy waiting lists. This translates into increasing pressure to discharge. Participants felt this pressure negatively impacted the ability of services to take a person-centered approach to care, particularly for those to whom the provision of care may be complicated.

The pressure to discharge astounded me. I've never experienced anything like it in my life. Holistic healthcare's just a thing of the past . . . What's going to happen when I discharge this guy? Within the next 24 hours he's going to be in another hospital somewhere. It's like, well, it won't be here, hopefully. (Inclusion health GP working within a hospital team)

Participants described how shrinking teams and inclusion health services within the NHS were resulting in a loss of expertise and connections within services. This reduced capacity hindered health care professionals' ability to advocate for the housing support from the local authority that was sometimes required for a patient to receive treatment. The cyclical nature of these challenges was clearly described by participants. This resulted in challenges to receiving quality care.

There was only one social worker in the whole of the hospital at that point, so that automatically caused a huge delay to them being discharged anyway. That often caused friction with the ward because of the delay, and the fact we had documented that in the notes and once it was there, you couldn't take it back; they became immediately aware that meant that person was going to be on the ward with them for quite some time, based on the delay from the social worker. (Inclusion health hospital nurse)

Issues related to NHS charging: variation in health care professionals' knowledge, actions, and attitudes. Despite the fact that everyone is entitled to primary care in England, participants shared experiences of people being denied access.

People are just declined . . . with the assumption they had no right to register with primary care. When we know, anybody has a right to register with primary care, it is just heinous. So, yes, I would definitely say denial of care in primary care.
(Inclusion health GP)

There was often confusion and complexity around when someone would be required to pay upfront for care, and this could often result in dangerous delays in treatment. Participants described how, among other factors, an individual's or team's awareness of entitlements to care, and their opinion of the urgency of that care, could have a huge impact on the experience of patients with advanced ill health.

We supported someone who had cancer which resulted in life-changing surgery. GP records showed they approached their GP a year before their cancer diagnosis about a growth in their throat and were referred for a biopsy, but the team responsible for the biopsy refused to do the procedure because they believed they weren't eligible for free NHS care. They told the GP [to] not refer them again until his immigration status was resolved. In their letter they said they were a 'drain on NHS resources'. (Legal immigration advisor)

It should be a clinical decision whether NHS treatment is deemed to be urgent or immediately necessary, and if it is, it should be provided regardless of the patient's status or ability to pay upfront. Participants shared examples of variation in people's understanding, interpretation, and attitudes toward this, meaning people with restricted or uncertain immigration status were treated differently by different staff, even within the same hospital.

The response that we would get from some consultants was really empathic, and not shortsighted, and they were quite understanding and sympathetic to what was going on for them in the community . . . there is one ward in particular that I'm thinking of with two different consultants, and you knew that if one consultant was on, you were going to have to really argue for them to be kept on the ward whilst accommodation was sought for them. If the other consultant was on, you knew it wasn't going to be an issue. (Inclusion health hospital nurse)

Furthermore, participants in focus groups described a perception of intolerance toward people who were destitute, very unwell and with uncertain entitlements. This impatience focused on the difficulties faced in getting them discharged, rather than compassion for how they might need to be supported.

There is greater haste around trying to discharge people, and also there's a measure that we're seeking to deal with, a measure of a kind of intolerance, really, or impatience with homeless people who can't be moved on as quickly. (Charity representative)

This variation may also be a result of, or be reinforced by, the policies in force around charging for NHS care and around entitlements more generally. To begin to address this variation, focus group participants described how working across professional boundaries might improve access to support and lead to better outcomes for people. This might involve influencing decisions around granting someone settled status, or challenging decisions about eligibility for support.

Fear, mistrust, and limited knowledge of the UK system

Late presentation to care services. Participants described how people with restricted or uncertain immigration status often presented to health services at a very late stage.
People put off addressing health, maybe because they feel that they don't want to draw attention to themselves or they feel they're not entitled, so by the time that people are presenting it's really compounded. It's usually complex, there are multiple health conditions, and they're fairly advanced, so it's kind of rare that we're seeing people at the very earliest stages of their condition. It's all pretty chronic. (Inclusion health GP)

The reasons for this were varied. Fear and mistrust among some people in this situation toward the health and social care services, and the system in general, were described as barriers to seeking support.

We see a lot of people self-discharging because they are afraid. They are afraid of the system and they don't know what's going to happen to them, and what's next. It's a fear response. (Inclusion health nurse)

People were fearful of a range of things, from revealing their immigration status due to concerns about being discriminated against, treated poorly, becoming known to the system, or even being deported.

Especially if they are seeking asylum, they may have had bad experiences with state bodies in their own countries and here as well. Their first contact here might have been police, or a heavy-handed immigration official. Hard to build trust. (Inclusion health GP)

Even when people were able to access services, they were sometimes concerned about disclosing personal information, which might have repercussions for continuity of care and for facilitating access to other services and supports.

There's been issues with people having assumed someone else's identity and then they try to unpick it when the person finds out that they're dying, because all this stuff's been entered into the wrong medical record. One woman was using her sister's identity. Her sister was a British citizen. Her sister had just had a baby and she was dying from cancer, so we had to get her an NHS number and try to stop everyone from writing all this stuff in her sister's record. (Inclusion health GP)

Difficulty navigating systems. There was also a sense that the complexity of the benefits and welfare system in the United Kingdom meant that people were not sure what they were and were not entitled to. In addition, many people could not afford to take time off work to explore any health concerns, due to being on zero-hours contracts or working cash in hand. This may have also led people to be fearful of being presented with large medical bills which they would be unlikely ever to be able to repay.

There are people who know that they're ill but don't want to be asked to pay or don't think they're entitled, or you've just got that disengagement with authority, trying to stay under the radar thing, and I think there's the young men mainly who just work until they drop, and then the older people who just keep it quiet. (Day center worker)

Without support, navigating complex systems in a different language was seen as challenging.

Lack of English language skills is a big issue. [If] they can't make themselves understood . . .
even accessing somewhere like the day centers, where they could find someone to help them access a service, is challenging. We only see the tip of the iceberg. There will be many more people that we haven't heard about, those that aren't accessing the services that they need. (Homelessness staff)

We identified several issues around the use of interpreters. Face-to-face interpreters were reported to be expensive, and so charities tend to use telephone interpreters. However, this was not consistently possible. Participants also noted a lack of specialism and choice among interpreters; the choice is often only a male or a female interpreter. For example, interpreters specializing in medical interpretation were not always available. Although hospitals used in-person interpretation services, long wait times (often of days) meant that by the time the interpreter arrived, the person may have left.

Where treatment was obtained for people with uncertain or restricted immigration status, it was often the result of advocacy from staff. A 60-year-old respondent with sickle cell disease and with lived experience of homelessness and uncertain migration status described how an outreach worker was supporting him to access support via his GP. Advocacy from his support worker meant that he was able to receive treatment for his sickle cell disease.

The sickle cell, you get pain in the joints, it troubled my head. The pain in the joints and so on . . . I don't feel any more sickle cell pain now. Sometimes when I sit on a chair for too long, it's sore but I don't like to sit down too long. (Person with lived experience)

Discussion

In the United Kingdom more than 1.4 million people have restricted eligibility for public funds and cannot rely on support from the state should they become unwell, unable to work, or experience financial hardship.25 Around one in five (18%) people with NRPF in the United Kingdom have experienced homelessness or insecure housing.26 This study is among the first to look at the barriers to palliative and end-of-life care access for people with uncertain or restricted immigration status, who have advanced health issues and who are experiencing homelessness. We spoke to a range of people from different professional backgrounds and worked hard to recruit people who were currently experiencing this situation.

The findings elucidate several barriers for people with uncertain or restricted immigration status who have advanced ill health and who are experiencing homelessness with regards to accessing the care and support they need. They demonstrate that current policies around NHS charging are costing lives, concurring with a report from the British Medical Association highlighting how charging is deterring vulnerable groups from accessing necessary healthcare.3 Doctors of the World have also highlighted how these policies cause delays to treatment for people who are destitute but whose immigration status is uncertain.11 In addition, despite there being no requirement for proof of address or ID to register with a GP practice, this is still being asked of people and acts as a barrier to accessing primary care. Resources have been developed to support health professionals in addressing these barriers.
27 In addition to issues around NHS charging policies, the thresholds for receiving accommodation and subsistence from local authorities for people with restricted immigration status are high and variable. Though local authorities may have a duty to provide support for people who have recognized care needs, regardless of their immigration status, under the Care Act (2014),28 our research has highlighted how local authorities are often surprised at being asked to use the Care Act for this purpose, and our respondents described situations where people with advanced illness and palliative diagnoses were found to be ineligible for support. This 'high bar' for support and the lack of consistency in what constitutes a care need are major barriers to support. Without statutory guidance, it remains open to interpretation. As such, staff frequently spend large amounts of time advocating for support on their patients' behalf. The protracted decision-making processes evidenced here mean that people are being denied care due to a lack of suitable accommodation, resulting in devastating consequences. This variation in the responses of local authorities to supporting people with NRPF, and the delays that this causes, are consistent with the findings of prior research.7,11

Several factors preventing people with restricted or unsettled immigration status from accessing health care support until very late in their illness were also reported. These included previous negative experiences of health care services and subsequent mistrust, as well as more practical challenges such as understanding systems and booking or attending appointments. These issues have been highlighted in previous studies,7,[29][30][31] but not previously in relation to palliative and end-of-life care support for this population.

Implications for practice and future research

The findings from this study suggest that, where they are in place, joined-up teams consisting of health, homelessness, and social care professionals seem to be valuable for navigating challenging and uncertain situations and advocating for access to support. Multi-professional teams also seem helpful for ensuring that everyone has the knowledge and support needed to advocate for access to support for people in this situation and those close to them. The need for multi-professional working has been recognized in guidance in the United Kingdom.32 The support of like-minded colleagues is essential to maintain resolve and continue to advocate for patients despite the challenges that are faced.

In addition, easier access to immigration advice is essential in securing access to support for people with uncertain or restricted immigration status. These services have been severely cut in recent years.5 Law centers and other charitable organizations operating in this field are an excellent source of advice and support, and health care professionals should consider exploring those operating locally and nationally to support their advocacy for patients.

There is clearly a need for additional training around entitlements to benefits and support, and around NHS charging, within the health (including palliative care), homelessness, and social care sectors, in relation to accessing support for people with advanced ill health.
29,30 Training could include raising awareness that it is ultimately a clinician's decision whether NHS treatment can be given without charging upfront, and include the need for Care Act assessments to consider circumstances beyond the hospital. There is an urgent need to raise awareness among palliative care professionals about entitlements and eligibility.

There is a need for more consistency about who is eligible for support under the Care Act. In addition, local authorities do have a power to provide support, including accommodation, even if someone is deemed not to reach the threshold under the Care Act. Cases described in this study, such as people with advanced cancer, could be supported in this way.

Good practice guidance to help councils in England provide a holistic response when an adult with NRPF is experiencing homelessness (supported by the UK Association of Directors of Children's Services) has been produced by the NRPF Network in the United Kingdom.33 If this guidance is enacted, there would be more consistency and more protection of the most vulnerable.

Limitations

The recruitment of people with current lived experience in this area was challenging. This may be related to a hesitancy toward participating in research studies, given what was described in both this and previous literature about not wanting to become known to systems or services. We attempted to recruit people with lived experience via professionals who were known to them; however, there will be many more people with restricted immigration status who may benefit from palliative and end-of-life care who were not known to professionals. Their experiences may differ from those of this sample. Future research would benefit from trying to expand the inclusion of the voices of people experiencing this combination of issues.

Conclusion

Fundamentally, current UK policies around supporting people with uncertain or restricted immigration entitlements are complex and create barriers to accessing care for people who need it. If we are truly committed to addressing inequity in health care access, we need to be more proactive in supporting people, regardless of their immigration status. There is a need for future research and policy work to challenge existing policies and practices that are perpetuating rather than addressing inequalities. There is a need for training around eligibility for support under the Care Act, as well as a recognition that local authorities also have a power to support people even if they do not reach the threshold for having care and support needs. In addition, there is a need for NHS clinicians to understand their role in determining whether treatment can be given in advance of payment for people who may not be entitled to free secondary care. In the meantime, there is a role for the hospice community in demonstrating that hospice and palliative care services are there to support everyone. Working alongside organizations that already support people with restricted immigration status, who are experiencing homelessness and whose health is poor, would be a good first step toward raising awareness of what palliative care is, who it can help, and how it is largely separate from the NHS in the United Kingdom. It is encouraging that the hospice community expresses a desire to widen its reach and support for those who previously may not have been accessing its services; now, more than ever, we need to show that we mean it.

Table 1.
Question: To the best of your knowledge, when you have a referral for a new patient, are they asked about any of the following?

Most have been receiving [NHS] treatment for their terminal diagnosis. The issue has been more around local authorities and social services . . . pushing for humanitarian reasons for people to be housed has been a big barrier. One individual was a rough sleeper. Sadly, he passed away. Social services kept pushing back and said that he didn't meet the threshold for their care needs, and he was rough sleeping with a terminal diagnosis of cancer . . . (Inclusion health hospital nurse)

Table 2. Types of support provided by hospices to people with restricted or uncertain immigration status, who were experiencing homelessness and who had advanced ill health.

Access to advice is almost nonexistent anyway. Access to immigration advice is hard, but harder with the practicalities of someone very ill, in hospital and [who] has [mental] capacity issues. That's a real barrier. You want to get access and services sorted . . .
Surrender to the Void: Life after Creative Industries

An extended review of Terry Flew's The Creative Industries: Culture and Policy (Sage, London, 2012).

The book summarises well-known academic and policy work about the peculiarities of cultural or creative products-their riskiness, their use of flexible creative labour, the marketing and management strategies used to overcome this, and so on. But this is distorted by the constant need to defend the normative claims of the creative industries agenda; not just to describe them but to describe them as a good thing.

The world-changing scenario of mid-1960s Manhattan immediately shifts three decades later to Britain's New Labour, whose invention of the term 'creative industries' somehow set the seal on these global transformations. Flew claims that creative industries 'is somewhat unusual as a concept in social and cultural theory as it has its origins in policy discourse'. (2) This is rather tendentious. Many designations originating within policy make their way as objects of investigation into social and cultural theory-'social exclusion', 'consumer choice', 'privatisation' or 'classless society', for example. These have proliferated further since the rise of policy think tanks over the last thirty years or so. Some are passing fads, some have more traction. Whether they become robust concepts depends on how they survive the rigorous testing of thought and debate. The creative industries began not as a concept but as a tactical political manoeuvre by the UK's Department of Culture, Media and Sport (DCMS) to secure more funding for culture from the Treasury. The replacement of 'cultural' by 'creative' may be symptomatic of more profound economic and cultural shifts-something this book quite legitimately argues for-but it was never meant to carry the freight of a full-blown theoretical concept. It was in academic writing associated with Terry Flew and his colleagues at the Centre for Creative Industries and Innovation at Queensland University of Technology (CCI) that it was fleshed out as a sociocultural concept. It is significant that this is never mentioned in the book.
Chapter one outlines New Labour's introduction of the term 'creative industries' within policy discourse. A brief summary of Foucault's 'classic account of discourse analysis' (Foucault is a regular if contentious visitor to this book) and a 'lit. review' paragraph on policy studies are passed over without examination. This is a shame because it skips a key aspect of Foucauldian discourse analysis and of the cultural policy studies school of which Flew is part. 'Governmentality' is the power to name and classify and to assemble people and things around a particular project of government. If creative industries subsequently became a concept, then it did so in order to take advantage of this on-going policy project.

The objections to creative industries were two-fold. First, that it was incoherent as a concept. Briefly, 'creativity' was far too vague to characterise this particular industry-were not science, health, financial services creative?-and it consequently failed to identify the specificity of the object (its cultural element), thus undermining its effectiveness as policy. Second, that as a cultural and economic strategy for the United Kingdom's future it was either misplaced (it could never deliver on its economic promises) or undesirable (it reduced culture to economics). It was perfectly possible to take the first position and not the second; indeed, those arguing for the 'cultural industries' did this, as did many academics and consultants, who wanted to take (what they considered) an incoherent term and make it adequately describe this sector. This possibility is not considered in the book, and there is a clear reason why not.

The 'open secret' (as Slavoj Zizek would say) for most academics reading this book is that in picking up the term 'creative industries' CCI were not just fleshing out a policy term into a full-blown concept, they were developing a brand used to promote a re-vamped arts faculty and a newly constructed 'creative campus'. This brand did not just rely on the growth in size and profile of the creative industry sector-new careers, new skills, new research opportunities-but on the profile of the policy term. There was nothing new about New Labour's identification of the increased economic importance of culture or the tendencies to convergence brought about by digitalisation-these were already well rehearsed in the cultural industries literature of the 1980s and 1990s. What was important was the brand value of the term 'creative industries' when embraced by a high-profile government and successfully (if unexpectedly) exported around the globe. The need to retain the brand meant the two objections-its conceptual confusion and its political undesirability or unfeasibility-must be rolled together and rejected as one.
Objections to the 'creative industries' as a concept have to be characterised as objections to the creative industries as a sector. So when Flew worries about the current UK government turning away from creative industries and warns of 'Wimbledonisation'-'where Britain retains a strong symbolic association with the field, even where most of the ownership and action has moved elsewhere' (31)-he is not (presumably) expecting one of the world's most influential sectors to up sticks and move to Asia. He is worried that the brand might move elsewhere. If it is abandoned or marginalised in its policy homeland, the problem cannot lie with the concept but with that declining polity which has missed the wave of the future.

These two moments, the explosion of commercial popular culture and its eventual recognition at the level of policy, frame the book. They also provide the structural binary that divides the field between creative industry supporters and its critics in cultural studies and the critical humanities (economists just get on with it). The critics are engaged in a normative attack on the creative industries as a 'Trojan horse' for neoliberalism and are thus the main focus of the book's polemical engagement. The binary is in effect double. First, that 'between those who see popular culture as an essentially democratising force in society and those who understand it in more critical and ideological terms'. (7) Second, between those who wish to understand this growing sector in order to advise policy makers and those in these industries how they may work better in terms of economic indicators such as employment, sales and exports, [and those whose] purpose is to better understand [them] in order to more effectively critique their social and cultural impacts. (7)

These binaries allow Flew to deflect all criticisms of the creative industries-even those concerned with its conceptual coherence-into the camp of either those refusing democratic popular culture (arts and cultural elitists/traditionalists) or of the 'revolutionary romantics' (186) who refuse any truck with capitalist policy-making-or, indeed, both. The notion that one can both embrace and be critical of popular culture, just as one can be critically opposed to a particular policy practice and its impacts (such as the creative industries agenda) and seek to change it to a different one, is simply not possible according to Flew's mutually exclusive binary system. If, for example, one were to acknowledge the transformative power of popular culture and suggest that its reduction to a narrowly economic agenda is not desirable, then the creative industries brand would be in serious jeopardy, and so such a position is disallowed.

In The Guardian recently, Jarvis Cocker of Pulp wrote of the Beatles:

Four working-class boys from Liverpool who showed that not only could they create art that stood comparison with that produced by 'the establishment'-they could create art that pissed all over it. From the ranks of the supposedly uncouth, unwashed barbarians came the greatest creative force of the 20th century. It wasn't meant to be that way. It wasn't officially sanctioned. But it happened-and that gave countless others from similar backgrounds the nerve to try it themselves. Their effect on music and society at large is incalculable.2
The incalculable effect cannot be reduced to the profit Don Draper might recoup. Its economic effects are enormous, so much is clear, but they are also ambiguous, double-edged. The same can be said of their social and cultural impacts. The creative industries agenda, as outlined in this book, simply cannot produce such a nuanced understanding of the dynamics and effects of the rise of popular culture, nor of finding ways to intervene in this growing sector for a range of economic, cultural and social purposes. Instead, it consists in accepting every development in commercial popular culture as completely legitimate and desirable, and every government policy to support this as something to be welcomed without reserve and facilitated as best we can. This constant labelling of all critical thinking as 'revolutionary romanticism' or (more prosaically) 'Marxist' corresponds to the violence inherent in the governmental ability to designate and assemble people and things in policy discourse, and to delegitimise those outside this designation. It is a form of conceptual violence dedicated to the protection of a brand, which will refute the critics by any means necessary.

This conceptual violence can be found in the argumentative tropes used by Flew. There is the use of lists or thematic headings to summarise different strands of thought, as in a textbook. They give the cumulative impression of a body of evidence in favour of the creative industries but on closer inspection are often highly contentious, tangential or directly opposed to the overall thesis. In the chapter on public policy, where we might expect a sustained analysis of the emergence of the creative industries, we get a list of ten factors that, when added up, are supposed to make creative industries (for the doubters, the last section is a critique of those who would call this neoliberalism). In chapter three the intellectual antecedents of the creative industries concept are itemised. From the Frankfurt School and its critics we move to 'political economy', then to 'cultural studies', then 'cultural economic geography', 'cultural and institutional economics' and, finally, 'production of culture/cultural economy'. These headings may just work in a textbook overview of different approaches to culture and economy, but here their arguments are all lined up as milestones on the road to the creative industries moment. We are presented with a series of ill-digested summaries whose consequences for the overall thesis are not examined: what is important for Flew is that they somehow discuss culture and economy together and therefore must legitimate the creative industries agenda.
A second trope is using authors who are otherwise critical (or who could be assumed to be critical) of the creative industries agenda to make the argument in its favour. Thus the Frankfurt School is taken to task for failing to understand the complexities and contradictions of the cultural industries, but the authors used to do this (Nick Garnham, Bernard Miege, Bill Ryan) are writing in the Marxist tradition, would vehemently resist being roped into a creative industries agenda and would most likely be the first to be thrown into the revolutionary romantic gulag. Jean Baudrillard is used to criticise Marx's use/exchange value couplet in favour of sign value and thus the possibility of a cultural commodity. But the use to which this is put by Baudrillard - a highly critical account of the economy of the sign - is nowhere to be seen. Marcus Westbury and Ben Eltham are enlisted to critique subsidies for large-scale arts institutions, but their main point about subsidies for smaller organisations is dropped. The most egregious example (other than Foucault, as we shall see) is the use of Hesmondhalgh (labelled throughout the book as Marxist and thus carefully quarantined) to refute his fellow Marxists (such as Maurizio Lazzarato and Angela McRobbie) arguing that the creative workforce is part of a new 'precariat'. Hesmondhalgh's telling critique of the proposed alliance between those in precarious labour - the Filipino cleaning woman and the harassed new media worker - is used to reject the wider critique of the degradation of labour conditions in the creative industries, even though Hesmondhalgh has published extensively arguing this very point.

A third trope Flew uses is the 'neutral' textbook form: to lay out literature critical of the creative industries without addressing its specific points, and then present more favourable literature as if this were a corrective to the former. This is made worse when crucial evidence in favour of the creative industries is drawn from Flew's CCI colleagues without this being acknowledged. Combined, these tropes make for a disconcerting book, with its arguments morphing like an Escher drawing.

The chapter 'Globalisation, Cities and Creative Spaces', for example, is a fairly standard summary of the literature on how cities bring locational advantages to creative industries - focusing mostly on Allen Scott. It makes some reasonable points about the conceptual confusion around creative clusters and how top-down clusters have not been successful policy-wise. The final section of the chapter discusses Richard Florida's creative class thesis as a further example of how urban policy might be used to attract professionals and thus promote creative industries. With one page to go, Flew notes the objections to Florida: technical ones about cause and effect (does the urban milieu come before or after the creative class?) and political ones about gentrification. These critiques are standard for critics of 'creative cities' and one might expect a defence of a concept so frequently linked with creative industries. But no. Gentrification is the displacement by creative professionals of small-scale cultural producers and other vulnerable users in favour of high-end consumption-led development. This has been the contention at least since Sharon Zukin set out to explore the Lower Manhattan of Warhol and Sterling Cooper, 4 and it can be seen in the 'artists against Florida' demonstrations in Toronto.
But for Flew - after pages summarising Allen Scott's urban creative economies and with insert boxes about Creative London - creative class arguments become equated with 'hipsterisation strategies' and 'arts and culture' strategies dubiously claiming 'to benefit the wider economy'. (156) The consequence of promoting this line is that Jamie Peck's withering critique of both Florida and the creative industries agenda is reduced to a simple attack on investment in 'arts and culture'. If the creative cities agenda is reduced to arts, culture and 'hipsterisation', it is in the suburbs and unfashionable non-creative cities (such as Las Vegas!) that creative industry growth is happening. The evidence for this - ignoring the overwhelming mass of empirical evidence showing that the concentration of creative industries remains in large metropolitan centres - comes from Terry Flew himself and his colleagues, along with Chris Gibson, who is on record as strongly opposing the idea of creative industries. Criticisms of gentrification and consumption-led development are instead used to justify a thesis - the rise of the creative suburbs - for which they were never intended, but this is, as we have seen, a key trope in the book. The rapid morph from the relational advantage of cities for creative industries to a deflection of the gentrification critique onto the 'hipster' eulogised on the very first page of the book is simply driven by the fear that Flew might be inadvertently favouring metropolitan arts and culture over the everyday creativity of the suburbs.

The division between those for and against popular culture informs an assumption that arts policies are intrinsically elitist and that all attempts to assert cultural value within the economics of the creative industries - as the cultural industries approach did - are a surreptitious arts policy. Garnham, for example, is used as a critic of traditional cultural policy - that subsidies to protect art from the market can only be reactive and ineffectual. But this was part of his argument for an intervention within the economy, using economic tools, to secure cultural policy outcomes from that intervention. This cultural industries agenda is nowhere discussed, other than as a stepping-stone to the full recognition of the creative industries agenda. This fear of culture pervades the highly technical discussion on defining the creative industries (this chapter is one for the enthusiasts). The problem with the DCMS list is that it is just a list - how do all these activities hang together? Two solutions emerge. The first, taken by Will Hutton (and the European Union), consists of trying to identify a specifically cultural sector and a wider creative sector. This model derives from David Throsby (an economist, a group otherwise held not to concern themselves with such niceties) whose concentric circle model has core arts at the centre followed by cultural industries and creative industries. The problem with these models is that a set of activities associated with the traditional arts is held both to involve a purer creativity and provide the original input into the value chain. Flew, and many others (including myself), disagree with this. However, Hutton's intent (like the somewhat fudged European Union version) was to identify products that were primarily cultural ('expressive value') and those that included cultural/expressive inputs but also had material-functional elements. There are lots of problems with this account, but it is an attempt to get over the problem that 'creative' is
far too broad a concept and that policy might need to identify a more specific sector whose primary product is 'cultural'. For Flew, this distinction between expressive and functional value is one that is 'difficult to maintain' (27) and one that he cannot accept. He suggests that the distinction reflects a deeper tension in the creative industries concept in the United Kingdom: is it 'an economic policy associated with promoting generating [sic] successful industries and new forms of IP' or a 'policy to support the arts and cultural sectors'? For Flew, Hutton's argument is clearly a case of the latter, and he associates Hutton with Tessa Jowell at the DCMS and her attempt to re-assert 'excellence in the arts'. Despite Hutton's explicit plea for expressive value not to be equated with traditional art forms, and his invocation of video games and social media, Flew simply dismisses 'expressive value' as a 'traditional conception of aesthetics'. Flew persists: how would you identify expressive value in media? In genre? 'Drama and documentary but not in soaps'? But what about comedy? And 'if there is expressive value in Metal Gear, then why not in Top Gear?' (28) It is not clear who exactly Flew is having an argument with here. The answer to where expressive value lies is 'content'. That is, all of the above. Hutton is not seeking to exclude Jeremy Clarkson and Michael McIntyre, just suggesting they might differ from a Dyson vacuum cleaner.

The second solution, preferred by Flew, is referred to as the NESTA model (but was in fact authored by Burns Owens Partnership, Manchester's Creative Industries Development Service and myself). 5 With this model there is no distinction between cultural and creative, just between different business models - experiences (live), originals (one-offs), creative services (design and so on) and content (media, games). Rather than an original invention, the model draws on some well-established distinctions between 'edit and flow' and 'complex and simple' (in the sense of numbers of people involved in production) products that the cultural industries literature has long rehearsed. Flew's assertion that the NESTA model was there simply to generate successful industries is incorrect. The NESTA model was there to facilitate policy decisions. If you want employment growth, you might choose services or content.
Other priorities (such as urban regeneration) might emphasise museums or art or live music. That is, the model did not flatten all to a list but allowed intelligent policy choices to be made across a range of priorities based on different economic dynamics. Flew sees it as simply a taxonomy to direct economic investment; for Flew, having a cultural and an economic policy is simply not thinkable. This is not to say that Flew has no cultural policy, it is just that it is not made explicit:

The DCMS Mapping Documents were described as 'nothing less than a new manifesto for cultural studies' (Hartley, 2003: 118), as they flattened the traditional hierarchies of cultural authority and privilege, sitting art alongside architecture, software with Shakespeare, and Big Brother with the British Museum. Alas, it was not to last. (22)

The quote is revealing. It endorses the cultural policy approach famously associated with John Hartley but with no explicit exposition or discussion, and it uses yet another CCI author as an authority without acknowledging their affiliation. More crucially, this 'flattening' of authority relates purely to the matter of public subsidy - art, Shakespeare and the British Museum now have to slug it out on the market with Big Brother, software and (rather oddly) architecture. That is, the market decides everything in the end. Leaving aside that this was never anywhere near the intentions of New Labour, this single criterion of justification is precisely the basis of that accusation of neoliberalism and populism levelled by the creative industry critics. But its implicit, almost utopian presence points to something else: the combination in CCI of a governmentality approach when it comes to art and publicly funded culture (exemplified by Tony Bennett's 'culture is the object and instrument of government', quoted by Flew) with a Hartley-esque celebratory approach when it comes to popular culture. Why the latter can't be governmental is not clear, but in this account it sounds suspiciously like James Murdoch's claim at the 2009 Edinburgh Television Festival that 'profit is the only guarantee of independence'.

Which takes us to the more polemical final chapters. By the time we get to the last chapter on public policy it has been established that cultural industries are either a precursor to the more fully developed creative industries or a lapse into an art-centred public subsidy policy. That settled, Flew moves on to enumerate the diverse currents which have now converged onto the new creative industries - from 'technological change' and the 'impact of political shifts' to 're-thinking innovation policy' and the 'new politics of copyright'. There remain only the die-hards who associate creative industries with neoliberalism to deal with, which is the subject of the final section and conclusion.
The distinction between empirical description and discursive construct outlined at the beginning has now been simplified: 'the emergence of creative industries policy discourse [is] a response to wider trends in media and cultural policy'. (176) But there are others (the 'critical humanities' of course) who see it 'less as a descriptive category of a distinctive way of framing media and cultural policy questions but instead as an ideological category' (176) - in particular as a manifestation of neoliberalism. The sense that discourse does not describe, but brings into being, real entities has disappeared; Flew now has a binary of those who would simply describe creative industries against those set on portraying them as neo-liberal ideology. In response, Flew's rebuke is that neoliberalism, like postmodernism, means everything and nothing and, like evil imperialism, is 'everywhere and in everything'. That there is a tendency to characterise all manifestations of contemporary capitalism as neoliberalism and blame this for present woes is clear. However, to recognise a need for a more precise definition is not the same as dismissing the concept - otherwise creative industries would have disappeared long ago. Precise definitions of neoliberalism do exist and are used precisely, and it should be the task of a textbook such as Flew's to bring these out. Flew uses Andrew Gamble to disentangle some of its strands - an economic school, the programs of Thatcher and Reagan, the new global order - but does not discuss how these might relate to the creative industries. The point is simply to show neoliberalism is a complex concept and therefore cannot be applied to them.

Flew then argues that the Marxist dominant ideology thesis (current default: neoliberalism) is functionalist; that it does not understand that markets have existed for a long time across different social formations; nor that neoliberalism is related to liberalism and thus to democracy (quoting Ernesto Laclau and Chantal Mouffe in the familiar by-any-means-necessary trope). We are told that neoliberalism has not spread across the globe (the example of China shows this, pace David Harvey).
Finally, one critical (read: Marxist) cultural studies person says neoliberalism is a withdrawal of the state and another says there is a growth in state surveillance: therefore the concept is incoherent. After what is frankly a woeful passage of argumentation, Flew concludes that whatever the problems with creative industries, 'the criticism that it is emblematic of or furthers neoliberalism is one that now needs to be discounted'. (191) Not once does Flew engage with the specific criticism of the creative industries in relation to neoliberalism; he's put a stop to such criticism by abolishing neoliberalism. Deprived of its support, all that remains of his critics is a self-indulgent 'rhetorical flourish and a burnishing of [their] radical credentials'. (190)

The notion that there has been a fundamental shift in relations between state and economy, bringing not only the erosion of public provision but the active involvement of the state in breaking down barriers to markets and (in the public sector) establishing quasi-markets, is pretty incontrovertible (for good or ill). That, as Flew argues, it has always been portrayed as negative (something Flew seems to take as an argument against the concept) is a reflection of forty years of its 'success', perhaps now come to some sort of impasse. The complex confluences involved in such a program are to be expected. Its radical modernising program was introduced by cultural conservatives (Thatcher and Reagan) waving traditional values and the flag. The increasing sense that culture too could be part of this radical restructuring of the polity has not only been the subject of much cultural and sociological work but an essential aspect of cultural policy studies as it emerged in Australia in the late 1990s.

If culture is the 'object and instrument of governance' then it is perfectly reasonable to explore the changing ways in which the cultural self has been reconstructed over the past forty years. How can such large-scale changes be unrelated to the cultural transformation evoked by Warhol's bohemian Manhattan? The creative self, the foregrounding of creative innovation, the construction of identity around consumption and the occupations and products that stem from these - if the changes designated by neoliberalism have occurred (for good or ill) then how can they not be implicated in the creative industries? This does not mean these industries have to be rejected tout court, but some critical appraisal of their relationship to neoliberalism (which might even be oppositional in certain contexts) is surely appropriate.

Flew misrepresents critics of neoliberalism as attacking a 'dominant ideology', but the main tendency has been to approach it from the governmentality perspective on which cultural policy studies itself is based. This aspect of the argument draws on Foucault's notion of power as constructive, endlessly generative of new subjects and apparatuses of power to control them. It is not an ideology, though it sets ideas in motion as part of its operations. Flew refuses such an accommodation between 'Marx and Foucault' (that is, a critical Foucault) in a rather bizarre fashion: by making Foucault reject Marx and then tentatively embrace neo-liberalism. Escher is in full flow here. For Flew, Foucault's Birth of Biopolitics can't be used to develop a critique of neoliberalism in the age of Bush because he was writing in a different period.
6 And in this period Foucault was critical of Marxists and Marxism and had a different view of power, politics and the state to them. Indeed, he calls on socialism to develop a theory of governmentality. Though Flew means this as a counter-argument, none of the neo-Marxist Foucauldians mentioned would demur from any of it. Flew then refers to Michael Behrent to the effect that Foucault was presenting a 'qualified endorsement of indirect methods of exercising governmental power preferred by the neoliberals, particularly when contrasted to the top-down "social statism" of the PS and PCF' (Parti socialiste and Parti communiste français). (180) 'At any rate', continues Flew, 'Foucault rejects the easy critique of neoliberalism as ideology'. He concludes by telling us the real problem Foucault presents for the creative industries does not concern neoliberalism but simply the question of 'too much or too little government' - public policy being about getting the balance right. Nobody with the slightest acquaintance with his life and work would accept Foucault as a proto-neoliberal, or that his critical analysis of contemporary governmentality boiled down to a nonsensical (for Foucault especially) question of 'too little or too much governance'.

This conjunction of Marx and Foucault takes us back to the cultural policy studies moment from which this elaboration of the creative industries concept emerged. We might say cultural policy studies did two things with Marx. It separated the analytical from the emancipatory; Marx's theory and methods might still be useful but the emancipatory project that went with it - the reconciliation of subject and object of history, if you like - was theological. Second, drawing on Foucault among others, writers such as Tony Bennett suggested Marx had no theory of politics or governmentality: ultimately the actions of the state simply responded to the deeper logic of capital or 'the economic base'. 7 Three developments flowed from this. First, governmentality was now not conceived as ideology but as actively constructing the reality to which its actions were directed (a finding developed further in the cultural economy literature). Therefore it had a much greater scope for autonomous action than the Marxists would allow. Second, this expanded field of governmentality included that very culture which claimed autonomy from economy and state. This cast the emancipatory promise of culture - transposed by cultural studies to 'the popular' - in some doubt. The creation of the cultured self was a key site on which the modern state had built its foundations. This being the case, third, cultural politics had to be played out within the parameters set by the state rather than claim some transcendent critical purchase. Tony Bennett, writing of cultural policy in 1992, suggested: Intellectual work [should] be conducted in a manner such that, in both its substance and its style, it can be calculated to influence or service the conduct of identifiable agents within the region of culture concerned.
8 Tony Bennett's call in that same piece to learn to 'talk to the ISAs' (ideological state apparatuses) can be understood in a certain context in which radical critique went hand in hand with its own marginalisation. Indeed, becoming a 'situated intellectual' is now perhaps part of the modern academic persona. Even if we accept this assessment, two points need to be made. First, the recognition of limits does not necessarily mean an abandonment of critical thought; Foucault might not promise emancipation in the classic Marxist sense, but he was always clear-eyed about power. Second, between 'influence' and 'service' lies a whole range of choices, from crusading activist, through annoying gadfly, to full-on functionary. A growing tendency within cultural policy studies was that to influence or to serve one had to drop the critical thought, which is mostly equated with 'Marxist'. Flew's book clearly bears the marks of this pragmatic turn - setting out to provide governments with the policy instruments to promote the economic growth of the creative industries and bemoaning the infantile antics of those who hold to out-dated critique. But there was a problem - did not cultural policy studies imply that all we could do was choose between different governmentalities? Without its emancipatory charge, did not cultural policy become mere administration?

John Hartley's work broke this impasse. Hartley agreed that art and culture were about governmentality; they were elitist and used to dominate the lower classes. That was not the whole story. Effectively overturning Bennett's proposition about governmentality, Hartley (notably in his 1999 introduction to Uses of Television) argued for the progressive self-education of the masses, the citizen-consumers, through their own self-generated popular culture. 9 The explosion of commercial culture from the 1920s and the spread of the internet were the end points of this emancipatory process. We can now see how the two themes that frame Flew's book - the democratic promise of commercial popular culture and the need to service the interests of policy - culminate in a stark dichotomy between governmentality for art and culture (elitist, dominating, yet for all that characterised by 'market failure') and emancipation through popular culture. It is equally clear that the charge that creative industries reduce culture to economics therefore misses the point: the market is the privileged carrier of popular culture and thus any attempt to assert some cultural element which might temper the economic can only be an elitist attempt to reinscribe the hierarchical value of 'art'. The pursuit of the economic agenda for the creative industries is at the same time a pursuit of the democratic popular culture agenda.
What stands in the way of this agenda? Obviously, the critical humanities and other revolutionary romantics, but once these have been dispatched to the dustbin of history there are three final issues that close the book. First, definitional questions still need work, especially the task of understanding the links between creative industries and 'entertainment'. But a second, larger problem looms - the hourglass structure of the creative industries: where a relatively small number of gate-keepers - be they major performing arts companies and centres, large media corporations, or cultural funding bodies - constitute a distributional 'bottleneck' between the large number of prospective creative content producers and their potential audiences. (191) This unfortunate propensity of the creative industries sector squeezes out 'individuals, small groups and SMEs' (small and medium-sized enterprises) who lose out when faced by the 'political power and lobbying clout of the incumbents in the sector - large corporations, established trade unions and producer organisations'. (191) The internet and consumer groups will help challenge this and allow in these excluded individuals and small companies (though presumably not the hipsters in the metropolitan centres), who are now 'the mainspring of innovation in the arts, media and cultural sectors'. (191)

Saving this rare programmatic statement for the penultimate page means it receives no elaboration, but it is as telling as the statement about Big Brother and Shakespeare. Flew completely ignores the many pages in which he has presented the findings of political and institutional economists about the specific dynamics at play in the creative industries. The reason we have large-scale media corporations is not because they are good lobbyists (though they are that) but because of the way they have dealt with the 'market forces' through which they must make a profit. Economies of scale, massive up-front capital costs, large marketing budgets, vertical integration, managing a dispersed, autonomous labour pool - these are some of the characteristics long noted in this risky sector. They are competing for limited attention and free time with uncertain products in a volatile market. That is, it is the realities of cultural commodity markets that create the 'distributional bottleneck', not the gate-keepers from the cultural sector or the corporate domination of public policy.
Increasing the access of SMEs to the market has been one of the main focal points of cultural and creative industry policies for the last thirty years. There are a number of reasons for this attention: policy-makers have looked to local economic growth at city and regional levels; sought to increase access to participation in the sector for economically and socially excluded groups; promoted diversity of cultural expression, and so on. Policy-makers have had to face the realities of creative industry dynamics and structures with a range of policies, such as identifying potential subsectors; improving access to finance; promoting cultural entrepreneurship; identifying local products and skills; providing key infrastructures and much more. Despite its paeans to policy, none of this nimble and often precarious policy agenda is discussed in Flew's book. Neither does the book discuss real cases in the industries, where it is clear that larger companies are finding ways to draw on the innovation of SMEs and absorb it into their own structures - as they have attempted to do for the last century. Furthermore, the levels of innovation in large organisations, such as Apple or the BBC, often far outstrip those of SMEs. In short, the notion of removing the bottleneck in the creative industries sector bears no relation to the sector's realities and betrays a deep naivety about the nature of markets - it amounts to saying: if only government would step back from supporting the big corporations and cultural gate-keepers, then the small businesses would flourish in the sector. It's the equivalent of the Tea Party for creatives. Equally, the idea that the internet is going to achieve this democratisation of production-consumption of its own accord bears no scrutiny in an age of Amazon, Apple, Google and Facebook. In effect we are left with a utopian fantasy in which, once the distributional bottleneck is broken apart by the internet, there will be the ecstatic creative communication of everybody with everybody, all consuming and producing interchangeably.

Flew's third challenge is to declare that we must move from a creative economy to a creative society. It is highly appropriate that the two themes of the book should come together on its final page. They converge in the figure of Li Wu Wei, a party bureaucrat-academic from the Shanghai Academy of Social Sciences, who calls for the universalisation of creative industries. Li Wu Wei's book is a fairly banal manifesto for creativity commissioned by the national government in Beijing, which is translated by another CCI colleague. 10 Li Wu Wei's call for universal creativity prompts Flew to ask if we should not be thinking about a future creative society. The logical consequence, for Flew, is that the creative industries should move from a niche position to become central to '21st century culture and policy'. (192) This utopian vision of the creative economy - that is, of everybody consuming and producing creatively - is nowhere else discussed except at this belated stage, nor is the question of how 'creativity' might ground a social order addressed. What is striking is how far this vision is an unreflective repeat of other such visions of modernity: the emancipatory promise of 1960s' counter-culture, the interwar avant-gardes, traditional bourgeois aesthetics (Schiller), and indeed of that 'Young Hegelian', Karl Marx.
While Flew appears oblivious to these historical echoes, they take us back to the question of modernity and its deepening or second acceleration since the 1960s. Culture has changed and has become intertwined with the economic in ways we are still figuring out. What is clear is that, like the rest of modernity, it brings both opportunity and danger. What these might be and how we might face them are crucial, open questions. We have gone beyond the moment of cultural policy studies, which is in danger of being stuck in a sterile opposition of pragmatics or (impossible, totalitarian) revolution, and consequently trapped by its refusal of any function for culture other than unwitting governmental compliance and determination. While this has produced some lucid puncturing of the self-delusions of cultural reformers and radicals alike, it has also produced pragmatic accommodations with the agendas set by government indistinguishable from the most servile functionaries of authoritarian states - except that the latter mostly do it under duress. What is striking about the cultural policy studies agenda is the thinness of its achievements - it talks real world, commercialisation and industry, but has little to show in these areas; cultural think tanks, consultancies and activists have been more rewarded for their willingness to ask radical questions. The obedient dog of policy studies has delivered the body of a once fearsome cultural studies to the feet of its master, who has politely and gingerly picked it up and put it on one side.

We need the moment of critique, of the negation, in order to engage with the present. This will include working with (what can no longer be called in polite company) the ISAs; but it will also include opposing them. 'Culture' as a sphere of (relative) autonomy and its promise of emancipation and fulfilment rapidly became characterised as a site for discipline and biopolitics. But it was never just that, as the late Foucault began to make clear. Its emancipatory promise remains elusive, but it remains nonetheless. It should be said that this was something John Hartley was correct to assert, but for him that promise could only be through commercial popular culture; the rest is elitist noise. Cultural policy inevitably has to work with government, as Adorno recognised in 'Culture and Administration', but its task is not simply to extend the remit of the powers that be or indeed to be reduced to the promotion of its economic dimension. 11 Cultural policy is more than a technology of economic growth; it must also mean care for culture, as the site of a self and a social formation in which a certain access to truth and meaning is made possible. This is surely what Raymond Williams meant by the idea of culture, and it also emerges in the late Foucault. 12 Caring for culture means making a judgement; the grounds of that judgement are inevitably contestable and contested, but they have to include the economic conditions which make that culture possible, which may also threaten to make it thinner, poorer, subservient.
There are rapid transformations in progress as cultures and economies morph and fold with digital communications and globalisation. Making the case for cultural values not against but within these processes is a complex and difficult task, and it does no service simply to promote 'economic growth' and dismiss the rest as arts elitism. In his arguments against neoliberalism, Flew tells us there are many forms of capitalism; accordingly, it seems perfectly reasonable not to want the neoliberal version and not thereby be classed a romantic revolutionary. Indeed, the treatment of economics in Flew's book is mostly superficial. It includes statements such as 'Human societies have always engaged in consumption' (110) or 'the study of markets is characteristically the domain of economics' (115), which are either banal or meaningless in a book dealing with the changing relations of culture and economics. What cultural economy has taught us is that these are highly historically specific and constructed entities. This does not mean they can be reconstructed at will; it does mean that understanding the specifics of, say, a market transaction, involving all manner of market 'devices' and subjectivities, needs to be undertaken in a critical, clear-eyed manner, not assumed as an eternal and inevitable social reality to which critical theorists must bow down.

If the creative industries agenda is really to get to grips with the complexities and contradictions of the contemporary cultural economy then it needs to start thinking again, not just - as this book does - devote its attention to demolishing its critics. CCI's celebratory brand of 'creative industries' fails to engage productively with the multiple and various critiques that have been thrown up against it, merely dismissing opponents as 'Marxists'. Defending creative industries by disavowing the legitimacy of critical thought is now surely dysfunctional. Without the moment of critique, Flew's model of creative industries (and that of CCI of which it forms part) becomes one-dimensional and limited. Everything it espouses is necessarily good and wholly good at that (the internet, mobile phone apps, teen pop music, reality television, porn) and, conversely, everything that does not fit its model of the good is bad and irredeemable, such as 'high culture', which is malign, elitist, anti-democratic and, worst of all, not sufficiently commercial. This CCI model requires the high-low cultural divide, even as it dismisses it. Once established in its dichotomy, we know that everything good is completely good and the rest is redundant, backward and not part of the future. This is the guiding theme not only throughout Flew's book, but throughout all of the CCI discourse and its treatment of favoured items of study and scrutiny. It is ideological in the older sense of the word; it lines people up on each side of a creative-commercial divide, where we immediately know which side is correct - the side of the future!

Critique certainly needs to work with the materiality of the 'real', not just set up a transcendental ideal to which the real must aspire. But those who would stress 'reality' must also acknowledge the highly constructed nature of that 'real' to which thinking and writing contribute. Equally, critique must challenge the exclusive right of power to set the terms on which that real is engaged. Recently, a UK government minister responded to someone who said a university without philosophy is not a university with the words: 'then we will call it something else'.
13 It recalls Bruno Latour's quote from Ron Suskind's encounter with a US aide: 'That's not the way the world really works anymore,' he continued. 'We're an empire now, and when we act, we create our own reality. And while you're studying that reality - judiciously, as you will - we'll act again, creating other new realities, which you can study too, and that's how things will sort out. We're history's actors ... and you, all of you, will be left to just study what we do.' 14

Critique is not the posturing of a radical persona who knows the truth behind the ideological veil, as Latour tellingly argues. 15 Access to that truth demands real thinking, of the kind that challenges the self as well as the object. In The Uses of

Hutton clarifies: Expressive value (in the sense of symbolic value) is represented in software programs and video games such as the Grand Theft Auto and Metal Gear [and in] the range of user-generated material around on the internet. [Quoted in Flew (27)]

To take a small smattering of examples from a voluminous literature, neo-liberalism has been associated with: the rising popularity of Bollywood-style weddings (Kapur, 2009); the prevalence of violence in recent Australian cinema (Stratton, 2009); the financial difficulties of the University of California (Butler, 2009); the death of politics (Giroux, 2005); standardized national educational curricula and national testing (Apple, 2004); the privileging of access to databases over space for books in Australian public libraries (McQueen, 2009); and the performative sexuality of the character of Mr. Garrison in the animated comedy series South Park (Gournelos, 2009). (178)
System Based Interference Analysis in Capella

Abstract

In embedded systems the emergence of System on Chip (SoC) offers low-cost, flexible and powerful computing architectures. These new COTS capabilities enable new applications in the aerospace domain, with more integration of avionic functionalities on the same hardware. The main drawback of such integration is the difficulty of mastering the application's deployment on the SoC architecture, while understanding miscellaneous emerging behaviors. Model Based Engineering techniques have been introduced to assist in system analysis at early stages of the development process. For instance, Capella [BVNE] is a tooled language to support the design of system architectures (http://polarsys.org/capella). Capella helps to provide a consistent view of system architecture. However, Capella is not satisfactory for understanding emerging behaviors. For instance, it is not useful for understanding how the deployment of different tasks (and their parameters) on different computing resources impacts conflicts (interferences) on the interconnect between computational resources and memory. This problem is increasingly important with the integration of various functionalities. We propose to address this problem at different levels. First, we equip Capella models with two kinds of reasoning capabilities. The first one is based on the worst-case analytic evaluation of the interconnect interferences of a specific deployment (easy to compute but pessimistic).
The second one is based on (exhaustive) simulation and provides accurate interconnect interferences (more computationally intensive than the analytic method, but accurate). These reasoning capabilities help the designer considerably, but he still has to explore several potential solutions by hand. To help him, we propose a small DSL to express the exploration space over which the former reasoning can be performed automatically.

Introduction

The aerospace domain has a long tradition of dedicated hardware and software, tailored to space conditions and designed to be more resilient to SEUs (Single Event Upsets). Such design reduces the capabilities of embedded hardware and in many cases forces some of them to be disabled (for instance, processor caches are disabled because they are very sensitive to SEUs). Nowadays, better control of fault management techniques opens the door to Commercial Off-the-Shelf (COTS) hardware components for satellites. This makes it possible to define new computing architectures based on already existing hardware to enable the integration of various avionic features on the same hardware. These new architectures are essential for new low-cost satellites. However, beyond the study of fault tolerance mechanisms (important but not the focus of this paper), the computing power introduced by the use of COTS and the need to group various functionalities on the same hardware board make it more difficult to control the deployment of the application on the architecture. For instance, it becomes more difficult to understand miscellaneous emerging behaviors such as, for example, the emergence of temporary bus congestion due to unexpected synchronizations between different tasks.

Model Driven Engineering [Sch06] (MDE) has been introduced to assist in system analysis at the early stages of the development process. MDE is increasingly being used and is nowadays a common practice in many software-related disciplines [WHR14]. For example, Capella [BVNE] is a tool-based open source language that supports system architecture design (http://polarsys.org/capella). It was introduced by Thales 1 and is used in many other companies. Capella is a great help in providing a consistent view of the system architecture, which can be reviewed, shared, etc. Despite these interesting features, Capella is not yet fully equipped to assist system designers with the understanding of emerging behaviors. This is mainly due to the generality of Capella, encompassing various disciplines, which forbids the definition of a full operational semantics from which simulation and behavioral analysis can be conducted. For instance, when defining new software and hardware architectures for satellite systems, it remains difficult to understand the impact of architectural choices at the early stages of the development process. This makes it impossible to explore different deployment and configuration solutions early on. In other words, despite the use of Capella models (i.e., MDE), it is nowadays difficult to assess the adequacy between the application and the architecture at an early stage of the development process. This is a fortiori the case when new computing resources are required to integrate new functionalities on the same hardware platform.
Note that this is not a pure classical scheduling problem (although that still needs to be resolved) but rather a communication scheduling problem, since the interconnect between the different computational resources and the memory becomes a potential bottleneck that must be used wisely. To do so, it is important to adjust (1) the deployment of the different tasks of the system on the right computational resource and (2) the scheduling of their communications so as to avoid interference on the interconnect; i.e., to prevent communications initiated by different tasks from using an interconnect at the same time, for instance by delaying the start of some communications to specific points in time. Of course, such decisions should be consistent with more traditional scheduling analysis, i.e., with respect to periods and deadlines.

What we present in this paper is the use of Capella models to allow the exploration of different architectures, in terms of deployment and parameterization, with regard to interference on the interconnect. The models we use are suitable for defining both hardware and software architectures, as required in the context of the ATIPPIC project (a collaborative industrial project). The main contribution is the development of a semantics adapted to two different but complementary approaches to compute the level of interference on the interconnects of a system. The first approach defines an analytical method from which latency bounds can be obtained for an interconnect. While pessimistic, this gives a coarse-grain idea, at low cost (i.e., without expensive computing), of possible interference on the interconnect. The second approach defines an operational semantics for Capella (based on the GEMOC Studio). Based on this semantics it is possible to run simulations, potentially exhaustive ones. These simulations allow computing interconnect usage and latencies in task communications due to interference. While it requires simulations, this method provides a fine-grain understanding of interference on interconnects. Finally, based on each of these methods, we provide another small contribution: a small Domain Specific Language (DSL) with which it is possible to specify the domain of the parameters we want to explore. We then automatically generate the different models for these domains, simulate them and provide a representation of the results to help the designer choose the appropriate configuration.

The modeling concepts and technologies used for this study are described in section 2. A simple running example, serving as a tutorial for the proposed methods, is presented in section 3. Section 4 describes the implementation of the analytic and operational solutions and discusses the design exploration extension, and section 5 demonstrates the evaluation on the ATIPPIC avionic use case. Finally, section 6 documents the state of the art of such approaches, before concluding in section 7.

Modeling Technologies

Capella

Capella is an open source Model Based System Engineering (MBSE) solution hosted in the PolarSys working group of the Eclipse Foundation 2. Capella provides formalisms and toolsets that implement the ARCADIA method developed by Thales [BVNE, Roq16]. The method defines a four-phase workflow: operational analysis and system analysis to identify operational and system-level needs, and logical and physical architectures to identify components that meet these needs.
For each phase of the workflow, Capella provides a set of diagrams to support system description, such as the functional data flow diagram to describe functions and their exchanges, the functional chains diagram to identify functions necessary to realize a given requirement, and scenarios to describe a sequence of messages exchanged over time. In this paper we focus only on the Physical Architecture description and more precisely on the Physical Architecture Blank diagram (PAB). Indeed, PAB provides a suitable syntax for hardware and software co-modeling. However, as we target low-level details of a hardware architecture, we still need to complete the model with micro-architecture-specific information that is not covered by the standalone Capella meta-models.

KitAlpha

Kitalpha 3 is a set of Eclipse plugins, based on Capella, that allows Capella models to be extended with domain-specific information. Also hosted in the PolarSys repository, it enables customization of the Capella syntax for a specific viewpoint. Developing a viewpoint allows specific concerns to be described on top of Capella's generic ones. For instance, this mechanism was used to define the specification of fault tolerance mechanisms directly in Capella 4.

Gemoc Studio

The GEMOC Studio is a set of Eclipse plugins that provides generic components through Eclipse technologies for the development, integration, and use of heterogeneous executable modeling languages 5. It embeds a set of metalanguages that allow the operational semantics of these languages to be defined. When a semantic definition is provided, it automatically generates an interpreter, a fully aware debugger and, recently, a compiler.

Modeling Interferences

The performance and deadlines of an embedded system are mainly affected by the communication scheduling of the application and in particular by possible interference on access to shared hardware resources. We focus our approach on data memory transactions because they are the major factor in the communication scheduling of an application. What we call interference is the result of concurrent access to a bus. From a task's point of view, interference produces latency at the bus communication interface, i.e., memory transactions are delayed. In the following, we define bus interference as the duration during which more than one task attempts to access the same bus. The duration is computed over a hyperperiod during which each task that uses the bus is executed at least once.

For example, let's consider two tasks on two different computing resources, each with a 1 ms period. Both start their execution by reading data from memory for 30 µs. The first task t1 runs for 200 µs and produces data to memory with a transaction that takes 20 µs. The second task t2 runs for 350 µs and produces data to memory with a transaction that takes 40 µs. In this case, both t1 and t2 try to access the memory bus at the same time as they start: there is an interference. Either t1 or t2 accesses the bus first and the other access is delayed, in this case for 30 µs. The other communications (the writes) do not interfere since the execution times of the tasks are different. Therefore in this case the interference is equal to 30 µs. Note that the interference rate can be calculated by dividing the interference by the total transfer time over a hyperperiod; in this example, the interference rate = 30/(30+30+40+30) = 30/130 ≈ 23%. Such high-level analysis is important because memory and interconnect components can be the bottleneck in the communication scheme of the application.
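This definition is simple enough to sketch in a few lines. The following Python snippet (a minimal illustration of the example above, not part of the tooling described in this paper) serializes the bus transactions of the two tasks first-come-first-served and accumulates the waiting time:

```python
def interference(transactions):
    """Total waiting time (µs) when the bus serves requests first-come-first-served."""
    busy_until = 0.0
    waited = 0.0
    for start, duration in sorted(transactions):
        begin = max(start, busy_until)   # delayed if an earlier transaction holds the bus
        waited += begin - start
        busy_until = begin + duration
    return waited

# Transactions of the example on the shared memory bus, as (start, duration) in µs:
# both reads at t=0 (30 µs each), t1's write at 30+200=230, t2's write at 30+350=380.
txs = [(0, 30), (0, 30), (230, 20), (380, 40)]
total_transfer = sum(d for _, d in txs)                     # 130 µs
print(interference(txs))                                    # 30.0 µs
print(f"rate ≈ {interference(txs) / total_transfer:.0%}")   # ≈ 23%
```

The two reads request the bus at the same instant, so one of them waits 30 µs; the two writes fall in disjoint windows and add no waiting time, reproducing the 30/130 ≈ 23% rate computed above.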
So it is important to master the bus performance in a chip to detect and minimize the local interference. In this paper, we use the simplified example of Figure 1 to illustrate the two approaches we propose for interference analysis. Since it is not representative of real-world software and hardware architectures, a concrete use case is presented later. The example was built with the Capella tool, using the PAB diagram extended with a Kitalpha viewpoint (see section 4.1). The hardware architecture is composed of four Physical Components: two represent CPUs, the third is an interconnect and the last one is a memory. These Physical Components are connected by Physical Links, which are considered as bus connections in our case study. In addition, we allocate two Physical Component Behaviors to represent the software tasks, one on each CPU. The data dependency between the two tasks is abstracted by the Component Exchange Link that directly connects the Output Flow Port of Task1 to the Input Flow Port of Task2. These output and input ports are then allocated on Component execution Physical Ports, and the data dependency is explicitly translated into two transactions carried by two Physical Paths. The first Physical Path, in red, consists of two Physical Links and connects CPU1 to the memory, passing through the interconnect. It represents the transaction path for writing Task1 data to memory. The second Physical Path, in blue, is also realized by two Physical Links and connects CPU2 to the memory, passing through the interconnect. It represents the transaction path for reading Task2 data from memory.

More generally, without considering all the Capella formalisms, Task1 and Task2 are both periodic (respectively 20 ms and 30 ms) with offsets (respectively 0 ms and 7 ms). Task1 executes on CPU1 for a certain time interval (between BCET 9 ms and WCET 12 ms), then writes data (5 MBytes) to Memory using the cpu1_to_interconnect (red link) and interconnect_to_memory (black link) buses. Task2 is allocated on CPU2, reads data (5 MBytes) from Memory through the interconnect_to_memory (black link) and CPU2_to_interconnect (blue link) buses and then runs for a certain time interval (BCET 5 ms and WCET 7 ms). The frequency and data width of the three buses are identical, 125 MHz and 8 Bytes, enabling a bandwidth of 1 GBps. The execution scenario described is similar to the producer-consumer problem, replacing the shared variable by a shared resource, depicted in this example by the interconnect_to_memory bus (black link). In the following, and for the sake of readability, we will avoid Capella naming and refer to diagram elements only by the name of the concepts they represent.
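To make these figures concrete, here is a back-of-envelope sketch of the running example (my own illustration, not the analytic method defined later in the paper; it assumes decimal megabytes, so one transaction takes exactly 5 ms on the 1 GBps bus). In the spirit of a worst-case analysis, each transaction is widened into the window during which it *may* occupy the shared interconnect_to_memory bus, given the BCET/WCET uncertainty:

```python
from math import lcm

# Bus bandwidth: 125 MHz x 8 Bytes = 1 GBps (assuming 10^9 bytes/s and 5*10^6-byte payloads).
BUS_BPS = 125e6 * 8
tx_ms = 5e6 / BUS_BPS * 1e3                # one 5 MByte transaction = 5 ms

hyper = lcm(20, 30)                        # 60 ms hyperperiod: 3 jobs of Task1, 2 of Task2

# Windows during which each transaction may hold the shared bus:
# Task1 writes after executing for 9-12 ms, at releases 0, 20, 40 ms;
# Task2 reads immediately at its releases 7 and 37 ms.
t1_writes = [(r + 9, r + 12 + tx_ms) for r in range(0, hyper, 20)]
t2_reads = [(r, r + tx_ms) for r in range(7, hyper, 30)]

def overlap(a, b):
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

worst = sum(overlap(w, rd) for w in t1_writes for rd in t2_reads)
print(f"transaction ≈ {tx_ms:.1f} ms, worst-case shared-bus overlap ≈ {worst:.1f} ms per {hyper} ms")
```

Because the windows are widened to cover the whole BCET/WCET range, the resulting overlap (3 ms here, coming from Task1's first write against Task2's first read) is a pessimistic bound, which is exactly the trade-off the analytic approach accepts in exchange for cheap computation.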
Proposition

We focus on specific parts of the Capella model, namely the Physical Architecture. The interesting elements of the Physical Architecture are represented in PAB diagrams, which represent the allocation of functions into behavioral components mapped to hardware execution components. Applied to a SoC, this represents on the one hand the mapping of behaviors defined in software components to general-purpose processors, and on the other hand the mapping of behaviors defined in hardware IP to hardware execution support (e.g., the Programmable Logic part). In other words, such a diagram describes the deployment of a software architecture on a hardware architecture. However, Capella's expressiveness is not rich enough to describe all the domain-specific properties required by our analysis. Consequently, in order to define their execution semantics, we identify below a set of concepts on which the properties for analysis are defined. Note that these concepts are mainly inspired by the UML Profile for MARTE [OMG09], especially the Hardware Resource Modeling (HRM) and Software Resource Modeling (SRM) packages, and are mapped onto Capella (see section 4.1):

• Execution Components (tasks): components representing software or hardware implementations of tasks. They are the initiators of data traffic. These components follow a read-compute-write semantics, meaning that all reads from memory are done before the computation and all writes to memory are done after the computation.
• Computation resources: general-purpose processors and programmable logic units, on which tasks are allocated. They provide computational capabilities and communication interfaces for tasks.
• Communication resources: components responsible for routing and transferring data, for instance buses, interconnects and IO interfaces.
• Memory resources: components representing data storage. They are sources or destinations of transactions, used as data storage and data exchange zones for tasks.
• Sensors/actuators: components respectively representing first data providers and last data consumers.

Based on these concepts, we select for each of these components a set of properties and parameters that are relevant for bus performance analysis. Conflicts on the interconnect occur when two or more data memory transactions use the same bus at the same time. Thus time properties have a significant impact on interferences and associated latencies. We consider timing properties related to the concepts previously identified which have an impact on the communication scheduling. We propose the following main properties, organized according to the concepts defined previously:

• Component execution (tasks):
- Trigger: this enumeration defines whether a task is triggered periodically or upon receipt of a specific event on a task port.
- Period: when the task trigger is periodic, its period specifies the task execution rate.
- Offset: when the task trigger is periodic, the offset represents the date from which the task is executed.
- ExecutionTime: defines the duration for which the task is executed. It is defined by an interval specifying the best and worst execution times (BCET, WCET).
- DataSize: defined by inputDataSize and/or outputDataSize, which respectively define the size of the data read before the execution and the size of the data written after the execution.
- DataPath: the union of an input data path and an output data path, specifying the succession of buses (the route) used respectively to read and write data from and to a memory.
• Communication resources (buses):
  - InterfaceSize: Defines the size of the packets that can be sent by the communication resource.
  - Frequency: Represents the speed at which packets are handled by the communication resource.

• Sensor: Holds the same properties as a component execution, but only those related to data production.

• Actuator: Holds the same properties as a component execution, but only those related to data consumption.

Finally, the concepts identified previously but not carrying properties (e.g., Memory resources) are used to ensure the structural correctness of models. For instance, we can check that a data path starts or ends at a memory resource, thus preventing two components of other types from communicating without storing data in memory. Note that such a path can be derived from the existing notion of Physical Path in Capella. The same kind of structural correctness can be verified on sensors and actuators.

Extending the Capella model

In Capella, to describe the components and their set of properties, we specialize the abstraction level of PAB objects to include the missing information. To do so, the Kitalpha tool is used to generate a viewpoint extension. On the running example of Figure 3, we distinguish three different kinds of Physical Components: computation resources for CPU 1 and 2 (in blue), a bus controller for the interconnect (in orange) and a memory resource (in red). Physical Links are mapped on buses and Physical Components Behavior are mapped on component executions, as depicted in section 3. In addition to properties, Kitalpha also provides the possibility to extend the model by defining operations describing the actions associated with and realized on each element. These operations are needed for the simulation of operational scenarios. Figure 2 shows the tab view generated by Kitalpha for setting component properties and calling behavioral methods. In our study, we only implemented a subset of the hardware and software components and properties, sufficient to cover our analysis. We are aware that, for the sake of compatibility and standardization, a better solution would be to rely on an existing solution (e.g., by using the MARTE profile as implemented in Time4Sys). This may be done in the future, but it requires efforts to adapt the behavioral semantic layer to the selected profile.

Analytic Solution

The first approach we propose is based on an analytic evaluation of the effect of memory transactions on the architecture, to estimate the interference (latency) on software elements (tasks) and hardware elements (buses), and to evaluate the bus occupation. Only a static context is considered, meaning the following assumptions on memory transactions: 1) all transactions are atomic; 2) the interconnect arbitration is not considered; 3) all transactions crossing a bus port interface are concurrent. As a consequence, performance properties like delays and interferences are computed under a worst-case hypothesis. While possibly pessimistic, the calculated values can be used as bounds to get a first idea of the design performances all along the design activity. The computation of the various performance properties is defined as follows. We first provide intermediate definitions used as helpers in the computation:

• let T be the set of all tasks in the system.

Operational Solutions

The second approach we propose is based on model simulation. As we want to reason at a high level of abstraction, we use the Capella model as a reference for our simulation, which allows us to evaluate interferences from the model parameters.
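Before detailing the operational solution, the analytic computation can be made concrete with a short sketch. All names below are ours, not part of the Capella/Kitalpha viewpoint, and the bound shown is one pessimistic reading of assumptions 1)-3): each transaction is charged the transfer time of every other transaction sharing the bus.

```python
# Illustrative property model plus worst-case delay and load computations.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Bus:
    name: str
    interface_size_bytes: int            # InterfaceSize
    frequency_hz: float                  # Frequency

    @property
    def bandwidth(self) -> float:        # bytes per second
        return self.interface_size_bytes * self.frequency_hz

@dataclass
class Task:
    name: str
    period_s: Optional[float]            # None for data-triggered tasks
    offset_s: float = 0.0
    exec_time_s: Tuple[float, float] = (0.0, 0.0)   # (BCET, WCET)
    input_data_size: int = 0             # bytes read before executing
    output_data_size: int = 0            # bytes written after executing
    input_path: List[Bus] = field(default_factory=list)
    output_path: List[Bus] = field(default_factory=list)

def uses(task: Task, bus: Bus) -> bool:
    return bus in task.input_path or bus in task.output_path

def bytes_on(task: Task, bus: Bus) -> int:
    return ((task.input_data_size if bus in task.input_path else 0)
            + (task.output_data_size if bus in task.output_path else 0))

def worst_case_delay_s(task: Task, bus: Bus, tasks: List[Task]) -> float:
    """Own transfer time plus one transfer of every other task on `bus`."""
    return sum(bytes_on(t, bus) for t in tasks
               if t is task or uses(t, bus)) / bus.bandwidth

def load(bus: Bus, tasks: List[Task]) -> float:
    """Fraction of the bus capacity consumed by the periodic traffic."""
    return sum(bytes_on(t, bus) / (bus.bandwidth * t.period_s)
               for t in tasks if t.period_s)

# Running example: both tasks cross interconnect_to_memory.
shared = Bus("interconnect_to_memory", 8, 125e6)
t1 = Task("Task1", 0.020, 0.0, (0.009, 0.012),
          output_data_size=5_000_000, output_path=[shared])
t2 = Task("Task2", 0.030, 0.007, (0.005, 0.007),
          input_data_size=5_000_000, input_path=[shared])
print(worst_case_delay_s(t1, shared, [t1, t2]))  # 0.010 s (5 ms + 5 ms)
print(load(shared, [t1, t2]))                    # ~0.417
```

On the shared bus of the running example, this pessimistic bound yields a 10 ms worst-case occupation per transaction and a load of about 42%.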
To implement this simulation, we used the GEMOC Studio facilities to define an operational semantics of the Capella PAB diagram for interference estimation. In our model, communications are initiated by tasks. In most cases, a task reads data from memory, executes, and writes back its results to memory. Some tasks are scheduled periodically and others are triggered as soon as their input data are available. We do not consider a fixed execution time for tasks; instead, the execution time is randomized between its best and worst cases. However, on a read or write access, the bus may be busy with another on-going transaction. The task must then wait for the bus until the current transaction is complete, which generates latency. Thus, a communication bus can only process one transaction at a time; it is considered busy until the end of the transaction.

Encoding the behavior of the system under Gemoc

In the Kitalpha viewpoint, we first need to define the set of operations which describe the system behaviour. In the case of a component execution (task), we define the following operations: start(), stop(), execute(), read(), write(), waitForRead() and waitForWrite(), as shown in Figure 2. Once these operations are defined, we define the actions to be performed in each operation and the dynamic information required to monitor the evolution of the system during execution. To illustrate this, let us consider the case of a task waiting for a bus: the wait() operation is executed, increasing the bus latency counter and updating the value of the bandwidth. In Gemoc Studio, dynamic information is called Runtime Data (RTD), and the execution functions, defined as Domain Specific Actions (DSA), are implemented in Kermeta. A DSA implements the execution semantics of an operation defined over the system data.

The second step is the implementation of the control flow semantics. We first define the Domain Specific Events (DSE) which trigger the execution functions (from the DSA). We may have several instances of a concept (e.g., in the running example of Figure 1 there are two instances of task). For each instance, Gemoc generates a Model Specific Event (MSE). For example, in the context of task, we define a start, stop, execute, read, write, waitForRead and waitForWrite DSE. Applying this to the running example, in which we have two task instances, Gemoc will generate an instance of the corresponding DSE for each task as follows: MSE_Task1_start, MSE_Task2_start, MSE_Task1_stop, MSE_Task2_stop, etc. An MSE is an ordered set of event occurrences that will execute the associated DSE function instances.

To ensure a correct execution of the system, the MSE occurrences should trigger the execution functions in a specific order (e.g., the start of a task occurs before the execute). This order is obtained by setting constraints between DSE events, using specific invariants defined in CCSL [And09] (Clock Constraint Specification Language) and in its MoCCML (Model of Concurrency Modeling Language) extension. By reasoning on the temporal properties of the DSE, these two languages allow us to define the order in which MSE events occur. MoCCML expressions can be implemented as automata for more complex scenarios. The built model semantics includes six types of tasks, each type having its own execution semantics. Four of them are scheduled periodically and the two others are data-triggered. For the task execution semantics, the event schedule is defined by the physical time requirements (e.g., an execution time between a BCET and a WCET given in µs).
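The RTD bookkeeping and the per-instance MSE expansion described above can be pictured with a small mock-up; in the real tooling the RTD and execution functions are Kermeta code tied to the Kitalpha viewpoint, so every name below is illustrative only.

```python
# Mock-up of the Runtime Data of a bus and of the DSE -> MSE expansion.
class BusRTD:
    """Runtime Data attached to a bus instance."""
    def __init__(self, name: str):
        self.name = name
        self.busy = False
        self.wait_ms = 0.0        # accumulated latency caused by this bus
        self.bytes_moved = 0      # basis for the observed bandwidth

def wait(bus: BusRTD, dt_ms: float) -> None:
    """Body of the wait() operation: executed while the bus is busy."""
    bus.wait_ms += dt_ms

# One Model Specific Event per (task instance, Domain Specific Event) pair:
DSES = ["start", "stop", "execute", "read", "write",
        "waitForRead", "waitForWrite"]
MSES = [f"MSE_{task}_{dse}" for task in ("Task1", "Task2") for dse in DSES]
print(MSES[:3])  # ['MSE_Task1_start', 'MSE_Task1_stop', 'MSE_Task1_execute']
```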
It is necessary to take physical time into consideration when defining the model semantics. The different task behaviors are described as follows (a wait can precede a read or a write when the bus is busy):

1. Starts, reads data from the bus, executes, writes data on the bus and stops; scheduled periodically.

2. Starts, reads data from the bus, executes and stops; scheduled periodically.

5. Starts, executes, writes data to the bus and stops; scheduled when its data dependency is satisfied.

6. Starts, executes and stops; scheduled when its data dependency is satisfied.

We implement the above semantics using two MoCCML automata, one for periodic tasks and another for data-triggered tasks. Each transition in the automata triggers a time event. The physical time is built according to the time event scheduling, with a resolution derived from our system requirement (1 µs). Figure 4 shows the automaton describing the periodic task behaviors. A periodic task starts in the Ready state and waits until its offset countdown reaches 0. Depending on the bus state (taken or idle), a task can read data from the bus, wait for the bus, or execute. In the MoCCML automaton, this is managed by taking one of the three transitions leading to the waiting_r state, the reading state, or the running state. Choosing one transition depends on its associated conditions and on the DSE event which is triggered. For instance, when transitioning from ready to reading, the conditions are that the task is of type 1 or 2 and that dataInputSize > 0, and the associated event is the read MSE of the task. However, we also have to constrain all the read and write occurrences on the same bus, as it is impossible to have more than one task communicating on the same bus. A CCSL constraint is used to exclude all the read and write combinations allocated to the same bus. In the context of the running example, we exclude all occurrences of Task1_write from triggering simultaneously with Task2_read, forcing one of the two tasks to trigger its wait event.

Once a task is in the reading state, it triggers its read event and stays in reading until the bus transfer time is completed. When the task's read ends, it enters the running state by triggering the execute event of the transition. As the execution time lies within the [BCET, WCET] interval, the execution timing event is randomized, leading to different execution times from one period to another. The writing state is similar to the reading state, with the possibility of going into a waiting state if the bus is busy. Finally, the task enters the finished state by triggering the stop event and continues to consume time until it reaches the next period's schedule. In this MoCCML automaton, each transition event is linked to an MSE event. Triggering an event causes the execution of the associated execution function in the DSA, updating the runtime data and changing the system state.

Figure 5 - Results of simulation under the Gemoc environment

Simulation and results analysis

The operational method applied to the running example of Figure 1 evaluates InterferenceRate_bus and Load_bus by simulating the model. We have developed different scenarios in which we vary the value of Task2's offset. In some scenarios, tasks are no longer schedulable due to the interference effects induced by the task execution times, which vary randomly between BCET and WCET. For instance, if InterferenceRate_CPU1_to_Interconnect is greater than 15%, and if Task1 executes for 12 ms, then Task1 cannot end before its 20 ms deadline.
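The offset-variation experiment can be mimicked outside Gemoc with a toy first-come-first-served model of the shared interconnect_to_memory bus. The sketch below uses the running example's numbers and a fixed 5 ms transfer per 5 MB payload; the horizon and seed are arbitrary, and it is only an order-of-magnitude sanity check, not a re-implementation of the MoCCML semantics.

```python
# Toy FCFS model of the shared bus contention between Task1 and Task2.
import random

BCET1, WCET1 = 9.0, 12.0        # Task1 execution time bounds (ms)
PERIOD1, PERIOD2 = 20.0, 30.0   # periods (ms)
XFER = 5.0                      # bus transfer time for one 5 MB payload (ms)

def mean_wait(offset2: float, horizon: float = 6000.0, seed: int = 0) -> float:
    random.seed(seed)
    requests = []
    t = 0.0
    while t < horizon:          # Task1 asks for the bus after computing
        requests.append(t + random.uniform(BCET1, WCET1))
        t += PERIOD1
    t = offset2
    while t < horizon:          # Task2 reads first, so it asks at release
        requests.append(t)
        t += PERIOD2
    busy_until, waited = 0.0, 0.0
    for req in sorted(requests):
        start = max(req, busy_until)   # wait while the bus is taken
        waited += start - req
        busy_until = start + XFER
    return waited / len(requests)

for off in (0.0, 7.0, 9.0):
    print(f"Task2 offset {off:4.1f} ms -> "
          f"mean wait {mean_wait(off):.2f} ms per transaction")
```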
To validate the system requirements, we generate different simulation scenarios based on different hypotheses on the system parameters (task scheduling, data decomposition or aggregation), which allow us to estimate the InterferenceRate_bus and Load_bus values. Figure 5 shows the results of the simulation of the running example under Gemoc Studio for a configuration where the offset of Task2 is fixed to 9 ms. If we compare the results between the operational and the analytic solutions on the running example, we can conclude that 1) the non-schedulable scenarios cannot be detected by the analytic solution, and 2) the schedulable scenarios of the operational solution are always bounded by the values calculated by the analytic solution.

Design Space Exploration

In the previous sections we showed how we equipped Capella with analytic and simulation-based reasoning capabilities. Based on them, we can retrieve information helping the designer to evaluate the quality of a candidate system design. However, at an early stage of the development process, some characteristics of the system may not be totally known, for instance the exact kind of hardware bus and its performances. Also, the algorithms used by the tasks and their scheduling properties may change. If we consider a camera-based system, different compression algorithms may be used. Some of them take more time to compute but require less data to transfer (better compression), while others are faster but produce a larger volume of data. Exploring all these possibilities manually may be painful for a designer. Consequently, we propose a small DSL dedicated to defining the potential solutions to be explored. Then, from such a description, we automatically generate the models with the appropriate characteristics, which we can simulate (possibly in parallel) to obtain a whole set of simulation results. These results can then be explored in different manners. In our experiments we used Jupyter, a web-based notebook, to easily explore the results of the simulations.

Figure 6 - Abstract syntax of the little DSL for Design Space Exploration

DSL for Domain Space Exploration

Our DSL, whose abstract syntax is represented in Figure 6, is adaptable to any EMF model and proposes to represent the range of variation of different model attributes. This DSL remains very simple for our current use but is subject to several evolutions. An Exploration imports a model by using an ImportStatement. This gives access to all the elements of the model to be explored. An important point is that an exploration defines the languageName with which the exploration must be conducted. This is important since it indicates which semantics should be applied to drive the simulation. Then, for each element in the domain to be explored, the user creates a PropertyVariation with a reference to an Attribute from the initial model, in a reference named variableProperty. Finally, a VariationDomain on this property is defined as a Float interval. This clearly means that only Float-compatible attributes can vary during our exploration. We have several extensions of this DSL under study (supporting different data types but also topology variation, for instance to represent different allocations of the tasks on the hardware), but this simple DSL was expressive enough for our initial ATIPPIC use case. To make it usable, we defined a textual concrete syntax using Xtext. An example of the use of this syntax is given in Figure 7.
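The abstract syntax just described can be mirrored in plain code to illustrate the configuration-generation step of the next section. The class and field names below follow the text (Exploration, PropertyVariation, VariationDomain, variableProperty, languageName); the sampling step and everything else are our assumptions, not part of the actual Xtext grammar.

```python
# Sketch mirroring the exploration DSL and enumerating configurations.
from dataclasses import dataclass
from itertools import product
from typing import List

@dataclass
class VariationDomain:              # a Float interval, sampled with a step
    lower: float
    upper: float
    step: float

    def values(self) -> List[float]:
        vals, v = [], self.lower
        while v <= self.upper + 1e-9:
            vals.append(round(v, 6))
            v += self.step
        return vals

@dataclass
class PropertyVariation:
    variable_property: str          # reference to an Attribute of the model
    domain: VariationDomain

@dataclass
class Exploration:
    language_name: str              # semantics used to drive the simulation
    model_uri: str                  # ImportStatement target
    variations: List[PropertyVariation]

    def configurations(self):
        """Cartesian product over all property domains."""
        names = [pv.variable_property for pv in self.variations]
        for combo in product(*(pv.domain.values() for pv in self.variations)):
            yield dict(zip(names, combo))

exp = Exploration("interferenceSemantics", "running_example.capella",
                  [PropertyVariation("Task2.offset_ms",
                                     VariationDomain(0.0, 15.0, 3.0))])
print(list(exp.configurations())[:3])
# [{'Task2.offset_ms': 0.0}, {'Task2.offset_ms': 3.0}, {'Task2.offset_ms': 6.0}]
```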
In the tooling, we made efforts to ensure that completion is enabled both for the choice of the language and for the navigation to the model attributes, helping users to write a correct exploration model.

Exploitation of an Exploration Model

Once a designer has defined an exploration model, it can be used to export a set of executable artifacts, each generated according to a different configuration of the model depending on the property variations defined in the exploration model. More precisely, based on the Cartesian product of the different values covering the exploration domain of each property variation, a model is generated and then compiled into a Java class allowing its monitored execution. By monitored execution, we mean that it is possible, by changing the arguments of the main function, to log values of interest during the simulation in a CSV format. With the set of executable artifacts, a script to launch the different executions is generated. During the execution of each model, a CSV file and a corresponding gnuplot script are generated, so that it is possible to have a first view of the results. Additionally, to ease the exploration of the results, we used JupyterLab, which provides, among other things, an easy way to generate a dedicated interface to browse the resulting CSV files according to the parameters. A screenshot of the resulting view is provided on the companion webpage.

Figure 7 - A simple example of use of our DSL

Case study

The avionic use case developed in the ATIPPIC project targets the market for low-cost earth observation or communication mini satellites cruising in Low Earth Orbit (LEO), built upon COTS (non Rad-Hard) components. The avionics supports standard features to perform its control and maintenance, and embeds a payload. The satellite control is operated with a star tracker sensor to determine the orientation (or attitude) of the spacecraft with respect to the stars, a GNSS receiver compatible with the GPS or Galileo bands to acquire its positioning, standard RF communication means for Telemetry/Telecommand (TM/TC) actions, and internal communication interfaces such as the CAN bus or SpaceWire (SPW), used to acquire sensor signals and to actuate the satellite propulsion or the energy management with solar panel orientation. The SPW or CAN communication allows the integration of an application payload. For the sake of the study, the avionic use case is completed with an earth observation payload application. This is a typical use case supported by the ATIPPIC On Board Computer (OBC), which offers extensive functionalities for payload integration, such as storage capacity through a mass memory implementation and fast RF communication links to download images to the ground through the Telemetry Image interface (TMI). The OBC architecture is based on a redundant architecture, out of the scope of the analysis and so not detailed here, embedding a SoC as the main computing processor. The objective of this study is to evaluate, at an early stage of the development of the system architecture, how to balance the control and application functions between the Programmable Logic (PL) part and the CPU of a SoC (to reduce implementation constraints on the PL area), by analyzing the timing requirements of the communication scheme to detect possible overloads of the internal bus communication.
The analysis is performed on a sub-part of the overall system architecture, comprising an on-board avionics part for the control and acquisition of the signals of three optical heads via SPW interfaces, to format the raw data performing the star tracker resolution, and to reconfigure the heads if necessary. The payload includes the acquisition of raw images through an SPW interface, data formatting and compression for storage in an external flash mass memory (controlled by an integrated memory controller implemented in the PL area), and transmission to the TMI interface connected to an external RF TMI-X transmitter. The OBC electronics is built with a SoC from the Xilinx Zynq-7000 family offering PL capabilities. The SoC is decomposed into a Processing System (PS) area, including a dual Cortex-A9 processor in the Application Processor part, a central interconnect connected to a set of I/O devices, and a memory interconnect allowing access to the external DDR memory device. Note that DDR access is also possible directly from the L2 cache controller of the dual Cortex-A9 or from the central interconnect. The PL provides a user-configurable area to allow the integration of the required hardware IP components. The PL is connected to the PS via the central interconnect with AXI General Purpose ports (GPx) and via the memory interconnect with AXI High Performance ports (HPx) to access the DDR.

The use case architecture (see Figure 8) manages accesses to the DDR from different sources: from the PS by the Cortex-A9 cores via the SCU, from the PL area by the spw_IP addressing the AXI_HP0 port of the DDR_Interconnect, or by the spw_payload_camera_IP addressing the AXI_HP2 port of the DDR_Interconnect. In this architecture, interference may occur locally on the following components:

• AXI_HP2_Interco, for the image control, since the transfer of the acquired raw images (1.2 Mb acquired every 0.01 s) to the DDR, realized by spw_payload_acquisition, can be concurrent with the flash writing of the compressed image (compression factor 2) from the DDR to the NAND flash controlled by Memory_payload_manager.

• The SCU and the inside of the DDR controller, as the image compression Image_compression_payload and the image formatting spw_payload_driver can interfere with accesses to the image data, because they are assigned to different cores of the Cortex-A9 and executed asynchronously, the compression process being slower than the driver's image formatting.

This design choice is motivated by future extensions of the image processing functions, and by the reservation for new functions implemented in the PL area but not described in this use case. The timing requirements of the architecture must be evaluated, and we provide the means to assess them early in the design by identifying the software latency on tasks, DelayTime_task, and the AXI bus hardware latency and load, respectively Interference_bus and Load_bus. This allows us to challenge our design, offering the advantage of relaxing the PL occupation size, and giving the means to compare alternative solutions. Note that alternative solutions may also vary depending on the configured scheduling of the software tasks, or on the reassignment of software tasks into the PS. In the two graphs of Figure 9, we compare the average bus interference values (the total interference duration on each bus divided by the number of transactions achieved by the bus) generated for two different approaches to image compression. In the first strategy, we consider a 2-by-2 image compression strategy. The second strategy compresses the images by groups of 4.
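Before comparing the two strategies, the acquisition figures quoted above give a sense of the loads at stake. The sketch below assumes that "2-by-2" means compressing images in groups of 2 (versus groups of 4) and reuses the stated numbers (1.2 Mb per raw image, one image every 10 ms, compression factor 2); the absolute values are indicative only.

```python
# Back-of-the-envelope view of the two image-batching strategies.
image_mb, period_s, factor = 1.2, 0.01, 2

for group in (2, 4):
    batch_mb = group * image_mb / factor    # compressed batch size (Mb)
    batch_period_s = group * period_s       # one batch per `group` images
    print(f"group of {group}: {1 / batch_period_s:.0f} runs/s, "
          f"{batch_mb:.1f} Mb per transaction, "
          f"{batch_mb / batch_period_s:.0f} Mb/s sustained")
# group of 2: 50 runs/s, 1.2 Mb per transaction, 60 Mb/s sustained
# group of 4: 25 runs/s, 2.4 Mb per transaction, 60 Mb/s sustained
```

Both strategies sustain the same average throughput; they differ in transaction size and rate, which is exactly what drives the interference difference discussed next.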
The first strategy executes more often than the second one and generates smaller transactions, while the execution rate of the second is lower and the exchanged data size is larger. The results show that the first approach generates interference only on the CortexToDDR bus (linking the DDR and the SCU of the Cortex-A9), less than the 4-image compression strategy which, besides generating interferences on the CortexToDDR, also generates interferences on the 2 other buses (the 2 buses linking the PL to the DDR through the interconnect). More information and simulation results based on the use case can be found on the companion webpage: https://project.inria.fr/interferenceanalysis/

Related Works

There exist many tools for network simulation (e.g., NS3 [Car10] or OMNET++ [VH08]). However, these tools are used for accurate simulation of network protocols and usually do not focus on the nodes, which are viewed only as traffic generators. Consequently, in the following we consider only platforms close to our domain (embedded systems). Also, we did not try to perform guided design space exploration as in [BZS18] or equivalent approaches, but these methods are compatible with ours, their fitness functions requiring a simulation or the evaluation of properties using the analytic method. Consequently, we do not compare to them.

Platform Architect, distributed by Synopsys, provides an industrial solution to perform SoC architecture analysis and optimization for performance and power. It is based on SystemC TLM libraries with accurate modeling of the interconnect, memory controller and virtual processors (accurate memory transaction definition and resource consumption). It allows, through simulation and trace analysis, the evaluation of the performance of a multicore or SoC architecture. This tool is used by SoC designers to optimize their designs. However, compared to our approach, the analysis requires strong hardware skills. It seems not adapted to the system design level and, to our knowledge, does not interface with MBSE system design methods and tools.

TTool/Diplodocus [AB12, AMAB+06, GA16] is a modeling tool based on UML/SysML, developed by Telecom ParisTech. One of its extensions is a tool for SoC partitioning that finds the best candidate software and hardware architecture for executing a set of functions. It encodes a semantics for software execution derived from task graphs and uses concepts for hardware definition (CPU, memory, interconnect, router...), whose parameters can be configured, allowing exploration. The simulation is done by translating the model elements to (predefined) SystemC blocks. Compared to our approach, it requires the predefined SystemC blocks and their execution environment, which is realized outside of the modeling tool. Consequently, it does not benefit from model-level simulation, debugging and exploration.

Conclusion

In this paper we have presented a use of model-driven engineering that enables reasoning on the bus interferences of an aerospace integrated architecture. We have identified the information required for such an analysis. The analysis is based either on an analytic method, which provides instantaneous but pessimistic results, or on a simulation-based method, which provides more accurate results but requires a simulation step. To help the designer in choosing the best parameters for their architecture, we provided a small DSL with which they can define an exploration space. The exploration space is used to generate different instances of the models from which reasoning can be conducted.
Results are then presented in a small dedicated interface in a notebook. Many future works are envisioned. Here are a few of them: the use of new design properties to allow more accurate results (caches, bus arbitration, DMA, etc.); the use of more powerful analyses based on exhaustive simulations to guarantee temporal properties; the introduction of stochastic information (about execution time or data size) to provide stochastic results; and the integration of the exploration DSL and/or its results in dedicated exploration tools based, for instance, on parallel coordinate charts.
Ultra-widefield fundus imaging in gas-filled eyes after vitrectomy

Background: To evaluate the quality of the images obtained by an ultra-widefield device in gas-filled eyes after vitrectomy for a retinal detachment.

Methods: Retrospective case series. The ultra-widefield scanning laser ophthalmoscopic images (Optos 200Tx imaging system) of 40 eyes whose vitreous cavities were 40 to 90% gas-filled after vitrectomy for a rhegmatogenous retinal detachment were studied. The rates of detecting reattachments and the treated causative retinal tears located in the superior or inferior areas were compared between eyes with intravitreal gas filling ≥60% of the vitreous cavity and eyes with intravitreal gas filling <60% of the vitreous cavity. The widefield images recorded with 532 nm (green) or 633 nm (red) wavelength laser light were compared to determine which wavelength gave clearer images in 20 eyes with a retinal detachment with superior retinal tears that were more than 50% gas-filled.

Results: The ultra-widefield images showed a retinal reattachment in all eyes on postoperative days 1 to 40 (mean 8.7 ± 7.5 days). A superior retinal break was not visible in 5 of 26 eyes due to a reflection from the intravitreal gas bubble when the gas was <60%. However, the superior retinal breaks were visible when the patients were requested to gaze downward to reduce the reflection of the gas bubble. The retinal breaks treated with laser burns and the retinal vasculature were imaged better with the green laser light, and the choroidal vasculature was seen better with the red laser light.

Conclusions: Ultra-widefield fundus images can be used to evaluate and document the retinal breaks and retinal reattachments in gas-filled eyes. The green and red laser lights can image different depths of the retina and choroid in gas-filled eyes.

Background

A rhegmatogenous retinal detachment is a separation of the sensory retina from the retinal pigment epithelium (RPE) caused by the entry of fluid into the subretinal space through retinal tears. Rhegmatogenous retinal detachments are treated by scleral buckling, pars plana vitrectomy, or a combination of both [1-5]. Scleral buckling has been the conventional treatment for retinal detachments, but recent developments in vitrectomy have made pars plana vitrectomy a more common treatment of retinal detachments. During the vitrectomy, different types of gas are injected into the vitreous cavity to tamponade the retina; however, the intravitreal gas can decrease the visibility of the retina, and reflections from the intravitreal gas bubble can make it difficult to observe and photograph the fundus.

Ultra-widefield imaging of the fundus allows clinicians to evaluate the retina far beyond the equator of the fundus in a single image [6-8]. The Optos ultra-widefield imaging system is a scanning laser ophthalmoscope which can obtain widefield images with scanning laser wavelengths of 532 nm (green) or 633 nm (red) [9]. The two images can be viewed separately or superimposed to yield a semi-realistic color image. The design of the ellipsoid mirror of the Optos makes it possible to obtain ultra-widefield images of 200 degrees horizontally without pupillary dilatation. In addition, the use of two laser wavelengths is advantageous over conventional imaging systems because the red laser penetrates deeper into the retina and the choroid, and the green laser light provides better images of the superficial layers of the retina and the retinal vessels.
Anderson and associates [7] presented a case report of a retinal detachment evaluated with the Optos ultra-widefield images prior to the surgery, after scleral buckling surgery with intravitreal gas injection, and after vitrectomy with a gas tamponade. They reported that the ultra-widefield fundus imaging system was capable of delineating the extent of the retinal detachment even in the presence of intraocular gas after the vitreoretinal surgery.

The aim of this study was to evaluate the efficacy of ultra-widefield imaging of the fundus to delineate the causative and treated retinal tears and reattachments in gas-filled eyes after vitrectomy for a rhegmatogenous retinal detachment. In addition, the efficacies of the green and red laser wavelengths in gas-filled eyes with superior retinal breaks were determined.

Methods

The medical records of 40 eyes of 40 patients who had undergone vitrectomy with a gas tamponade for a retinal detachment were reviewed. The patients were examined by indirect ophthalmoscopy and slit-lamp biomicroscopy with a non-contact or contact wide-angle lens to detect all retinal breaks preoperatively, and all the retinal breaks were treated with an endolaser during the vitreous surgery. Ultra-widefield scanning laser ophthalmoscopic images (Optos 200Tx imaging system, Optos PLC, Dunfermline, Scotland, UK) were recorded from dilated eyes in the primary position after the vitrectomy. The eyes were filled with different types of gas with a volume of 40% to 90% of the vitreous cavity.

In the initial investigation, the rate of detecting the causative and treated retinal tears and reattachments with intravitreal gas filling ≥60% of the vitreous cavity was compared to that with gas filling <60% of the vitreous cavity. The ability to detect the retinal reattachments was evaluated in eyes with retinal breaks in all quadrants, but the detection of the retinal breaks was evaluated only in eyes with retinal breaks in the superior or inferior areas. A positive identification of retinal breaks was defined as the identification of all the retinal breaks in a quadrant in eyes with multiple retinal breaks. The identification was made by two retinal specialists (MI, TK) who were masked to the patients' information, including the locations and numbers of the retinal breaks. The rates of detecting retinal tears and retinal reattachments in the images were evaluated. When the decisions did not agree, a third investigator (KH) examined and discussed the findings to make the final decision.

For the second investigation, the efficacies of the red and green wavelengths in evaluating the retina in gas-filled eyes were compared. To avoid the effects of the location of the retinal breaks and the volume of the gas, the ultra-widefield images of 20 eyes filled with a gas volume of more than 50% of the vitreous cavity and with superior retinal breaks were reviewed. These images were separated into the two component images taken with the 532 nm and 633 nm laser lights and exported as black-and-white images. The two images were examined to determine which wavelength gave better images of the superior retinal breaks and the retinal and choroidal vasculatures. The ability to detect the superior retinal breaks and the retinal and choroidal vasculatures through the intravitreal gas was scored into 3 grades: 2 = clearly detected, 1 = moderately detected, 0 = barely or not detected. The scoring was done by two investigators (MI, TK) who were masked to the patients' information, including the location and number of the retinal breaks.
Vitreous surgery was performed with 25-gauge instruments, and the retina was tamponaded after the vitrectomy with air, 20% sulfur hexafluoride (SF6), or 14% perfluoropropane (C3F8) gas. The volume of intravitreal gas was determined by one of the authors (MI) based on the level of the inferior gas meniscus at the retina observed with an indirect ophthalmoscope in a sitting position [10,11].

Results

Combined cataract surgery was performed on 29 eyes, lens-sparing vitrectomy was performed on 3 eyes, and 8 eyes were pseudophakic. Scleral shortening for a macular hole retinal detachment was performed on one eye [12], and laser in-situ keratomileusis (LASIK) had been performed on one eye prior to the vitrectomy. The Optos ultra-widefield images showed that the retina was reattached in all eyes regardless of the volume of the gas in the vitreous cavity, even in the 15 eyes filled with gas volumes of 80 to 90% (Fig. 1). Clear images were also obtained from one eye with severe nystagmus due to the morning glory syndrome and from one eye of a patient with intellectual disability. No new retinal breaks were found ophthalmoscopically or in the ultra-widefield images of the gas-filled eyes postoperatively. The detections of retinal breaks and retinal reattachments were identical for the two investigators (intraclass correlation coefficient (2,1), r = 1.00).

Identification of sites of the causative retinal tears and retinal reattachments

Ultra-widefield images were taken on postoperative days 1 to 40, with a mean ± standard deviation of 8.7 ± 7.5 days. The retinal tears were located in the superior retina in 20 eyes, the superior and inferior sectors in 6 eyes, the superior-temporal quadrant in one eye, the temporal sector in one eye, the inferior sector in 3 eyes, the peripapillary area in one eye, and in the macular area in 8 highly myopic eyes with a macular hole retinal detachment. The rate of detecting tears in the superior area was significantly higher in eyes with a gas volume of ≥60% (100%; 11 of 11 eyes) when the eyes were in the primary position than in eyes with a gas volume of <60% (67%; 10 of 15 eyes, P = 0.046, Fisher's exact probability test, Table 1). In the primary position, the incidence of detecting retinal tears in the inferior area of eyes with a gas volume of ≥60% (71%; 5 of 7 eyes) was not significantly different from that in eyes with a gas volume of <60% (100%; 2 of 2 eyes, P = 0.583, Fisher's exact probability test).

Retinal breaks in the superior area were not detected in 5 of 26 eyes due to reflections from the intravitreal gas bubble, and all of these eyes had an intravitreal gas volume of <60%. When the size of the gas bubble was smaller, the superior retinal breaks in 6 eyes were difficult to detect in the primary position due to the strong reflections of the gas bubble. However, the retinal breaks in these 6 eyes could be detected when the patients were asked to gaze downward to reduce the reflections from the intravitreal gas bubbles (sensitivity = 100%, specificity = 100%). Inferior retinal breaks were detected in 2 eyes when the gas bubble did not cover the retinal breaks and in 5 eyes through the gas bubble. In 2 other eyes, the inferior retinal breaks were not detected due to the reflection from the intravitreal gas bubble (sensitivity = 78%, specificity = 100%). The macular hole was not detected in any of the 8 eyes with a macular hole retinal detachment.
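As a side note on the statistics, the superior-area comparison above can be reproduced from the reported counts. The sketch below assumes the 2×2 table implied by 11/11 versus 10/15 detections; the text does not state whether the test was one- or two-sided, and the reported P = 0.046 matches the one-sided variant.

```python
# Reproducing the superior-area Fisher's exact test from the reported counts.
from scipy.stats import fisher_exact

table = [[11, 0],    # gas >= 60%: detected, missed
         [10, 5]]    # gas <  60%: detected, missed

_, p_one = fisher_exact(table, alternative="greater")
_, p_two = fisher_exact(table)           # default: two-sided
print(f"one-sided P = {p_one:.3f}")      # ~0.046
print(f"two-sided P = {p_two:.3f}")      # ~0.053
```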
The intravitreal gas was seen as a double-layered image with superior and inferior areas in 17 of 17 eyes with intravitreal gas volumes of 40 to 50% (Fig. 2). The two areas were not seen in 11 eyes with intravitreal gas of 80 to 90%, a rate significantly lower than that with gas of 40 to 50% (P < 0.0001, Fisher's exact probability test). The superior oval area of the gas bubble originated from the image seen through the gas bubble, but the inferior, banana-shaped area originated from the mirror image created by the reflection from the inferior retina and not through the gas bubble. The mirror images through the inferior banana-shaped area were confirmed in 15 eyes by the pattern of the retinal vasculature.

Comparisons of images with green and red laser lights in gas-filled eyes

Superior retinal breaks were identified in 18 of 20 eyes through the intravitreal gas with the green laser wavelength and in 17 eyes with the red laser wavelength (Fig. 3, P = 0.50, Fisher's exact probability test). In the other 2 eyes with superior retinal breaks, a retinal reattachment was identified, but the retinal breaks were not detected in either the green or the red laser wavelength images. The score for identifying superior retinal breaks was 1.6 ± 0.5 (mean ± standard deviation) with the green laser, which was significantly higher than that with the red laser, with a score of 0.5 ± 0.5 (P < 0.001, Mann-Whitney U test). The score for the retinal vasculature with the green laser was 1.4 ± 0.6, which was also significantly higher than that with the red laser at 0.7 ± 0.6 (P = 0.002). The score for the choroidal vasculature was 0.1 ± 0.3 with the green laser, which was significantly lower than that with the red laser at 1.7 ± 0.6 (P < 0.001).

Fig. 1 (caption, in part) b Postoperative image taken on the day after vitrectomy with 80% intravitreal gas showing that the retina is reattached. The ora serrata (white arrowheads) and ciliary epithelium can be seen through the gas. c Magnified images of Fig. b showing that the ora serrata (white arrowheads) and ciliary epithelium are visible. d Ultra-widefield image taken 6 months postoperatively shows retinal reattachment without gas tamponade, but the ora serrata and ciliary epithelium are not visible.

Fig. 3 (caption fragment) b shows retinal reattachment with less visible laser burns and less reflection of the gas, but the choroidal vessels are seen more clearly.

Discussion

The ocular fundus has been traditionally examined by monocular or binocular indirect ophthalmoscopy in combination with detailed drawings or panoramic photographs with a fundus camera. Ultra-widefield images of the fundus have recently been recorded with the Optos system, and these images can determine the extent of a retinal detachment even in the presence of 80 to 90% intraocular gas, as was found in this study. Ultra-widefield imaging has been used to record retinal detachments, monitor their repair, and follow the retinal status postoperatively [7]. Ultra-widefield fundus imaging is based on scanning laser technology in combination with a large ellipsoidal mirror. We assumed that scanning with a shorter wavelength laser light might increase the reflection of the intravitreal gas, but the fundus in the ultra-widefield images was clearly visible through the intravitreal gas with both wavelengths. Confocal scanning may reduce the reflection from the intravitreal gas.
The images through the intravitreal gas consisted of two components: the image of the retina through the gas in the superior area, and the mirror image of the inferior retina from the reflection of the gas surface in the inferior area (Fig. 4). The image through the intravitreal gas was minified, which enabled the ultra-widefield images to include a view of the ora serrata in some of the gas-filled eyes (Fig. 1b), a view not obtained through the vitreous fluid. The intravitreal gas was seen to be oval-shaped and the inferior area banana-shaped. The Optos ultra-widefield images have been described to be stretched 1.12-fold in the horizontal direction with respect to the vertical direction [13]. The ellipsoid mirror of the Optos ultra-widefield system may produce these horizontally elongated, oval- and banana-shaped images. The banana-shaped mirror images were seen when the intravitreal gas was less than 80% of the vitreous cavity, and a larger intravitreal gas bubble hid this mirror image below the image through the gas. We should be aware of this type of artifact in evaluating gas-filled eyes.

The red wavelength laser scans had better penetration into the deeper layers of the retina and the choroid, and the green wavelength gave better images of the surface layers of the retina and the laser burns [9]. The diagnostic capability of ultra-widefield imaging with two wavelength lasers has been suggested as a potential method to differentiate malignant ocular tumors from nonmalignant lesions with high diagnostic accuracy in clinically diagnosed melanocytic choroidal tumors [9]. Similarly, in gas-filled eyes, the two laser scans enabled an evaluation of the retinal images while maintaining the beneficial features of each wavelength. However, we evaluated only eyes with superior retinal breaks, because the detection of inferior retinal breaks was more dependent on the volume of the intravitreal gas. A macular hole was not detected in any of the highly myopic eyes with a macular hole retinal detachment. This may be because we did not perform laser photocoagulation on the macular hole, and the macular hole was often surrounded by depigmented chorioretinal atrophy.

The extremely large depth of focus of the ultra-widefield system allows the peripheral retina, the intraocular gas bubble, and the posterior pole to be in focus simultaneously in gas-filled eyes. A faster scan with the green and red lasers and ultra-widefield images with a greater depth of focus may allow the recording of retinal images in eyes with nystagmus or in patients with intellectual disability. However, it may be affected by peripheral lens opacities, corneal opacities including intracorneal implantation of a corneal inlay, cilia, and eyelids [28]. Postoperative cataracts caused by intravitreal gas in phakic eyes can reduce the clarity of the ultra-widefield images temporarily.

There are limitations in this study. This was a retrospective case series conducted in a single academic hospital. The number of patients was small, and each patient had a different type of retinal detachment and retinal breaks. The grading was essentially subjective, which might have affected the results. Therefore, prospective studies of randomized eyes with larger sample sizes may be needed to confirm these results.

Conclusions

Ultra-widefield fundus imaging is useful for evaluating postoperative retinal breaks and retinal reattachments in gas-filled eyes. The two wavelength light sources, which maintain the efficacy of different scanning depths, can also be used in gas-filled eyes.
These advantages are useful in documenting the clinical course before and after vitreous surgery with gas tamponade.
On-Site Chlorine: A Promising Technology in Drinking Water Treatment in Santa Cruz, Bolivia

Water availability and quality are still challenges around the world, but access to safe drinking water is essential for human development. This study analyzed the chemical parameters of drinking water quality in the Santa Cruz de la Sierra region of Bolivia. Residual chlorine, pH and the concentration of dissolved solids were measured in water supplied by drinking water and basic sanitation service providers (EPSA). The water quality results indicated that the water supplied met the requirements established by the Bolivian Standard NB 512 in terms of residual chlorine, pH and concentration of dissolved solids. However, a decrease in residual chlorine concentration was observed as the water moved away from the disinfection point. Microbiological testing is recommended to ensure the absence of viable organisms in the distributed water. In conclusion, this study highlights the importance of chlorination, as the only treatment performed in the study area, and of pH and the concentration of dissolved solids as indicators of drinking water quality. Automation of chlorination processes and continuous monitoring of these parameters are suggested to ensure a safe and high-quality water supply in the study area.

Introduction

Access to clean and safe drinking water is one of the most important challenges facing humanity in the 21st century. Water constitutes the vital resource for life; therefore, the supply of potable water is one of the essential services to ensure the health and well-being of the community [1-4]. Although significant progress has been made in this field in recent decades, there are still many parts of the world where safe drinking water is a critical unmet need [5]; particularly in low- and middle-income countries, access to safe drinking water and adequate water treatment is still scarce, leading to a number of public health and environmental problems [6]. In Santa Cruz de la Sierra, the water sources are underground and have stable physicochemical characteristics when the parameters are analyzed over the years. That is why the water treatment plants [7] only require a chlorination and/or disinfection system. Chlorination alone can ensure quality water in specific areas without an excess of minerals and/or suspended solids [8,9].

A widely used method for water purification in Bolivia is water disinfection through chlorination, alongside other industrial methods such as ozone and ultraviolet light applied to water. Chlorination can be carried out in different forms [10], using liquid chlorine, solid chlorine, or chlorine gas [11], all of which are efficient, with their corresponding advantages and disadvantages, including the formation of by-products. The method of choice for the water utility provider (EPSA) in the study area is liquid chlorine, supplied in large quantities each month. However, liquid chlorine tends to degrade over time, which leads us to propose in situ chlorination using the electrochemical method to produce hypochlorite, a promising and safe alternative for this process. This method employs electrochemical devices that generate active chlorine from salt dissolved in the water, eliminating the need to transport and store hazardous chemicals.
In this study, the research was carried out in the drinking water cooperative Pampa de la Isla (COOPAPPI), in the sector of well 7, corresponding to the Guápilo area, to evaluate the effectiveness and feasibility of in situ chlorination using the electrochemical method. Measurements were taken in the homes of users near the study well to determine the concentration of residual chlorine in the water after the disinfection process, confirming the efficiency of the method and compliance with Bolivian regulations [12].

Water is an essential element for human beings, but it is not available to everyone and is becoming increasingly scarce. The Bolivian standard NB 512 establishes physical, chemical, pesticide [13,14] and other parameters to determine whether drinking water is potable [15]. Given the importance of this regulation and the desire to preserve the health of the citizens of Santa Cruz, the production of sodium hypochlorite by electrolysis is proposed. This would allow water service providers (EPSAs) to include it in their water supply networks, ensuring that the water received by members has low microbiological levels, as indicated by Bolivian standards. In addition, periodic monitoring of the supply networks is proposed, since the responsible use of this chemical compound is crucial to avoid health problems [16,17].

Chlorination as a Water Treatment for Disinfection

In the process of disinfecting drinking water, three stages can be differentiated, in which different procedures must be applied.

Dosing

In this first stage, the amount of chlorine necessary to overcome the breakpoint is added. This ensures that the residual chlorine level is suitable for subsequent disinfection. In general, chlorine dosing is done in proportion to the flow rate of the water to be treated.

Disinfection-Storage-Maintenance

This stage takes place inside the storage tank and is the moment when the disinfection itself takes place. If the residence time is long, it is necessary to maintain a residual level of chlorine to ensure that no new microbiological contamination can occur. In order to carry out the corresponding chlorine supply, a monitoring and dosing system is necessary in the tank. The same working methodology applies to tanks that act as a buffer and receive water that has already been treated, or to a closed system such as a swimming pool.

Post-Chlorination

Once the water has left the reservoir and is distributed for use, additional chlorine may be required to ensure that the residual chlorine levels are as required at the point of consumption. These are rechlorination stations in extensive distribution networks. In this case, the in-line control equipment is of great importance, as it is ultimately responsible for maintaining the chlorine level.

Types of Chlorine

For the determination of the chlorine samples, the three types of disinfectant substances used in our country and authorized by the competent water authority were taken into account: liquid chlorine in the form of sodium hypochlorite, solid chlorine in the form of calcium hypochlorite, and chlorine gas as such.
Chlorination Prototype

The prototype developed, shown in Figure 1, has a capacity of 25 L, for which a voltage source and 6 electrodes were adapted to act as cathodes and anodes to carry out the chemical reaction using the electrochemical method, in which sodium hypochlorite is obtained through the passage of electric current in a saturated solution of sodium chloride. This electrochemical method guarantees obtaining chlorine in situ, so that the cooperative is supplied at the well and can generate the necessary amount for disinfection and dosage according to the required water flow.

For the operation of the developed prototype, a concentrated solution of sodium chloride is prepared and subjected to electric current, and the chemical reactions take place at each of the electrodes; the electrodes used are made of materials that are easy to obtain and clean. Laboratory tests were carried out to verify the stability of the hypochlorite obtained, through redox titration with potassium iodide, where it was evident that, by subjecting the concentrated brine solution to 12 Volts and 10 Amps, stability was achieved in the percentage of active chlorine; in addition, the time needed to obtain it is shorter compared to other voltage and amperage values. For this reason, it was decided to increase the number of electrodes available, to carry out the chemical reaction in a way that is faster and more efficient for the operator.

The chemical reactions that take place are as follows. In sodium chloride, chloride (Cl−) and sodium (Na+) ions are given by the dissociation:

NaCl → Na+ + Cl−

In water, hydrogen ions (H+) and hydroxyl ions (OH−) are present, as shown by the dissociation:

H2O ⇌ H+ + OH−

Under the passage of 12 V and 10 A of direct current through the graphite and stainless-steel electrodes, acting as cathode and anode respectively, the chloride ions are oxidized into chlorine with the release of electrons:

2Cl− → Cl2 + 2e−

This takes place at the anode.
At the cathode, hydrogen is converted into gaseous hydrogen when electrons are captured, leaving in the water the hydroxyl ions that bind to the sodium to form sodium hydroxide (NaOH):

2H2O + 2e− → H2 + 2OH−

The NaOH remains in solution with the rest of the brine that has not been consumed in the electrolysis and, in the presence of chlorine (Cl2), reacts with it to form sodium hypochlorite:

Cl2 + 2NaOH → NaOCl + NaCl + H2O

Obtaining the desired hypochlorite concentration allows us to maintain a residual free chlorine remnant in the drinking water network.

Portable Chlorine Meter

The pocket free chlorine meter, or checker, is a HANNA HI701: a digital colorimeter for chlorine tests with a measurement range of 0 to 2.5 mg/L of residual chlorine. It comes with two cuvettes with lids and HI701-25 reagents with DPD (N,N-diethyl-p-phenylenediamine) for precise and simple measurements, whose values are shown on the screen of the equipment 5 minutes after the reaction of the reagent with the sample. This equipment complies with the UNE-ISO 17381 standard for water quality. It has a 525 nm LED light source that, when activated, passes through the glass cell containing the sample with the reagent, which changes the coloration of the water; finally, the intensity of the light received by a silicon photocell is translated into a numerical value that indicates the concentration of residual free chlorine in the water in parts per million (ppm) (Hanna Instruments, Limena, Italy, 2018).

Sampling Points

The samples were collected at fixed sampling points near well 5 of the drinking water cooperative "Pampa de la Isla", located according to what is indicated in NB 512, point 24, which specifies that sampling points must be located in areas of high population density, in areas at risk of contamination, at points representative of the network, and at points near and far from the well of influence. For this purpose, the fixed sampling points in the service area of the well are summarized in Figure 2.
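Returning to the electrolysis cell, Faraday's law gives an upper bound on how much chlorine the prototype can generate at its 10 A operating current. The sketch below is an ideal-yield estimate only; the actual current efficiency of an undivided hypochlorite cell is well below 100%, and the one-hour run time is our assumption.

```python
# Ideal Faraday-law estimate of chlorine production at 10 A.
F = 96485.0        # C/mol, Faraday constant
I = 10.0           # A, operating current reported above
n = 2              # electrons per Cl2 molecule at the anode
M_CL2 = 70.9       # g/mol
M_NAOCL = 74.44    # g/mol (1 mol Cl2 yields at most 1 mol NaOCl)

mol_per_hour = I * 3600.0 / (n * F)
print(f"theoretical Cl2:  {mol_per_hour * M_CL2:.1f} g/h")    # ~13.2 g/h
print(f"as NaOCl, ideal:  {mol_per_hour * M_NAOCL:.1f} g/h")  # ~13.9 g/h
```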
Method

To perform the in situ chlorination tests, a prototype was developed, as shown in Figure 1. It consists of an electrolytic tank of 25 L in volume with a voltage source and 6 electrodes that act as cathodes and anodes to carry out the chemical reaction in which sodium hypochlorite is obtained through the passage of electric current in a saturated solution of sodium chloride. This electrochemical method guarantees obtaining chlorine in situ, so that the cooperative is supplied at the well and can generate the necessary amount for disinfection and dosage according to the required water flow, as shown in Figure 3.

The method for the measurement of residual free chlorine, which is the amount of chlorine that remains unreacted in the water and that guarantees the purification of the water, is established by the Bolivian standard, which indicates that the DPD spectrophotometric, iodometric, or DPD colorimetric method must be used for the analysis. In our case, we use the DPD (N,N-diethyl-p-phenylenediamine) colorimetric method, which consists of taking a sample of chlorinated water at a tap belonging to the distribution network, adding the DPD, and comparing the color change against a counter-sample of deionized water that serves as the blank. The value is obtained with the digital colorimeter, which measures the passage of light through the sample and gives as a result the concentration of residual free chlorine present in the water sample, in units of concentration known as parts per million (ppm). The presence of residual free chlorine in a sample guarantees the absence of micro-organisms in the water, as shown in Figure 4.
Calculation of the Well Flow Rate to Determine the Volume of Chlorine
For the determination of the chlorine needed for dosing in the distribution network, it is necessary to know the flow rate of the water supply well. For this, a portable ultrasonic flowmeter LANRY DF6100-EH was fitted to the outlet of the well to determine the flow rate it supplies and to identify the hours of greatest water demand of the population served by the test well. This information about the well is needed because it allows the daily amount of disinfectant required to be calculated and defined, which helps us to compare the 3 chlorination methods used and determine which of them is most efficient.

Determination of the Amount of Sodium Hypochlorite and Calcium Hypochlorite for Water Disinfection
Here, we determine the amount of chlorine needed in disinfection as a treatment for water purification. To determine the amount of chlorine to be produced, it is necessary to know the flow rate of the well and its working hours; with this, the volume of water to be disinfected is determined. It is also important to know the concentration of hypochlorite given by the manufacturer and the dose in mg/L theoretically desired at the wellhead. For this purpose, the following formula was used:

V_chlorine = (V_water x D_water) / C_chlorine

where:
V_chlorine is the amount of chlorine needed, in liters for sodium hypochlorite and in grams for calcium hypochlorite; it must be divided by 10 to convert units.
V_water is the volume of water to be disinfected, in liters; for this, the flow rate of the well and the hours of service must be known.
D_water is the dose or concentration of the chlorine solution desired at the wellhead, in mg/L.
C_chlorine is the concentration of chlorine indicated by the manufacturer, in mg/L.

Determination of the Amount of Chlorine Gas for Water Disinfection
The amount of chlorine gas required in disinfection as a treatment for water purification is determined using Equation (5). It is necessary to know the flow rate of the well, in addition to the dose of chlorine to be injected in mg/L; this value must be in the range of 0.2 to 1.5 mg/L, as required by the Bolivian standard. For this purpose, the following formula was used:

D = C x Q    (5)

where:
D is the chlorine required, to be set on the volumetric indicator of the chlorometer (g of chlorine/hour);
C is the dose of chlorine to be injected, i.e., the desired chlorine concentration in the water (in mg/L);
Q is the flow rate of the water to be treated (in m3/h).
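As a worked check of the two dosing formulas above, the sketch below computes a daily sodium hypochlorite volume and a chlorometer setting for this well. Only the 13.34 L/s flow rate comes from the study; the 12 h of daily service, the 1.0 mg/L target dose, and the 5% (50,000 mg/L) hypochlorite stock strength are illustrative assumptions.

```python
def hypochlorite_volume_l(flow_l_s, service_h, dose_mg_l, stock_mg_l):
    """V_chlorine = V_water * D_water / C_chlorine (liters of NaOCl)."""
    v_water_l = flow_l_s * 3600.0 * service_h   # liters treated per day
    return v_water_l * dose_mg_l / stock_mg_l

def chlorometer_setting_g_h(dose_mg_l, flow_m3_h):
    """Equation (5): D = C * Q; mg/L * m3/h = g/h of chlorine gas."""
    return dose_mg_l * flow_m3_h

flow_l_s = 13.34                                          # measured well flow
print(hypochlorite_volume_l(flow_l_s, 12, 1.0, 50_000))   # ~11.5 L/day
print(chlorometer_setting_g_h(1.0, flow_l_s * 3.6))       # ~48 g/h
```

Note that the first result falls within the 10-20 L daily quantities reported for liquid chlorine in Table 1, which is a useful sanity check on the assumed parameters.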
The evaluation of the water quality in the distribution network, and of the efficiency and reach of the disinfection method compared to traditional methods using known disinfectants, depends on the concentration read on the portable device, in addition to the pH and TDS, at the sampling points strategically located in the supply network of the study well. The monitoring at the sampling points of the distribution network was carried out from 18 October to 21 October 2022, over a period of four days, for each of the 10 points located throughout the service area of the study well. The recorded values correspond to the main characteristics that describe the behavior of the water, allowing conclusions about its potability. These records facilitated the elaboration of graphs describing the behavior of the residual free chlorine concentration in the water distribution network for the 3 methods evaluated.

Once the data on the initial characteristics of the water had been collected, and since the service operators stated that they are satisfied working with sodium hypochlorite (being in a liquid state, it is easier to handle), the prototype that uses the electrochemical method to obtain it was developed. The hypochlorite obtained is prepared daily to prevent the concentration from decreasing and to ensure that it neither generates excess waste nor clogs the chlorinator.

Results of the Electrochemical Method
The well flow rate determined in the study is 13.34 L/s, and the hours of greatest consumption by the population are in the morning from 5:00 to 6:30 a.m. and at night from 6:00 p.m. to 9:00 p.m.

For the measurement of the amount of chlorine in each of the methods, the data shown in the tables below were obtained according to the calculations made for the type of chlorine used; for example, for liquid chlorine, 15 L, 10 L and 20 L, as shown in Table 1 and Figure 5. It can be noticed that as the distance increases, the chlorine residue decreases. In this way, it is determined that the M2 sample is the best option according to the NB 512 standard.
Figure 6 and Table 2 show the amount of solid chlorine (calcium hypochlorite) that was added to the water distribution network, together with the resulting residual chlorine. It may be noted that the farther away the sample is, the lower the amount of residual chlorine; this depends on the amount of solid chlorine that was added to the drinking water distribution network. The M2 additive sample was determined to meet the requirements of NB 512.

Figure 7 and Table 3 show the amount of chlorine gas that was added to the distribution network, together with the levels of residual chlorine obtained. It can be noticed that as the distance increases, the level of residual chlorine decreases, which depends on the amount of chlorine gas that was added to the drinking water distribution network. The results indicate that the M2 additive sample meets the requirements set by the NB 512 standard.

From the initial parameters in Table 4, the results presented in Table 5 were obtained. Figure 8 shows that the residual chlorine value decreases as the distance increases.

Regarding Table 5, taking samples from taps of members close to the water distribution source, there is initially residual chlorine with a maximum of 0.7 ppm; although the Bolivian standard allows higher values, there may be user rejection due to the characteristic smell of chlorine, according to the EPSA that provides the service. It can also be seen that, moving away from the distribution source, the value of residual chlorine decreases, which indicates that it has been consumed along the distribution network, perhaps reacting with compounds or minerals found in the water or present in the network. To ensure that disinfection is effective, microbiological tests should be carried out to guarantee the absence of living organisms in the drinking water distributed throughout the area.
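The observed fall in residual chlorine with distance from the well is commonly modeled as first-order decay along the pipe. The sketch below fits such a model to two hypothetical readings; the sample values and the resulting decay constant are illustrative, not fitted to the study's tables.

```python
import math

def decay_constant(c0, c1, distance_km):
    """First-order chlorine decay: c(x) = c0 * exp(-k * x); solve for k."""
    return math.log(c0 / c1) / distance_km

def residual_at(c0, k, x_km):
    """Residual free chlorine (ppm) a distance x_km from the well."""
    return c0 * math.exp(-k * x_km)

# Hypothetical readings: 0.70 ppm near the well, 0.30 ppm 2 km away.
k = decay_constant(0.70, 0.30, 2.0)   # ~0.42 per km
print(residual_at(0.70, k, 3.0))      # projected ~0.20 ppm at 3 km
```

A fit of this kind could help an operator estimate whether the 0.2 ppm minimum required by NB 512 is still met at the farthest connection.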
Figure 9 shows that the pH level remains constant throughout the distribution network, so we can say that the chlorine content does not modify the physicochemical characteristics of the water.

Figure 10 shows that the level of total dissolved solids remains the same throughout the distribution network, which means that chlorine is not reacting to form chlorination by-products that could be harmful to health.

Discussion
In Santa Cruz, the water supply comes from underground wells with depths greater than 200 m, which ensures the stability of the sources. In addition, the absence of living organisms is due to the lack of oxygen available at these depths for them to survive.

Water disinfection is the only treatment carried out by many of the city's water service providers (EPSAs). This ensures the provision of safe water for the population's consumption. While there are several methods of chlorination, each with its advantages and disadvantages [18,19], the Bolivian standard does not require a specific method, but rather the continuity of chlorination during the water supply service. Whichever mechanism is used undergoes the same process of disinfection and chemical reactions, forming hypochlorous acid and hypochlorite ions within the pH range set by NB 512. These reactions result in the presence of residual free chlorine, as shown in Table 5, based on measurements obtained by the electrochemical method with DPD.
The study area around well 5, where the samples were taken, has pH levels within the values established by the Bolivian standard. This indicates that the water can be consumed without fear of direct contamination of the aquifers. Any sudden or spontaneous variation in pH should serve as an alert for the service operator to make immediate decisions regarding the water supply. A considerable increase in pH can raise alkalinity levels, possibly due to an increase in water hardness caused by the formation of minerals or salts in the treatment process. Conversely, a significant reduction in pH could indicate acid contamination. Table 6 shows values within the ranges allowed by the Bolivian standard, ensuring an adequate water supply in the area, free of any contamination that alters the pH of the water [20,21].

The level of total dissolved solids (TDS) in water serves as an important indicator for water quality monitoring. With the necessary equipment available, the results in Table 7 were readily obtained. This table provides reliable information on suspended solids in EPSA-supplied water. While it does not specify the exact solids present, given the quality of the water monitored by the EPSA, it can be inferred that they include calcium and magnesium ions, along with the carbonates and bicarbonates characteristic of water hardness. These levels, however, do not pose a danger or risk to human health.
Conclusions
The residual free chlorine concentration obtained from the disinfection carried out by direct pumping of sodium hypochlorite, at a given percentage concentration, into the water distribution network of the EPSA providing the drinking water service shows acceptable levels according to Bolivian standards. The dosage is conditioned by the water flow supplied by the well to the population it serves, which varies with the time of day and user consumption. These variables, along with distance, affect residual chlorine readings as the sampling point moves away from the disinfection point: the concentration of residual chlorine decreases along the distribution network route. To correct and/or improve this disinfection process, automation of chlorination can be implemented, taking into account not only the well flow and supply hours but also the chlorine level required to ensure that even the last user at the end of the distribution network receives water with a residual chlorine of 0.2 ppm, as demanded by Bolivian regulations for potable water [22].

Another variable of interest in this study is pH, a crucial indicator for the supply of drinking water. This value is measured by two methods: one through the colorimetric method, where the color change of the paper is compared and associated with the characteristic colors of each pH range, and the other digitally, through equipment with electrodes that provide a direct reading. pH should not exhibit variations, as it is one of the most stable quality indicators over time. It only varies in the presence of foreign substances in the water, which would indicate contamination. The pH is also one of the most sensitive variables; if a variation occurs, it will be noticeable and immediate. This aids the service operator in making prompt decisions while supplying water to the population.

Finally, Total Dissolved Solids (TDS), as the name suggests, indicate concentrations in parts per million in chemical units. They are measured digitally through equipment that instantly reveals the quantity of solids in the water. This value may increase with more suspended minerals or with the generation of products or by-products of disinfection in the distribution network. Generally, TDS is due to the presence of carbonates, bicarbonates, calcium, and magnesium in the distributed water. To reduce these values, filtration can be performed, but this procedure is considered only when values exceed acceptable norms or when the organoleptic characteristics of the water change, as many minerals are essential for the body's needs.

The proposed technology meets the minimum requirement for residual chlorine levels mandated by current Bolivian regulations, and no chemicals are added to the water, as it works with the salts present.

Figure 2. The map corresponds to the location of the sampling points of the wells, where (A) is the location of the country, (B) the location of the province, and (C) the location of the water sampling.

Figure 3. Water samples taken at different sampling points to measure the residual chlorine level. Measurement of residual chlorine with portable equipment at (a) the blank sample, (b) a close sampling point, (c) the midpoint, and (d) a far point of the distribution network.
Figure 4. Experimental diagram of the on-site chlorination method, from production to measurement with DPD at taps of drinking water distribution network members.

Figure 5. Measurement of the liquid chlorine, specifically sodium hypochlorite, added to the supply network, together with the results obtained for the chlorine residue. As the distance increases, the chlorine residue decreases; the M2 sample is the best option according to the NB 512 standard.

Figure 7. Chlorine gas.

Table 1. Residual chlorine results at network sampling points based on the liquid chlorine used in disinfection.

Table 2. Residual chlorine results at network sampling points based on the solid chlorine used in disinfection.

Table 3. Residual chlorine results at network sampling points based on the chlorine gas used in disinfection.

Table 4. Appropriate values for water quality.

Table 6. pH levels in water.
2024-06-22T15:24:30.678Z
2024-06-20T00:00:00.000
{ "year": 2024, "sha1": "62a6d8dbb65cb45d47e9916de05426f8fa60efb1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4441/16/12/1738/pdf?version=1718865908", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0282df8515d673ab235b18d13c08e348dce8da7b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
239473151
pes2o/s2orc
v3-fos-license
Design of Hierarchical NiCo2O4 Nanocages with Excellent Electrocatalytic Dynamic for Enhanced Methanol Oxidation Although sheet-like materials have good electrochemical properties, they still suffer from agglomeration problems during the electrocatalytic process. Integrating two-dimensional building blocks into a hollow cage-like structure is considered an effective way to prevent agglomeration. In this work, hierarchical NiCo2O4 nanocages were successfully synthesized via a coordinated etching and precipitation method combined with a post-annealing process. The nanocages are constructed through the interaction of two-dimensional NiCo2O4 nanosheets, forming a three-dimensional hollow hierarchical architecture. The three-dimensional supporting cavity effectively prevents the aggregation of the NiCo2O4 nanosheets, and the hollow porous feature provides numerous channels for mass transport and electron transfer. As an electrocatalytic electrode for methanol, the NiCo2O4 nanocages-modified glassy carbon electrode exhibits a lower overpotential of 0.29 V than those of the NiO nanocages (0.38 V) and Co3O4 nanocages (0.34 V) modified glassy carbon electrodes. The low overpotential is attributed to the prominent electrocatalytic dynamics arising from the three-dimensional hollow porous architecture and the two-dimensional hierarchical feature of the NiCo2O4 building blocks. Furthermore, the hollow porous structure provides sufficient interspace for the accommodation of structural strain and volume change, leading to improved cycling stability. The NiCo2O4 nanocages-modified glassy carbon electrode still maintains 80% of its original value after 1000 consecutive cycles. The results demonstrate that the NiCo2O4 nanocages could have potential applications in the field of direct methanol fuel cells due to the synergy between the two-dimensional hierarchical feature and the three-dimensional hollow structure. Introduction The ever-worsening energy and global warming issues have triggered significant research efforts in the design and development of advanced energy devices. Direct methanol fuel cells (DMFCs) have exhibited great commercialization potential owing to their high energy density, low cost, easy storage and low pollutant emissions [1,2]. Generally, the performance of DMFCs is mainly related to the activity of the methanol electrocatalyst [3]. Traditionally, platinum-group precious metals (Pt, Ru, Pd, etc.) have been employed as electrocatalysts for methanol. Although high electrocatalytic activity was achieved, the precious metals still suffer from high cost and low working stability [4][5][6]. In this regard, the design of Pt-free catalysts is considered the best alternative to solve these problems. Transition metal oxides (TMOs) are recognized as ideal substitutes for noble metals due to their highly active redox sites, low cost and high physicochemical stability. Over the last decade, significant efforts on TMOs have been made to obtain high-performance Pt-free electrocatalysts for methanol [7]. Generally, nanomaterials in the conventional form of aggregated particles have no significant advantages in electrocatalysis. Inspired by kinetics, a variety of TMOs with different microstructures have been constructed to improve electrocatalytic kinetics, and high electrocatalytic activity has been obtained. Among them, two-dimensional (2D) nanosheets have been demonstrated to be an ideal structure in electrocatalysis due to the unique physicochemical properties arising from their high structural and morphological anisotropies [8].
However, agglomeration of 2D nanosheets tends to occur in electrocatalytic reactions because of their large lateral specific surface areas, leading to a decrease in active sites and diffusion channels. Integrating large numbers of 2D nanosheets into three-dimensional (3D) hierarchical nanocages provides an efficient way to obtain highly active structures. The hierarchical nanocages effectively prevent the aggregation of the 2D building blocks and afford large specific surface areas, which provide sufficient active sites for the electrooxidation of methanol [9]. Meanwhile, the pores formed by the interaction of the nanosheets not only provide diffusion channels for methanol and intermediate products, but also relieve the volume change and structural strain during electrocatalysis, resulting in excellent stability [10]. Further, the 2D feature of the building blocks and the porous thin shell of the hierarchical nanocages accelerate both the collection and transfer efficiency of catalytic electrons during electrocatalysis, leading to high electrocatalytic activity. Therefore, highly active and stable methanol electrocatalysts can be acquired through the design of hierarchical porous hollow nanocages. NiCo2O4 possesses bimetallic active sites (Co2+/Co3+ and Ni2+/Ni3+) and excellent conductivity, exhibiting potential applications in the field of methanol oxidation [11]. In this report, NiCo2O4 nanocages (NCs) were prepared by the coordinated etching and precipitation (CEP) method combined with a post-annealing process. As an electrode for methanol electrooxidation, the NiCo2O4 NCs-modified glassy carbon electrode (GCE) exhibited a low overpotential, high current density and excellent stability. Preparation of NiCo2O4 NCs Cu2O templates were first prepared according to our previous work [12]. Simply, 10 mL of NaOH solution (2 M) was added into 100 mL of CuCl2·2H2O (0.01 M) and stirred at 55 °C for 30 min. Then, 10 mL of AA (0.6 M) was added. After 3 h of reaction, the Cu2O cubes were collected and dried in vacuum. A total of 10 mg of cubic Cu2O, 1 mg of NiCl2·6H2O and 2 mg of CoCl2·6H2O were dispersed into 10 mL of ethanol/water (1:1), and then 0.33 g of PVP was added and stirred for 30 min. Afterwards, 4 mL of Na2S2O3·5H2O solution (1 M) was slowly dropped in at room temperature. After 3 h, the hydroxide precursors were collected and dried in vacuum. Finally, the precursors were calcined in a tube furnace at 400 °C in air for 2 h with a heating rate of 1 °C min−1. Co3O4 NCs and NiO NCs were prepared as contrast samples using only CoCl2·6H2O or NiCl2·6H2O, respectively, in the CEP process. Electrochemical Measurements Cyclic voltammetry (CV), chronoamperometry and electrochemical impedance spectroscopy (EIS) were performed in 1 M KOH solution on a CH1760E A191018 electrochemical workstation at room temperature. A three-electrode system was used, with Ag/AgCl (saturated with KCl) and a platinum disk (Φ = 2 mm) as the reference and counter electrodes, respectively. The Co3O4 NCs-, NiO NCs- and NiCo2O4 NCs-modified glassy carbon electrodes (GCE, Φ = 3 mm) were applied as working electrodes. Typically, the GCE was carefully polished with 3 µm, 0.5 µm and 0.05 µm alumina powders, successively. Then, 5 µL of the prepared sample suspension (1 mg mL−1 in 0.1% Nafion solution) was measured with a pipette, dropped onto the surface of the GCE, and dried naturally.
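From the drop-casting recipe above, the catalyst mass loading on the working electrode can be estimated, as sketched below. The flat-disk geometric-area assumption (no surface roughness) is ours, not stated in the paper.

```python
import math

def catalyst_loading_ug_per_cm2(drop_ul, conc_mg_per_ml, electrode_diam_mm):
    """Mass loading from drop-cast volume, ink concentration, disk diameter."""
    mass_ug = drop_ul * conc_mg_per_ml      # 1 uL of 1 mg/mL ink = 1 ug
    radius_cm = electrode_diam_mm / 20.0    # mm diameter -> cm radius
    area_cm2 = math.pi * radius_cm**2       # geometric disk area
    return mass_ug / area_cm2

# 5 uL of 1 mg/mL ink on the 3 mm glassy carbon electrode:
print(catalyst_loading_ug_per_cm2(5.0, 1.0, 3.0))   # ~70.7 ug/cm2
```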
Materials Characterization
The microstructures and morphologies of the samples were observed by field emission electron microscope (FESEM, SU8020) and high-resolution transmission electron microscope (HRTEM, FEI F20). The crystal structure and elemental composition were recorded by X-ray powder diffractometer (XRD, Rigaku D/Max-2400 using Cu-Kα radiation, λ = 1.54 Å). The chemical state was determined by X-ray photoelectron spectroscopy (XPS, ESCALAB 250Xi) using a 500 µm X-ray spot (energy resolution 0.4 eV). The Brunauer-Emmett-Teller method (BET, Belsort-max) was applied to analyze the specific surface area and pore structure.

Characterization
As shown in Figure 1a, with Ni2+ and Co2+ adsorbed on their surfaces, the Cu2O templates were gradually etched by S2O3^2- into soluble complexes, releasing OH- (reaction (1)). The partial hydrolyzation of S2O3^2- also facilitated the supply of OH- (reaction (2)). Reactions (1) and (2) concurrently pushed reaction (3) forward, facilitating the formation of the Ni-Co hydroxide precursor. The diffusion of S2O3^2- from the surface into the interior of the shell directly affected the rate of Cu2O etching, while the transport of OH- from internal to external sites promoted the growth of the Ni-Co hydroxide precursor [15]. The two reaction processes coordinated to achieve a dynamic balance that promoted the formation of the hollow structure. In order to confirm the formation mechanism, the precipitate prepared at 0, 10, 20, 30, and 180 min was collected and observed by TEM (Figure 1b). With the introduction of S2O3^2-, Cu2O was gradually etched into the polyhedral structure due to the higher diffusion intensity of ions at the corners [15,16]. After the reaction had lasted for 3 h, Cu2O completely disappeared and hierarchical porous nanocages were obtained (Figure S1). Finally, NiCo2O4 NCs were obtained through the annealing of the Ni-Co hydroxide precursor (reaction (4)). As observed in Figure 1c, the color of the reaction system gradually became shallow and light green precipitates were generated at the same time. The fading was attributed to the etching of Cu2O, while the green precipitates were correlated to the formation of the Ni-Co hydroxide precursor.

As shown in Figure 2a, the strong peaks from the templates at 30°, 37°, 42°, 62°, 74° and 78° matched well with PDF#77-0199 of cubic Cu2O. As observed in Figure 2b, no significant diffraction peaks were observed in the precursor, revealing the poor crystallinity of the Ni-Co hydroxide precursor. After calcination, the crystallinity of the material was obviously improved and the diffraction peaks at 36°, 43°, 64°, 75° and 77° were well indexed to the (111), (200), (220), (311) and (400) crystal planes of face-centered cubic NiCo2O4. The XRD results clearly demonstrated the formation of a high-purity NiCo2O4 product. Furthermore, XPS measurements were performed to obtain detailed information on the elements and oxidation state of the prepared NiCo2O4. As shown in Figure 2c, the survey spectrum displayed a series of strong peaks related to Ni, Co, O and C species, indicating the main chemical elements of the NiCo2O4. In Figure 2d, two states of Co2+ and Co3+ were clearly observed according to Gaussian fitting. Specifically, the fitting peaks at 779.3 eV and 794.3 eV were ascribed to Co3+. Another two fitting peaks at 781.0 eV and 795.8 eV were ascribed to Co2+ [17]. Analogously, the Ni 2p spectra included two kinds of nickel species, Ni2+ and Ni3+, in Figure 2e. The fitting peaks at 854.0 eV and 871.7 eV were ascribed to Ni2+, while the fitting peaks at 855.9 eV and 873.9 eV were related to Ni3+ [18].
As shown in Figure 2f, the fine spectrum of O 1s displayed three peaks originating from M-O-M, C-O=C and O=C. The fitting peak of M-O-M at 528.5 eV was the typical metal-oxygen bond [19]. C-O=C at a binding energy of 530.5 eV corresponded to a high number of defect sites containing low oxygen coordination [20]. O=C at a binding energy of 531.7 eV could be ascribed to the multiplicity of physisorbed water at and within the surface [17,21]. The results of XPS demonstrated a mixed valence containing Co2+, Co3+, Ni2+ and Ni3+, which was consistent with previous reports [22]. The complex electronic states of Ni2+/Ni3+ and Co2+/Co3+ could afford enough active sites for methanol oxidation, which may be one of the important factors contributing to the high electrocatalytic performance.

The surface morphologies of the Cu2O templates, NiO NCs and Co3O4 NCs were examined and are displayed in Figure 3. As shown in Figure 3a, the Cu2O templates were uniformly dispersed, which was conducive to the adsorption of Ni2+ and Co2+. The surface of Cu2O was smooth and the edge size was about 500 nm (Figure 3b). As observed in Figure 3c, the NiO cube was composed of a large number of stacked nanoparticles and had nearly the same size as Cu2O. As shown in Figure 3d, the NiO cube displayed a hollow structure with a wall thickness of about 80 nm. Similarly, the Co3O4 cube also displayed a hollow cubic feature (Figure 3e). However, the Co3O4 NCs mainly consisted of a large number of stacked nanosheets, forming a network structure (Figure 3f).
As observed in Figure 4a, the uniformly distributed Ni-Co hydroxide precursor accurately replicated the cubic structure of the Cu2O templates and had an edge size of 500 nm. As shown in Figure 4b, the surface of the Ni-Co hydroxide precursor was composed of a large number of interacting nanosheets that formed a network structure. In addition, the precursor displayed a cage-like structure and the shell thickness was about 200 nm. After calcination, the nanosheets on the surface became thicker and more compact, and the thickness of the shell was reduced to about 100 nm (Figure 4c). Notably, the crinkly nanosheet structure was clearly observed in Figure 4d, and the lattice fringes could be indexed to crystal planes of spinel NiCo2O4, including the (400) plane. The results were consistent with the XRD analysis and a previous report [22]. On the basis of the above discussion, NiCo2O4 NCs were constructed by the combination of the CEP method and post-calcination. The highly porous structure provided sufficient active sites and mass transport channels, which are beneficial for electrocatalytic kinetics, leading to high electrocatalytic activity [23,24].

Electrocatalytic Activity of NiCo2O4 NCs/GCE towards Methanol
The electrocatalytic activity of the NiCo2O4 NCs/GCE and the contrast samples was evaluated in detail by CV and EIS. Figure 5a shows the CV curves of the three electrodes in 1 M KOH in the absence of methanol. Distinct pairs of redox peaks were observed in all three CV curves.
The redox peaks of the NiO NCs/GCE corresponded to the reversible transition of Ni ions, such as Ni2+/Ni3+ [25]. Similarly, the redox peaks of the Co3O4 NCs/GCE were attributed to the transition between Co2+/Co3+ or Co3+/Co4+ [26]. The CV curve of the NiCo2O4 NCs/GCE exhibited a much larger enclosed area than those of the Co3O4 NCs/GCE and NiO NCs/GCE. This may be due to the fact that NiCo2O4 is generally regarded as a binary TMO, which has more complicated redox couples [27]. As displayed in Figure 5b, the electrocatalytic current towards methanol on the NiCo2O4 NCs/GCE, Co3O4 NCs/GCE and NiO NCs/GCE can be clearly observed compared to Figure 5a, demonstrating that all three electrodes showed catalytic activity towards methanol. Notably, the NiCo2O4 NCs/GCE presented a larger catalytic current than the other two electrodes. With the potential rising to 0.45 V, the current of the NiCo2O4 NCs/GCE was 3.16 and 9.11 times that of the Co3O4 NCs/GCE and NiO NCs/GCE, respectively. In addition, the onset potential towards methanol oxidation on the NiCo2O4 NCs/GCE was about 0.29 V (Figure S2), which was lower than those of the Co3O4 NCs/GCE (0.34 V, Figure S3) and NiO NCs/GCE (0.38 V, Figure S4), revealing higher electrocatalytic activity. As shown in Figure 5c, EIS was carried out in 1 M KOH containing 0.5 M methanol, and the equivalent circuit is displayed in the inset. In the circuit, Rs, C, Rct and Zw are the internal resistance, redox capacitance, charge transfer resistance and Warburg resistance, respectively [28,29]. Notably, the Rct value of the NiCo2O4 NCs/GCE (4.4 kΩ) was obviously lower than those of the Co3O4 NCs/GCE (10.8 kΩ) and NiO NCs/GCE (18.3 kΩ), indicating a fast electron transfer rate within the electrode and at the electrode/electrolyte interface. The lower charge transfer resistance was related to the anisotropic feature of the building blocks and the relatively high conductivity of NiCo2O4. At low frequencies, the NiCo2O4 NCs/GCE displayed a steeper Warburg (Zw) response than the Co3O4 NCs/GCE and NiO NCs/GCE, revealing lower ion diffusion resistance. The lower ion diffusion resistance might be attributed to the ample diffusion channels afforded by the interacting NiCo2O4 nanosheets.
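The Rs-C-Rct-Zw equivalent circuit named above is a Randles-type circuit, and its impedance can be sketched numerically as below. The Rct values are the fitted ones reported in the paper; the double-layer capacitance and Warburg coefficient are illustrative placeholders, not fitted values from this study.

```python
import numpy as np

def randles_impedance(freq_hz, r_s, r_ct, c_dl, sigma):
    """Impedance of a Randles-type circuit: R_s in series with
    C_dl in parallel with (R_ct + semi-infinite Warburg element)."""
    w = 2 * np.pi * freq_hz
    z_w = sigma * w**-0.5 * (1 - 1j)                 # Warburg impedance
    z_branch = r_ct + z_w                            # faradaic branch
    z_parallel = 1.0 / (1j * w * c_dl + 1.0 / z_branch)
    return r_s + z_parallel

# R_ct = 4.4 kOhm for NiCo2O4 NCs/GCE (reported); C_dl and sigma assumed.
freqs = np.logspace(5, -2, 50)
z = randles_impedance(freqs, r_s=10.0, r_ct=4.4e3, c_dl=1e-5, sigma=50.0)
print(z[0], z[-1])   # high-frequency limit ~R_s; low frequency ~R_s + R_ct
```

Plotting -Im(z) against Re(z) reproduces the familiar Nyquist semicircle (set by Rct) followed by the low-frequency Warburg tail discussed above.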
In order to support the kinetics analysis of EIS, the surface area and porosity of the NiCo2O4 NCs were tested by BET. In Figure 5d, the curve presents an H3-type hysteresis loop in the range of 0.45-1.0, indicating a typical mesoporous characteristic [30,31]. The mean pore size of the NiCo2O4 NCs was around 9 nm, which is ideal for the diffusion of methanol [32]. Moreover, the specific surface area and pore volume were 38.3 m2 g−1 and 0.2 cm3 g−1, respectively, both higher than those of the precursor (30.0 m2 g−1, 0.1 cm3 g−1, Figure S5). The large specific surface area provided abundant active sites for methanol catalysis, and the appropriate pore volume provided ordered diffusion channels for rapid transport [33]. In short, the NiCo2O4 NCs/GCE exhibited rich redox active sites and transmission channels, leading to excellent electrocatalytic activity.

Chronoamperometry is an effective tool to investigate the electrochemical stability of an electrocatalyst. As shown in Figure 6a, the electrochemical stability of the NiCo2O4 NCs/GCE, Co3O4 NCs/GCE and NiO NCs/GCE for methanol oxidation at 0.45 V was investigated. Notably, the NiCo2O4 NCs/GCE displayed the largest electrocatalytic current towards 0.5 M methanol. The current of the NiCo2O4 NCs/GCE decreased at the initial stage due to poisoning by intermediates, and then kept a relatively steady value until 1100 s [34,35]. The final current still maintained 85% of its original value, which was three times that of the Co3O4 NCs/GCE and thirteen times that of the NiO NCs/GCE. CV tests were carried out for 1000 cycles to further investigate the stability of the NiCo2O4 NCs/GCE. The maximum current density presented an 8% decrease at the 500th cycle, and maintained 80% of the initial value after 1000 cycles. The hierarchical porous structure provided sufficient interspaces for the accommodation of volume change and structural strain during electrocatalysis, resulting in excellent long-term stability towards methanol.

Conclusions
In summary, NiCo2O4 NCs were successfully synthesized through the CEP method combined with a post-annealing process. The designed NiCo2O4 NCs were constructed through the interaction between NiCo2O4 nanosheets and formed a hierarchical cage-like structure. As a catalytic electrode for methanol oxidation, the NiCo2O4 NCs/GCE exhibited high electrocatalytic activity in terms of a low onset potential (0.29 V) and excellent long-term stability (80% retention after 1000 cycles). It is demonstrated that the NiCo2O4 NCs/GCE is an ideal electrode for DMFCs and that the design of hollow hierarchical structures is an effective method to obtain highly active 2D electrocatalysts.
Supplementary Materials: The Supporting Information is available free of charge on the MDPI Publications website at www.mdpi.com/xxx/s1. Figure S1: XPS survey of NiCo2O4; Figure S2: CV curves of NiCo2O4 NCs/GCE in 1 M KOH without methanol and with 0.5 M methanol at 50 mV s−1; Figure S3: CV curves of Co3O4 NCs/GCE in 1 M KOH without methanol and with 0.5 M methanol at 50 mV s−1; Figure S4: CV curves of NiO NCs/GCE in 1 M KOH without methanol and with 0.5 M methanol at 50 mV s−1; Figure S5: N2 adsorption-desorption isotherms of the Ni-Co hydroxide precursors.
2021-10-15T15:18:46.077Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "148ad47398b41a1ac17e82abd584382ed7f12526", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/11/10/2667/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e4acad849943df76e4c70964bf95c2a3adb62ad0", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
56088360
pes2o/s2orc
v3-fos-license
ENTROPY GENERATION DUE TO NATURAL CONVECTION WITH NON-UNIFORM HEATING OF POROUS QUADRANTAL ENCLOSURE - A NUMERICAL STUDY The optimization of industrial processes for higher energy efficiency may be effectively carried out based on the thermodynamic approach of entropy generation minimization (EGM). This approach provides key insights into how the available energy (exergy) is being destroyed during the process and the ways to minimize its destruction. In this study, the EGM approach is implemented for the analysis of optimal thermal mixing and temperature uniformity due to natural convection in a quadrantal cavity filled with porous medium, for material processing applications or for the cooling of electrical equipment. The effect of the permeability of the porous medium and the role of non-uniform heating in enhancing thermal mixing, temperature uniformity and the minimization of entropy generation are analyzed. The numerical solutions are obtained using the finite element method. Streamline, isotherm and entropy generation results are depicted for Ra = 10^6, 1.7x10^5 and 10^4 and for Da = 10^-3, 10^-4 and 10^-5 at Pr = 0.71. It is found that at lower Darcy number (Da), the thermal mixing is low and the heat transfer irreversibility dominates the total entropy generation. In contrast, thermal mixing is improved due to enhanced convection at higher Da. It is observed that fluid friction irreversibility dominates over heat transfer irreversibility only for Da = 10^-3 at Ra = 10^6. The local entropy generation is maximum at the bottom wall, while at the center and top of the enclosure it becomes minimum. Based on the EGM analysis, it is established that the total entropy production is not significant with larger thermal mixing at high Darcy number, which can therefore be recommended for material processing or for the cooling of electrical equipment. INTRODUCTION Natural convection has been a topic of great interest for five decades, since it plays a vital role in many engineering applications. Some of the recent investigations on natural convection in fluid-saturated porous enclosures particularly focus on flow and heat transfer, but not effectively on energy distribution. There are many published studies related to natural convection in rectangular porous enclosures. Moya et al. (1987), Bejan (1979), Prasad and Kulacki (1984), Baytas and Pop (1999) and Baytas (1999) have published many important results for these types of enclosures. Waheed (2009) studied the problem of laminar natural convection in square enclosures filled with a fluid-saturated porous medium numerically using the heat-function formulation approach. The flow governing equations, including the Brinkman-extended Darcy equations of motion, energy, and heat-function equations, were solved using the finite-difference method. The results showed that all the governing parameters have a strong effect on the convection vigour, isotherms, and heat-function fields and profiles. They also found that an increase in the value of the Darcy number above unity had no further influence on the heat-function profiles. They also gave a clear picture of the heat trajectory in the enclosure, which became possible through the use of heat lines. The heat-line approach thus made possible the interpretation of the flow and temperature fields, and provided a better means to explain convective heat transfer in a cavity filled with fluid-saturated porous media.
Moukalled and Darwish (2010) presented numerical solutions for laminar natural convection heat transfer in a fluid-saturated porous enclosure between two isothermal concentric cylinders of rhombic cross sections. They found that the flow strength and convection heat transfer increase with an increase in Ra, Da, Eg (enclosure gap ratio), and/or ε (porosity). At low Eg values, the flow in the enclosure was weak and convection heat transfer was low, even though the total heat transfer was found to be higher at higher Eg values due to an increase in conduction heat transfer. An increase in Pr was associated with a decrease in the flow strength and an increase in total heat transfer. They also stated that convection started affecting the total heat transfer at Ra values higher than the critical one. The critical Ra decreased with increasing Da and/or ε, and increased with decreasing Eg in that study. Another interesting problem was reported by Wong and Xie (2011), who studied the classical/spectral conjugate gradient methods with adjoint equations applicable to the natural convection problem in a porous medium for the determination of an unknown heat source. The direct, sensitivity and adjoint equations associated with the Darcy and Forchheimer terms were given for a Boussinesq fluid over a square porous medium in two dimensions. The inverse solutions were obtained by a second-order scheme in space and in time, and a mixed finite element method was further presented for a square enclosure under known temperature boundary conditions. Questions regarding the numerical accuracy of the proposed schemes for porous flow models for recovering the strength of the unknown heat source were also addressed in that article. Also, Bhuvaneswari et al. (2011) discussed convective flow and heat transfer in a cavity in the presence of a uniform magnetic field for different combinations of phase deviation, amplitude ratio, and Hartmann and Rayleigh numbers. They observed that the heat transfer rate increased with amplitude ratio. They also stated that the heat transfer rate first increased and then decreased on increasing the phase deviation. They also found that the heat transfer rate decreased with an increasing Hartmann number. Pandit and Chattopadhyay (2014) employed a higher-order compact scheme to investigate transient natural convection in a deep enclosure filled with porous medium. Also, Chou et al. (2015) analyzed the effects of temperature-dependent viscosity on natural convection inside porous media between two concentric spheres. Gibanov et al. (2017) carried out a numerical analysis of natural convection combined with entropy generation for a ferrofluid under the effect of an inclined uniform magnetic field. They formulated the governing equations with corresponding boundary conditions in dimensionless stream function and vorticity, using the Brinkman-extended Darcy model for the porous layer, and solved them numerically using the finite difference method. Their published results showed that the inclusion of spherical ferric oxide nanoparticles could lead to a diminution of entropy generation in the case of similar flow and heat transfer structures.
Entropy generation and heat flow analysis studies on nanoparticle-added fluids, namely nanofluids, have been increasing rapidly in recent years in different shaped cavities and in nano- and micro-channels. A numerical analysis of laminar natural convection with entropy generation in a partially heated open triangular cavity filled with a Cu-water nanofluid was carried out by Bondareva et al. (2016). The main finding of that paper was heat transfer enhancement and fluid flow attenuation with nanoparticle volume fraction, mainly for high values of the Rayleigh number. Sheremet et al. (2017) also carried out a numerical study on entropy generation in natural convection of a nanofluid in a wavy-shaped cavity using a single-phase nanofluid model. They found that the average Bejan number is an increasing function of nanoparticle volume fraction and a decreasing function of the Rayleigh number, undulation number and wavy contraction ratio. They also found that an increase in the wavy contraction ratio leads to an attenuation of the convective flow due to an intensification of the secondary vortices located in the bottom and upper wavy troughs. It was also found that the solid volume fraction suppresses the fluid motion. At the same time, an increase in the nanoparticle volume fraction leads to an enhancement of the heat transfer rate and the average Bejan number, while the average entropy generation decreases. Work on the quadrantal cavity has been done earlier by Aydin and Yesiloz (2011) and Yesiloz and Aydin (2011). They investigated experimentally and numerically the effects of the inclination angle (φ) and the Rayleigh number Ra on fluid flow and heat transfer for inclination angles in the range 0° ≤ φ ≤ 360° and Ra from 10^5 to 10^7. They disclosed that heat transfer changes dramatically with the inclination angle, which affects the convection currents, i.e., the flow physics inside. Sen et al. (2013, 2015) performed numerical investigations of natural convection heat transfer in a quadrantal cavity having a hot bottom wall and a cold curved wall, and using a heater on adjacent walls, for Rayleigh numbers in the range 10^3 ≤ Ra ≤ 10^7, and found that both flow and temperature fields are affected by a changing Ra. They also found that heat transfer increases with increasing Rayleigh number, that the flow strength increases with an increase in the size of the heater on the vertical wall compared to the bottom wall, and that the temperature fields are also affected. In contrast, with an increase in the size of the heater on both adjacent walls, the flow strength did not change significantly. Bose et al. (2013) also performed numerical simulations to study natural convection heat transfer in a quadrantal cavity having a finned hot vertical wall and a cold bottom wall for Rayleigh numbers in the range 10^4 ≤ Ra ≤ 10^6. They found that heat transfer increases with increasing Rayleigh number and that the values of the stream function (flow strength) also increase with increasing Rayleigh number. They also found that the non-dimensional fin location changes the shape of the vortices and enhances the strength of the flow, and that with increasing dimensionless fin length, the fin causes a blockage effect for the flow.
Very recently, Dutta et al. (2018) numerically investigated a porous quadrantal enclosure with sinusoidal heating of the bottom wall, a cold vertical wall and an insulated curved wall, and found that the effect of the Darcy number is significant in dictating the Nusselt number only for higher values of the Rayleigh number, with the variation more pronounced for larger values of the Darcy number. The variation of the entropy generation rate was likewise found to be significant with the Darcy number only for higher values of the Rayleigh number. The available energy supply and energy conservation practices must be balanced against energy efficiency, while heat transfer enhancement is used in combination with effective energy distribution. Hence, the principles of fluid mechanics and heat transfer go hand in hand with the second law of thermodynamics in determining the energy efficiency and sustainable energy improvement of systems and processes. This approach of thermodynamic optimization, known as entropy generation minimization (EGM), was first reported by Bejan (1996) as a way to optimize any process or system by minimizing the irreversibilities present in the system. Varol et al. (2009) analyzed the entropy generation during combined convection and conduction inside a right-angle trapezoidal enclosure filled with a fluid-saturated porous medium, considering major controlling factors such as the thermal conductivity ratio and the dimensionless thickness of the solid wall on heat transfer and fluid flow. Results of an entropy generation analysis for natural convection inside a porous square enclosure were presented by Zahmatkesh (2008) in terms of streamlines, isothermal lines, iso-entropy-generation lines, and iso-Bejan lines for various thermal boundary conditions; it was concluded that the thermal boundary conditions greatly influence the average Nusselt number, the global entropy generation rate, and the global Bejan number in the calculated range of the Darcy-modified Rayleigh number. The objective of the present study is to analyze the entropy generation due to natural convection in a quadrantal enclosure filled with a fluid-saturated porous medium for Rayleigh-Bénard-type heating situations, motivated by its various engineering applications (cooling of mounted electronic devices) and natural applications (geothermal systems). The finite element method has been employed to solve the nonlinear equations of fluid flow, energy and entropy for a range of Darcy numbers Da = 10^-5 to 10^-3 and Pr = 0.71, for Rayleigh numbers Ra = 10^4, 1.7x10^5 and 10^6. It is also important to emphasise that the numerical simulations are chosen to show the effect of Da within the regimes of possible operating conditions, as reported in the literature by Basak et al. (2013). It is also worth mentioning in this context that the Darcy number physically represents the permeability of the porous medium chosen for the numerical study, while the Rayleigh number represents the relative ability to transport thermal energy by buoyancy compared with diffusion of heat in the enclosure or cavity chosen for study. The results are presented in terms of contours of isotherms (θ), streamlines (ψ), entropy generation maps due to heat transfer (S_θ), and entropy generation maps due to fluid friction (S_ψ). In many industrial and practical situations we frequently encounter horizontal cylinders filled with fluids.
This type of geometry and flow configuration has relevance to the cases discussed above and is also commonly observed in electronics, cooling systems, heat exchangers, etc. Furthermore, to the authors' knowledge, very little work on natural convection in quadrantal enclosures filled with porous media, with a sinusoidally heated bottom wall and neighbouring cold curved and vertical walls, has been reported earlier, which motivated this numerical study. To the best of our knowledge, this is the first natural convection study in a quadrantal cavity with non-linear heating of the hot bottom wall, filled with porous material, under the following boundary conditions.

PROBLEM DESCRIPTION

The quadrantal enclosure chosen for our study is filled with porous material and is represented in Figure 1a. The grid (Figure 1b), built by the free-triangular method, is also shown to the right. The bottom wall is the hot wall, imposed with non-uniform heating, while the vertical wall and the curved wall are maintained at a lower temperature than the hot bottom wall and are defined as the cold walls. The thermophysical properties of the fluid in the flow field are assumed to be constant, except for the density in the buoyancy term; the change in density due to temperature variation is calculated using the Boussinesq approximation. It may be noted that local thermal equilibrium (LTE) is assumed to be valid, i.e. the temperature of the fluid phase (Tf) is equal to the temperature of the solid phase (Ts) within the porous medium; similar approximations were also used by earlier researchers (refer to Nield and Bejan (2006)). The momentum transport in the porous medium is based on a generalized non-Darcy model. However, the velocity-squared (Forchheimer) term, which models the inertia effect, is neglected here, as this work deals with natural convection flow within a porous enclosed cavity. The current model does include an advection term as well as Brinkman terms to incorporate non-Darcy effects (refer to Vafai and Tien (1981)). Under these assumptions, the governing equations for steady laminar natural convection flow in a porous quadrantal cavity for conservation of mass, momentum and energy may be written with the dimensionless variables and numbers given below.

Boundary Conditions

For the enclosure of Figure 1(a), the no-slip condition is imposed on all walls. The curved and vertical walls of the quadrantal enclosure are maintained at the cold temperature, and the bottom wall is imposed with a non-uniformly varying temperature distribution. The dimensional form of the non-uniform temperature distribution on the heated wall follows the sinusoidal profile of Sarris et al. (2002),

T(x) = T_c* + ΔT* sin(πx/L),

where ΔT* is the temperature difference between the maximum and minimum temperatures of the heated wall, T_c* is the temperature at the cold walls, and L is the length of the cavity. Using the scale parameters given by Dalal and Das (2005), the dimensionless boundary conditions can be written as:

on the heated bottom wall (along AB): θ = sin(πX), U = V = 0, for 0 ≤ X ≤ 1, Y = 0;

on the vertical wall (along AC) and on the curved wall: θ = 0, U = V = 0, with 0 ≤ X ≤ 1 and 0 ≤ Y ≤ 1.

Governing Equations

The momentum transport in the porous medium is based on a generalized non-Darcy model, as already discussed. The velocity-squared (Forchheimer) term, which models the inertia effect, is neglected in the present case, as this work deals with natural convection flow within a porous enclosed cavity.
The major assumptions made in analyzing the present problem are as follows: (i) the fluid confined within the porous bed is Newtonian and the flow is steady, laminar and incompressible; (ii) the effect of viscous dissipation is neglected; (iii) the physical properties, except the density in the body-force term, are considered constant, and the variation of density with temperature in the body-force term follows the Boussinesq approximation; (iv) radiation heat transfer is neglected; (v) the Brinkman-extended Darcy model (with the Forchheimer inertia term neglected, consistent with the model description above) is used to simulate the momentum transfer in the porous medium; (vi) the temperature of the fluid phase is equal to the temperature of the solid phase everywhere, so local thermodynamic equilibrium is applicable. Under these assumptions, the governing equations for steady two-dimensional natural convection flow in the porous quadrantal cavity for conservation of mass, momentum and energy may be written in dimensionless form as:

∂U/∂X + ∂V/∂Y = 0

U ∂U/∂X + V ∂U/∂Y = -∂P/∂X + Pr (∂²U/∂X² + ∂²U/∂Y²) - (Pr/Da) U

U ∂V/∂X + V ∂V/∂Y = -∂P/∂Y + Pr (∂²V/∂X² + ∂²V/∂Y²) - (Pr/Da) V + Ra Pr θ

U ∂θ/∂X + V ∂θ/∂Y = ∂²θ/∂X² + ∂²θ/∂Y²

In the above equations, the various dimensionless parameters are defined as

X = x/L, Y = y/L, U = uL/α, V = vL/α, P = pL²/(ρα²), θ = (T - T_c)/(T_h - T_c), Pr = ν/α, Da = K/L², Ra = g β (T_h - T_c) L³/(ν α),

where u and v are the dimensional velocities in the x and y directions respectively, L is the length of the enclosure (with H its height), α is the thermal diffusivity, p is the dimensional pressure, ρ is the density of the fluid, g is the gravitational acceleration, β is the volumetric thermal expansion coefficient, ν is the kinematic viscosity, and K is the permeability of the porous medium.

Stream Function

The fluid motion is displayed using the stream function (ψ) obtained from the velocity components U and V, with U = ∂ψ/∂Y and V = -∂ψ/∂X. These relationships yield the single equation

∂²ψ/∂X² + ∂²ψ/∂Y² = ∂U/∂Y - ∂V/∂X.

The no-slip condition is valid at all boundaries and there is no cross flow, hence ψ = 0 is used in the residual equations at the nodes on the boundaries. With the above definition of the stream function, a positive sign of ψ denotes anticlockwise circulation and a negative sign denotes clockwise circulation.

Entropy Generation

According to the local thermodynamic equilibrium of linear transport theory, the dimensionless total local entropy generation for two-dimensional heat and fluid flow in porous applications, in Cartesian coordinates and in explicit form, is written as

S_total = S_θ + S_ψ,

S_θ = (∂θ/∂X)² + (∂θ/∂Y)²,

S_ψ = ξ [ (U² + V²)/Da + 2{(∂U/∂X)² + (∂V/∂Y)²} + (∂U/∂Y + ∂V/∂X)² ].

Note that the viscous dissipation model proposed by Al-Hadhrami et al. (2003) is employed in the expression for S_ψ. It may be noted that the effect of viscous dissipation is neglected in the energy equation, but it is considered for the estimation of S_ψ. Here S_θ and S_ψ are the local entropy generations due to heat transfer and fluid friction, respectively. In the above equation, ξ is the irreversibility distribution ratio, defined following Varol et al. (2009) on the basis of an order-of-magnitude estimation for all the numerical simulations. T₀ is the bulk mean temperature and is taken to be (T_h + T_c)/2.

Local and Average Nusselt Number

The heat transfer coefficient in terms of the local Nusselt number (Nu) is defined by

Nu = -∂θ/∂n,

where n denotes the direction normal to the plane considered.
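For readers implementing such a post-processing step, the following is a minimal illustrative sketch (not taken from the paper) of how the local entropy-generation maps S_θ and S_ψ defined above could be evaluated from discrete solution fields on a uniform grid using central differences. The function name, the numpy-based workflow and the assumption of a uniform grid spacing h are purely for illustration.

```python
import numpy as np

def entropy_maps(theta, U, V, h, Da, xi):
    """Local entropy-generation maps on a uniform 2-D grid.

    theta, U, V : 2-D arrays of dimensionless temperature and velocities
    h           : uniform dimensionless grid spacing (an assumption)
    Da, xi      : Darcy number and irreversibility distribution ratio
    """
    # Central-difference gradients; np.gradient returns (d/d_row, d/d_col),
    # taken here as the Y and X derivatives respectively.
    dT_dY, dT_dX = np.gradient(theta, h)
    dU_dY, dU_dX = np.gradient(U, h)
    dV_dY, dV_dX = np.gradient(V, h)

    # Heat-transfer irreversibility: S_theta = |grad(theta)|^2
    S_theta = dT_dX**2 + dT_dY**2

    # Fluid-friction irreversibility with the Darcy term
    # (Al-Hadhrami-type viscous dissipation model, as in the text)
    S_psi = xi * ((U**2 + V**2) / Da
                  + 2.0 * (dU_dX**2 + dV_dY**2)
                  + (dU_dY + dV_dX)**2)
    return S_theta, S_psi
```

Maps produced this way can then be integrated over the domain to obtain the global entropy generation rate and the average Bejan number discussed below.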
The average Nusselt numbers at the bottom and side (vertical and curved) walls are obtained by integrating the local Nusselt number along the respective walls, e.g. the average for the bottom wall is the integral of Nu over 0 ≤ X ≤ 1, with corresponding arc-length integrals along the other walls.

Bejan Number

An alternative irreversibility distribution parameter is the Bejan number (Be), which is the ratio of the heat transfer irreversibility (HTi) to the global entropy generation rate (refer to Bejan (1996)):

Be = HTi/(HTi + FFi),

where FFi denotes the fluid friction irreversibility. Be >> 0.5 is the limit at which heat transfer irreversibility dominates; Be << 0.5 is the opposite limit, at which irreversibility is dominated by fluid friction effects; and Be ~ 0.5 is the case wherein HTi and FFi are of equal significance.

CODE VALIDATION AND GRID INDEPENDENCY TEST

Due to the lack of suitable results in the literature pertaining to the present porous configuration, the results obtained have been validated against existing results for a quadrantal cavity filled with a water medium (Pr = 6.62) with the bottom wall heated and the vertical wall cold (refer to Aydin and Yesiloz (2011)). Figures 2a,b clearly indicate that the results are in good agreement with the previously published data. Further, since entropy generation results were not available for this type of enclosure, validation was carried out against results obtained for a triangular enclosure (Basak et al. (2012)). Figures 2c,d also indicate good agreement with the prior published data. A grid independency test was also carried out for the average Nusselt number of the bottom wall for Pr = 0.71 at Da = 10^-3 for the present configuration, and the final results are reported for a grid of 46018 elements. In the study, three different mesh sizes (21612, 46018 and 63366 finite elements) were adopted in order to check mesh independence. A detailed grid independence study was performed, and the results for the average Nusselt number of the bottom wall are presented in Table 1; no considerable changes were observed in the average Nusselt number values. Thus, a grid of 46018 finite elements was found to meet the requirements of the grid independency study, since upon further grid refinement the change in the average Nusselt number is almost negligible (less than 1%).

NUMERICAL SOLUTION METHODOLOGY

The governing transport equations together with the boundary conditions, as outlined in the previous sections, are solved numerically using the finite element method. The Galerkin weighted residual method is used to transform the governing equations into a system of integral equations; a detailed description of the methodology can be found in Zienkiewicz and Taylor (1991). Gaussian quadrature (refer to Zienkiewicz and Taylor (1991)) is used to perform the integration.

RESULTS AND DISCUSSION

Numerical simulations were performed to ascertain the fluid flow, heat transfer and entropy generation characteristics in the enclosure for this particular study. We first analyse the streamlines and isotherms, and later discuss entropy generation in the quadrantal enclosure.

Streamlines and Isotherms

Heat transfer and flow distributions are presented via streamlines in Figures 3-5 and isotherms in Figures 6-8. The streamline and isotherm results are depicted for Ra = 1x10^4, 1.7x10^5 and 1x10^6 at Da = 10^-5, 10^-4 and 10^-3. Examining Figures 3(a-c), 4(a,c) and 5(a-c), it is observed that the temperature difference between the hot and cold walls creates density variations within the fluid filling the porous enclosure and gives rise to the buoyancy forces that drive the flow.
The thermophysical parameters in the problem are the Rayleigh number (Ra), the Prandtl number (Pr), and the Darcy number (Da). From Figure 3(a), for a Darcy number of Da = 10^-5, the strength of the vortex is negligibly small at Ra = 10^4 because of the combined influence of high resistance to flow (porous drag) and low buoyancy force. With an increase in Rayleigh number, the buoyancy force increases, as a result of which the strengths of the vortices rotating in both the clockwise and anticlockwise directions are enhanced. For example, the maximum vortex strength at Ra = 10^4 is 0.021 in both the clockwise and anticlockwise directions, and the values increase to 0.35 and -0.35 respectively when Ra is increased to 1.7x10^5 in Figure 3(b). With a further increase to Ra = 10^6 in Figure 3(c), we find that the streamline circulation strength has increased to 2.1. Thus we can say that the circulation strength is low at low permeability of the porous medium. In Figure 4, at the higher Darcy number Da = 10^-4, we observe similar trends of circulation pattern and strength at low Ra = 10^4. When Ra is increased from 1.7x10^5 to 10^6, the streamline pattern does not alter visually, but there is a substantial increase in streamline strength at Ra = 10^6, with ψ_max = 22 for the anticlockwise roll. What we observe for Da = 10^-4 in Figure 4(a-c) is that the character and magnitude of the streamlines are quite similar to those for Da = 10^-5, and only when Ra is increased to 10^6 is there a major increase in the circulation strength of the flow in both the anticlockwise and clockwise directions, showing that convection is predominant at this particular Rayleigh number. With a further increase of the Darcy number from 10^-4 to 10^-3 in Figure 5(a-c), the strength of the stream function increases, and moreover the streamlines comprising the major circulation cell continue to occupy more than 75% of the enclosure. Further, the streamline patterns observed in the enclosure are elliptical in shape, resulting in distinct boundary layers, and as a consequence the energy transport is enhanced. It is also observed in Figure 5(c), for Da = 10^-3 and Ra = 10^6, that the dominant buoyancy forces lead to enhanced circulation, as depicted by the larger magnitudes of the stream function, |ψ|_max > 112 (for the anticlockwise flow). Note that an increase in the magnitude of the stream function (ψ) signifies that the buoyancy forces start to dominate over the viscous forces; an increase in the Darcy number thus indicates an increase in the permeability of the porous matrix and a corresponding increase in fluid transport. Isotherms for the present study are depicted in Figures 6-8(a-c). Higher-value isotherms are located in the mid-region of the enclosure because of the temperature gradient imposed on the bottom wall. The isotherm patterns are asymmetrical because of the non-uniform heating of the enclosure, more so at higher Darcy number than at lower Darcy number. At lower Rayleigh and lower Darcy values, in Figures 6(a-b) and 7(a), the nature of the isotherms clearly indicates that heat is transferred mainly by conduction, owing to the fact that the resistance due to porous drag dominates over the buoyancy force.
An increase in the permeability of the porous matrix implies lower hydrodynamic resistance and consequently stronger convective flow, and this explains the nature of the isotherm patterns in Figures 7(c) and 8(b-c) at higher Rayleigh numbers. Further, the asymmetry is more pronounced for Da = 10^-3 than for Da = 10^-5 because of enhanced convection.

Entropy Generation

Figures 9-14(a-c) depict the entropy generation due to fluid friction and heat transfer irreversibility inside the enclosure for Da = 10^-5 to Da = 10^-3. The entropy generation values obtained for fluid friction at Da = 10^-5 are practically negligible inside the enclosure; the maximum values range between S_ψ,max = 4x10^-6 (for Ra = 10^4) and S_ψ,max = 0.045 (for Ra = 10^6) in Figure 9(a-c). It is also observed that the entropy generation due to heat transfer is negligible in the core region and takes very low values for Ra = 10^4, Da = 10^-5, as observed from Figure 10(a). As the Rayleigh number is increased from Ra = 10^4 to 1.7x10^5 in Fig. 10(b), and further to 10^6 in Fig. 10(c), the entropy generation due to heat transfer decreases slightly or remains practically the same (from S_θ,max = 16.6 to S_θ,max = 15.5) for Da = 1x10^-5, because of the conduction-like situation, despite the significant increase in Rayleigh number. Figures 11 and 12(a-c) depict the entropy generation due to fluid friction and heat transfer irreversibility, respectively, inside the enclosure for Da = 10^-4. The entropy generation due to fluid friction for this case is also practically negligible inside the enclosure, with maximum values ranging between S_ψ,max = 3.2x10^-4 (for Ra = 10^4) in Figure 11(a) and S_ψ,max = 4.7 (for Ra = 10^6) in Figure 11(c). Here too, the dominant source of irreversibility is heat transfer, with S_θ,max = 23 observed at the bottom wall (for Ra = 10^6, Da = 10^-4) in Figure 12(c). Thus the entropy generation values due to heat transfer irreversibility remain practically the same, or change very little, when Da is increased from 10^-5 to 10^-4, while the entropy production due to fluid flow irreversibility remains negligible. Figures 13 and 14(a-c) depict the entropy generation due to fluid friction and heat transfer irreversibility inside the enclosure for Da = 10^-3. It is observed that the dominant source of irreversibility for this case is fluid friction. For lower Ra = 10^4, S_ψ < S_θ, as seen in Figures 13(a) and 14(a) respectively. However, with an increase to Ra = 1.7x10^5, S_ψ,max = 12.12 (located at the right curved cold wall) in Fig. 13(b); the values have increased and approach nearly half of the entropy generation due to heat transfer irreversibility, with S_θ,max = 30 at the bottom wall (for Ra = 1.7x10^5, Da = 10^-3) in Figure 14(b). There is a significant increase in the entropy generation due to fluid friction when the Rayleigh number changes from 1.7x10^5 to 10^6, as seen in Figure 13(c). This is due to a smaller thermal gradient near the side cold wall and also because of higher thermal mixing near the core of the cavity under the non-linear heating. Convection effects are prominent in this case, especially at higher Rayleigh numbers; here S_ψ,max = 563, which is much higher than the corresponding S_θ,max = 148 found in Figure 14(c) for Ra = 10^6.
Nusselt Number

The effect of Ra on the local Nusselt number for the cold wall is demonstrated in Fig. 15, while the local Nusselt number variation with Ra for the hot bottom wall is shown in Figure 16. Overall, at low Rayleigh numbers (e.g. Da = 10^-5, Ra = 10^4) the influence of the Rayleigh number on the local Nusselt number is not significant; the influence becomes stronger as the Rayleigh number increases beyond 10^4. The poor heat transfer performance at low Ra can be attributed to the weaker convection, which is very clear from the panel plots of the local Nusselt number for the different Da values, where all reported values are negative, as observed in Figures 15(a-c). The influence of Ra = 10^6 is prominent only in the panel plot for Da = 10^-3, where the value approaches the positive side at the right-hand part of the enclosure (see the panel plots corresponding to Figure 15(c)). For the analysis of the local Nusselt number of the hot bottom wall we refer to Figure 16(a-c). We observe that the value is ~18 at two points in the panel plots corresponding to Figure 16(c), for Ra = 10^6 and Da = 10^-3. There is a local dip in the local Nusselt number values for the bottom wall, with very low values near the end of the enclosure. The Nusselt number increases slightly again around the corner point and continues to increase along the hot wall, because the temperature difference between the hot wall and the adjacent fluid layer decreases along the wall, consistent with the nature of the temperature profile imposed on the wall. From the Bejan plots in Figure 17(a-c) and the Bejan number values in Table 2, it is quite clear that heat transfer irreversibility dominates at lower Ra, when convection is poor, while fluid friction irreversibility is greater at higher Ra, when convection effects are predominant.

CONCLUSIONS

Natural convection in a quadrantal enclosure filled with porous material has been studied numerically. The main conclusions are: 1) As the Darcy number increases from Da = 10^-5 to Da = 10^-3 and the Rayleigh number increases to 10^6, the fluid flow as well as the thermal energy transport intensify because of enhanced convection; consequently, the entropy generation due to heat transfer (S_θ) and fluid friction (S_ψ) also increases. 2) The entropy generation due to heat transfer (S_θ) is significant in the bottom half of the enclosure near the bottom wall, because of the large temperature gradient, whereas S_ψ is significant in the regions where the velocity gradient is large, especially where the solid wall is in contact with the adjacent circulation cells. 3) The dominant contributor to entropy generation is similar for both Da = 10^-5 and 10^-4, arising from heat transfer irreversibility, whereas the fluid friction irreversibility is practically negligible for both these Darcy numbers. 4) Fluid friction irreversibility dominates over heat transfer irreversibility for Da = 10^-3, Ra = 10^6. 5) Local Nusselt number values are numerically largest for Ra = 10^6, Da = 10^-3, for the cold wall as well as the hot wall. 6) Based on the EGM and Bejan number analyses, it is established that the total entropy production is not significant despite the larger thermal mixing at high Darcy number (Da = 10^-3), which may therefore be recommended for the type of enclosure discussed in the present study.
Evaluation of short-term gastrointestinal motion and its impact on dosimetric parameters in stereotactic body radiation therapy for pancreatic cancer

Highlights
• The short-term GI-tract motion was assessed in eleven pancreatic cancer patients.
• Dose uncertainties were also evaluated with SBRT of 40 Gy in 5 fractions.
• The necessary margin was at least 8 mm to compensate for the organ motion.
• The short-term motion could lead to unexpectedly high doses in parts of the GI-tract.
• The results of this study have important implications for intra-fractional motion.

Introduction

Stereotactic body radiation therapy (SBRT) has emerged as a new strategy for pancreatic cancer, and improved treatment outcomes have been reported in resectable and unresectable cases [1-6]. The advantages of SBRT include the potential to deliver higher doses to the tumor and shorter treatment periods, which allow a transition to chemotherapy or surgery without long delays. One of the challenges in treatment planning is that the tumor is usually located close to the gastrointestinal tract (GI-tract), and excess doses of only a few Gy can result in severe adverse events [7,8]; to guard against this, careful attention must be paid to minimizing the dose to these organs. In addition to these anatomical characteristics, GI-tract motion must also be considered. Intra-fractional motion is short-term motion, on the order of seconds to minutes, during radiotherapy [9-11], mainly caused by respiration or organ peristalsis [12]. Because the planning CT is scanned at a single time point, a robust treatment plan employing sufficiently large margins is required to account for small intra-fractional motions. With recent technological developments in respiratory motion management, dose uncertainties from respiration should be negligible during beam delivery when appropriate measures, such as respiratory gating, are employed [13]. However, the influence of organ peristalsis on the dose distribution has not yet been thoroughly studied, mainly because it is a continuous and irregular motion [14,15]. To avoid under- and overdosage, short-term GI-tract motion should be considered in SBRT planning for pancreatic cancer. In this study, we evaluated this motion using multiple CT images scanned in patients treated with SBRT or proton beam therapy (PBT). We also quantified the dosimetric impact of short-term motion in a simulated SBRT plan. The aim of this study is to investigate the short-term motion and its impact on the dose to which the GI-tract is exposed.

Patients

This study was approved by our institutional ethics review committee (IRB number: 21-0074). Eligibility criteria were as follows: SBRT for pancreatic cancer without lymph node or distant metastases, and at least three different CT image sets available from the planning CT scan. To study a larger number of cases, we also included patients who had received PBT. Of the patients treated with SBRT or PBT between January 2015 and October 2022, eleven patients with pancreatic cancer were finally included in the study (Supplementary material A). All patients received respiratory-gated SBRT or respiratory-gated spot-scanning proton therapy using a fiducial marker placed transarterially near the tumor [16,17]. Because this study included a mix of pancreatic cancer patients treated with SBRT or PBT, simulated SBRT plans were generated for all patients to reduce errors in the treatment planning.
CT data acquisition

The general protocol for planning CT image acquisition was the same for PBT and SBRT. The CT scans had a slice thickness of 2-2.5 mm and were obtained using a vacuum cushion after six or more hours of fasting. Four-dimensional computed tomography (4DCT) imaging was performed using the Real-time Position Management (RPM) system (Varian Medical Systems, Palo Alto, CA) to determine the patient's respiratory phase accurately. The obtained 4DCT images were reconstructed into ten sets of three-dimensional CT (3DCT) images based on ten respiratory phases (0%, 10%, ..., 90%) using the RPM system. In the 3DCT images, the 0% images corresponded to spontaneous inspiration and the 50% images to spontaneous expiration. A non-contrast planning CT image (CT_P) and, where possible, a contrast-enhanced CT (CT_CE) were acquired. Since respiratory gating was performed at spontaneous expiration, the CT_P and CT_CE were also obtained at this point in the respiratory phase. The obtained CT_P and CT_CE were checked to confirm whether they were appropriately scanned at spontaneous expiration with reference to the 4DCT 50% phase (CT_4D), which corresponds to a CT image at spontaneous expiration. The CT_P or CT_CE was judged to be properly imaged at spontaneous expiration by a board-certified radiation oncologist if the fiducial marker location was within 4 mm in the craniocaudal direction after bone-based rigid image registration with CT_4D. The median interval from the first to the last CT scan of the planning CT session was 736 s (interquartile range, IQR: 624-986).

Target and organ contouring

Contouring was performed based on our institutional protocol. For patients treated with PBT, additional contours were generated for this study. The gross tumor volume (GTV) was defined as the tumor identified on the available images, including non-enhanced CT, enhanced CT, and/or positron emission tomography (PET) images. The clinical target volume (CTV) was generated as the GTV plus the tumor-vessel interface (TVI), including the major vessels within 5 mm of the GTV [18]. The planning target volume (PTV) was defined as the CTV with a margin of 5 mm, accounting for the set-up margin and a gating window of 2 mm. The planning at risk volume (PRV) of the GI-tract (GI-tract_PRV) was defined as the area encompassing the stomach, duodenum, small intestine, or large intestine plus a 5 mm margin. The PTV_eval is defined here as the PTV minus the area overlapping the GI-tract_PRV. The prescribed dose was 40 Gy in 5 fractions to 90% of the volume of PTV_eval, with the 80-90% isodose lines surrounding the dose volume. In cases where the dose to PTV_eval was difficult to establish accurately (e.g. where the GTV was surrounded by the GI-tract), an acceptable dose was targeted in the plan (Table 1). As our institutional dose constraint, the volume receiving at least 33 Gy (V_33) < 0.5 cm³ was used for each PRV (Stomach_PRV, Duodenum_PRV, Small intestine_PRV, and Large intestine_PRV), chosen to ensure safety relative to the constraints of Oar et al. reported elsewhere [18]. Other target goals and major dose constraints are described in Table 1.

Simulation SBRT planning

Simulation treatment plans (PLAN_sim) were generated for all the study patients on the CT_P with the Auto-Planning module of Pinnacle 3 version 14.0 (Philips, Amsterdam, Netherlands). A previous study reported that treatment planning using the Auto-Planning module was clinically acceptable at a very high quality level in SBRT for pancreatic cancer [19].
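To make the dose-volume criteria concrete, the following is a minimal sketch (our illustration, not part of the study's planning software) of how V_X and D_X-type parameters could be computed from a dose grid and a boolean organ mask. The array names and the assumption of a known uniform voxel volume are hypothetical.

```python
import numpy as np

def v_x(dose_gy, mask, x_gy, voxel_volume_cm3):
    """Absolute volume (cm^3) of `mask` receiving at least `x_gy` Gy (V_X)."""
    return np.count_nonzero((dose_gy >= x_gy) & mask) * voxel_volume_cm3

def d_x(dose_gy, mask, x_cm3, voxel_volume_cm3):
    """Minimum dose (Gy) received by the hottest `x_cm3` of `mask` (D_X)."""
    n = max(1, int(round(x_cm3 / voxel_volume_cm3)))
    hottest = np.sort(dose_gy[mask])[::-1]   # organ doses, descending
    return hottest[min(n, hottest.size) - 1]

# Example: check the institutional constraint V_33 < 0.5 cm^3 for a PRV
# (dose, duodenum_prv and voxel_volume are hypothetical inputs):
# constraint_met = v_x(dose, duodenum_prv, 33.0, voxel_volume) < 0.5
```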
Simulation SBRT plans were generated by volumetric modulated arc therapy (VMAT) with two full arcs, using 6 megavoltage (MV) flattening filter-free (FFF) beams on a TrueBeam (Varian Medical Systems) (SBRT-VMAT). The gantry spacing was 2 degrees, and a dose grid of 2 mm was applied for the dose calculation. These planning parameters were the same as those used in previous studies for primary liver tumors [20]. To reduce the errors involved in simulation planning, the plan quality was checked by Y.U and T.K. The above simulation planning workflow resembled the actual SBRT in that the doses were delivered by 6 MV-FFF beams using a TrueBeam, implementing a SyncTraX FX4 (Shimadzu, Kyoto, Japan) for fiducial marker-based respiratory gating. (Abbreviations for Table 1: Vx, volume receiving a minimum of X Gy; Dx%, minimum dose received by X% of the target volume; GTV, gross tumor volume; PRV, planning at risk volume; PTV, planning target volume; SBRT, stereotactic body radiation therapy.)

Study workflow

Details of the study workflow are shown in Fig. 1. First, the CT_CE and CT_4D were rigidly registered to the CT_P with respect to the vertebral bone, and translation-only registration was then performed based on the fiducial marker position (Fig. 1A-a,b). This alignment method is similar to the actual patient positioning procedure and to previous inter-fractional motion studies [14]. After the rigid image registration, the GI-tract contour in CT_P was deformably transferred to the CT_CE or CT_4D. We define the regions of interest (ROI) in CT_P, CT_CE and CT_4D as ROI_P, ROI_CE and ROI_4D, respectively (Fig. 1A-c). The deformed ROIs were reviewed, most of them were manually modified, and they were then transferred back to the CT_P. Dosimetric parameters were analyzed using the planned dose distribution on the CT_P for each ROI (Fig. 1A-d). Study contouring was conducted for the entire stomach and duodenum according to RTOG recommendations [21]. The small and large intestines were contoured from the diaphragm to the lowest axial slice of the CTV plus a 20 mm margin. Since it was difficult to distinguish the small intestine from the large intestine in one patient, the entire intestine other than the stomach and duodenum was contoured as the small intestine in this case. At least two radiation oncologists (Y.U and Y.F) reviewed these contours to minimize errors among physicians. The above procedure was performed using MIM Maestro ver. 7.0 (MIM Software, Cleveland, OH, USA).

Data analysis

The locational difference among the three ROIs indicates the short-term motion (Fig. 1B-a). The union of the ROIs was generated as ROI_all in order to evaluate the minimum margin, relative to ROI_P, needed to compensate for the motion uncertainty (Fig. 1B-b). The margin needed to encompass ROI_all was then determined from ROI_P in 2 mm increments (Fig. 1B-c). To avoid incorrect evaluations, the boundary of the GI-tract was made the same for each patient, and it was verified that this uncertainty did not influence the results (Supplementary material B). As standard distance metrics, the Dice index and mean distance to agreement were also evaluated for each GI-tract organ (Supplementary material C). As a more practical measure, the change over time in the shortest distance between the PTV and the GI-tract was measured. In this evaluation, the stomach and duodenum were evaluated as one ROI, the stomach-duodenum. All CT images used in the analysis were acquired at spontaneous expiration, and the effects of respiration were assumed to be negligible.
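As an illustration of the margin evaluation just described, the sketch below (ours, with hypothetical names; isotropic voxels are assumed for simplicity) expands ROI_P in 2 mm increments using a Euclidean-ball structuring element until ROI_all is fully covered, and returns the minimum compensating margin.

```python
import numpy as np
from scipy import ndimage

def minimum_margin_mm(roi_p, roi_all, voxel_mm, step_mm=2.0, max_mm=40.0):
    """Smallest isotropic expansion of roi_p (boolean 3-D mask), in steps
    of step_mm, that fully covers roi_all; returns None beyond max_mm."""
    margin = 0.0
    expanded = roi_p
    while margin <= max_mm:
        if not np.any(roi_all & ~expanded):   # roi_all fully covered
            return margin
        margin += step_mm
        r = int(np.ceil(margin / voxel_mm))
        # Euclidean ball of radius `margin` (in mm) as structuring element
        zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
        ball = (zz**2 + yy**2 + xx**2) * voxel_mm**2 <= margin**2
        expanded = ndimage.binary_dilation(roi_p, structure=ball)
    return None
```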
The paired differences of the dosimetric parameters were analyzed with paired Wilcoxon signed-rank tests, with p < 0.05 considered significant. The statistical analysis was performed with JMP Pro version 16 (SAS, Cary, NC). (Figure 1 caption, continued: The sum of all contours is defined as ROI_all. (c) To evaluate the margin needed to compensate for ROI_all, margins were determined every 2 mm from ROI_P around the entire circumference; in this example, the minimum margin is calculated to be 6 mm. ROI: region of interest.)

Contour analysis

An example of the short-term GI-tract motion remaining after fiducial-based matching is shown in Fig. 2; this motion took place within the 986 s between the first and last CT scans. The shortest distances between the PTV and each GI-tract organ changed over time (Supplementary material D). Relative to the shortest distance in CT_P, the median maximum change of the distance over time was 0 mm (IQR: 0-0) in the stomach-duodenum, 0 mm (-1.2 to 1.1) in the small intestine, and 0 mm (-0.9 to 2.1) in the large intestine. The minimum margin to compensate for the motion uncertainties of the GI-tract was calculated so as to include areas distant from the tumor. To compensate for this GI-tract motion, the median necessary margin was 10 mm (Fig. 3).

Dosimetric analysis

The short-term motion increased V_33 in some of the worst cases, causing deviation from the dose constraint of V_33 < 0.5 cm³ (Table 2). Because the large intestine in one patient (patient No. 3) was difficult to identify, the entire intestine other than the stomach and duodenum was contoured as the small intestine. Compared with the median V_33 (IQR) of PLAN_sim, that of the worst case (the highest value among the three CT data sets) deviated from the constraint in three cases in the duodenum and two in the other organs (Table 3). The absolute increase of the median V_33 was 0.02 cm³.

Discussion

In this study, we report short-term GI-tract motion other than respiration, using CT images from patients treated with SBRT or PBT for pancreatic cancer. We also generated SBRT simulation plans for all cases and evaluated the impact of the motion on the dosimetric parameters. There have been several reports on inter-fractional changes [14,15]; however, to our knowledge this is the first study to report on short-term motion observed within a median of 783 s from CT images. We have also evaluated the impact of the short-term motion on the dosimetric parameters. The results suggest that overdosage could occur due to short-term GI-tract motion. We quantified the intestinal motion during the planning CT scan and showed that this motion cannot be ignored. Lens et al. analyzed intratumoral fiducial variations with breath-holding CT in a single fraction. From 12 pancreatic cancer patients who had undergone SBRT, they reported that the mean variation in a single fraction was -0.2 (standard deviation, SD: 1.7) mm and -0.5 (SD: 0.8) mm in the inferior-superior and anterior-posterior directions, respectively [22]. Grimbergen et al. analyzed patients who underwent MR-guided SBRT and reported that the mean maximum baseline drift of the tumor during SBRT was 1.2 mm in the craniocaudal direction, excluding respiratory motion [23]. These studies analyzed tumor motion other than respiration during treatment, but the number of studies on motion of the GI-tract is limited.
Mostafaei et al. studied peristaltic motion during radiotherapy using an MR-linac and reported that these motions were irregular, persistent, and comparable in magnitude to the respiratory motion [12]. They suggested that peristalsis should also be considered together with respiratory motion. From the present study, we also found that a median margin of 8.0-14.0 mm was necessary to compensate for the short-term uncertainties of each of the GI-tract organs. Our evaluations concerned motion within a single day at the treatment planning CT scan, but such short-term motion can be suggestive of intra-fractional motion during radiotherapy. In the worst-case scenarios, the V_33 of each PRV in the GI-tract was higher than in the simulation SBRT plan (PLAN_sim), and the dose constraints were violated in two or three cases (Table 3). Similar results have been reported in studies on inter-fractional [14,15] and intra-fractional motion [24] in pancreatic SBRT. Loi et al. studied inter-fractional motion in 35 pancreatic cancer patients treated with SBRT [14]. They reported a median increase of 1.0 cm³ (IQR: 0.2-2.6) in the V_35 of the gastrointestinal tract (stomach, duodenum, and intestine) over that of the treatment plan. Niedzielski et al. conducted a similar study using daily CT-on-rails image guidance immediately before treatment, and reported that the dose constraints of V_35 in the duodenum or small intestine were violated in three out of eleven patients [15]. Alam et al. studied inter-fractional and intra-fractional motion in patients with pancreatic cancer treated by MR-guided SBRT [24]. (Notes to Tables 2 and 3: the worst case is the CT data set with the highest V_33 among the three data sets (CT_P, CT_CE, CT_4D). D_X denotes the maximum dose delivered to a volume of X cm³ in each organ. *PRV: generated by adding a 5 mm margin to each GI-tract organ (stomach, duodenum, small intestine, or large intestine). **The large intestine was difficult to identify in one case. IQR: interquartile range; PRV: planning at risk volume.) They reported an increased accumulated dose in the stomach-duodenum and small bowel due to intra-fractional motion, which caused deviations from institutional dose constraints in three out of five patients. Of course, it should be noted that the dosimetric analysis in the present study was based on CT images at specific points in time. Because the GI-tract is continuously moving and deforming, a cumulative dose that accounts for organ changes over time would be closer to reality. In SBRT for pancreatic cancer, a highly optimized treatment plan is generated to achieve an adequate dose to the target while maintaining strict dose constraints for the GI-tract. Several studies have reported an increased incidence of adverse events when high doses are delivered to part of the GI-tract. From a dose-escalation study of SBRT, Courtney et al. reported that a higher dose to the GI-tract was associated with late gastrointestinal hemorrhage [25]. They also noted increased duodenal dose-volume parameters (V_35, V_40, or V_45) in the patient group with higher-dose prescriptions.
Moreover, Kopek et al. found that the maximum dose to 1 cm³ (D_1) of the duodenum was important for predicting late duodenal complications in cholangiocarcinoma treated with SBRT [26]. Our study did not show a significant increase in the D_1 of the duodenum_PRV in the worst case (p = 0.056), but there was a median increase of 1.7 Gy (5.5%). Given these findings, even slight uncertainties in the GI-tract location, especially in the proximity of the GI-tract, can lead to serious adverse events. Online adaptive therapy has come into clinical use to cope with the daily anatomical changes of patients, but some articles have reported that it takes several tens of minutes from the initial CT acquisition to beam-on [27,28]. Because our study observed short-term GI-tract motion over a median period of 736 s, such motion should also be considered in an online adaptive strategy. The limitations of our study include the following. Because peristaltic motion of the GI-tract is continuous, the analyzed CT images alone may not adequately reflect the organ motion. Moreover, the short-term GI-tract motion may vary among the days on which the images were obtained. To address these issues, obtaining multiple CT images over multiple days would be ideal, but would be impractical given the burden on patients and medical staff. Another limitation is that the dose distribution was not recalculated on the different CT images. One reason is that the CT image data used in this study were obtained in the same body position, at spontaneous expiration, and at intervals of several minutes; we therefore assumed that variations in the dose distribution due to anatomical changes were negligible across the different CT images. Another reason is that the differences in CT scan conditions (enhanced, non-enhanced, or 4DCT) may lead to slight uncertainties in the dose distribution, as also suggested in several reports [29,30]. Future studies require an increased number of patients, and the CT images should be taken under the same conditions. In conclusion, short-term motion of the GI-tract was observed, and these uncertainties may lead to unexpectedly high dose exposure in parts of the GI-tract. To reduce adverse events of the GI-tract, it is necessary to quantify these motions and reflect them appropriately in the SBRT plan.

Declaration of Competing Interest

K.K. is an employee of the research institute of Hitachi, Ltd., currently working at Hokkaido University under a secondary agreement. K.K. declares that this research has no relationship with Hitachi, Ltd. All other authors declare that they have no conflicts of interest.
Addressing the Path-Length-Dependency Confound in White Matter Tract Segmentation

We derive the Iterative Confidence Enhancement of Tractography (ICE-T) framework to address the problem of path-length dependency (PLD), the streamline dispersivity confound inherent to probabilistic tractography methods. We show that PLD can arise as a non-linear effect, compounded by tissue complexity, and therefore cannot be handled using linear correction methods. ICE-T is an easy-to-implement framework that acts as a wrapper around most probabilistic streamline tractography methods, iteratively growing the tractography seed regions. Tract networks segmented with ICE-T can subsequently be delineated with a global threshold, even from a single-voxel seed. We investigated ICE-T performance using ex vivo pig-brain datasets in which true positives were known via in vivo tracers, and applied the derived ICE-T parameters to a human in vivo dataset. We examined the parameter space of ICE-T: the number of streamlines emitted per voxel, and a threshold applied at each iteration. As few as 20 streamlines per seed-voxel, and a robust range of ICE-T thresholds, were shown to sufficiently segment the desired tract network. Outside this range, the tract network either approximated the complete white-matter compartment (too low a threshold) or failed to propagate through complex regions (too high a threshold). The parameters were shown to be generalizable across seed regions. With ICE-T, the degree of both near-seed flare due to false positives and of distal false negatives is decreased when compared with thresholded probabilistic tractography without ICE-T. Since ICE-T only addresses PLD, the degree of remaining false positives and false negatives will consequently be mainly attributable to the particular tractography method employed. Given the benefits offered by ICE-T, we would suggest that future studies consider this or a similar approach when using tractography to provide tract segmentations for tract-based analysis, or for brain network analysis.

Introduction

Diffusion weighted imaging (DWI) provides a novel and unique method with which to study white-matter microstructure within the brain (for an overview see [1] and [2]). In particular, processing of DWI data can produce estimates of white matter fibre directions ([3], [4], [5], [6]), from which voxelwise uncertainty orientation distribution functions (uODFs) can be generated. Probabilistic streamlining methods thereafter permit generation of connection-confidence maps, e.g. Probabilistic Index of Connectivity (PICo) maps [9], by counting the relative propagation success of streamlines from a given seed region ([7], [8] and [9]). Such PICo maps can then be used to inform brain connectivity ([10], [11], [12]). However, several well-documented confounds and limitations make it challenging to draw inferences from such connection-confidence maps. These include confounds such as Path-Length Dependency (PLD) and the Partial Volume Effect (PVE), modelling limitations in regions of crossing fibres, and data limitations due to an image resolution that is far coarser than the axons themselves. These issues hamper the inference of any robust form of true anatomical connectivity (in terms of parallel associated axons) from these connection-confidence maps ([13], [14]). Furthermore, the application of tract-specific statistics, and therefore also many forms of network analysis (e.g.
[15]), relies upon the correct segmentation of tracts from PICo-type maps, usually by the application of a priori information in the form of waypoints, followed by a global threshold. For the reasons outlined above, this often proves to be problematic, and a single global threshold is generally insufficient. The PLD confound is one of the key obstacles to the application of global thresholding, and its inherent presence in all probabilistic streamline tractography methods ([9], [13]) imposes a great challenge to the segmentation of any tract system from a relevant seed region. The origin of the PLD effect lies in the mechanics of the probabilistic streamline tractography method, and is simply due to the step-wise dispersion of the propagating streamlines along the length of a tract; hence image resolution is also a contributory factor. PLD is an inherent side-effect of the probabilistic approach and is manifested as a monotonic, non-linear down-modulation of the calculated probability as a function of the propagation distance from the seed point ([9], [13]). Hence the resultant probability values produced per voxel represent only the chance that an average streamline could propagate thereto from the seed region, and cannot represent any anatomical connection strength of the tract (see [13] for a discussion). The consequential decline in the number of streamlines that manage to successfully propagate to distal portions of a tract means that those that do are too few to sufficiently sample the uODFs therein. The PLD effect is a well-known phenomenon and was reported in early human tractography studies [9]. The problems related to PLD have since been demonstrated in a validation study [16], in which long WM tracts were observed to terminate arbitrarily depending upon the threshold chosen. The consequences of PLD can therefore be significant modulation of tractographic results, thereby also impacting upon subsequent diagnostic interpretation. Nevertheless, PLD has so far received little attention. Whilst few studies have looked into moderating PLD itself, some attempts have been made to compensate for its effects. Normalisation by a distance-correction factor has been suggested ([17], [18]), but due to tract-specific differences (different routes, different numbers of dispersions and anatomical complexities encountered) this cannot guarantee the removal of all PLD effects. To delineate specific longer tracts, many studies use constrained tractography (using a priori information such as waypoints) to enable a global threshold. Although this does not solve the PLD issue, the extra a priori information can help to delineate specific tracts. This approach was used together with additional heuristics by Sherbondy et al. [19] to counter the resistance of specific tracts to conventional thresholding due to the effect of PLD. It must be noted that such heuristics, together with the more general and commonly-used approach of applying waypoints, exclusion masks and termination regions, do not provide PICo map outputs, nor attempt to remove PLD itself, but can be useful for delineating specific tracts of interest. Here, we visualise the PLD effect and its non-linear behaviour. We propose a heuristic method, based upon probabilistic tractography, to segment out a given tract system emanating from a seed region with minimal influence from PLD.
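The compounding nature of PLD can be illustrated with a simple toy model (ours, not drawn from the cited studies): if a propagating streamline remains on the true tract at each step with some probability, the expected connection confidence after n steps is the product of the per-step probabilities, so it decays multiplicatively with distance and drops abruptly wherever fibre complexity lowers the per-step probability, behaviour that no single linear distance-correction factor can undo.

```python
import numpy as np

# Toy illustration of path-length dependency (PLD): per-step survival
# probabilities multiply, so connection confidence decays non-linearly.
n_steps = 60
p_step = np.full(n_steps, 0.97)    # nominal per-step survival (assumed)
p_step[25:30] = 0.80               # an assumed region of crossing fibres

confidence = np.cumprod(p_step)    # expected PICo-like value vs. distance
linear_fix = confidence * np.arange(1, n_steps + 1)   # linear 'correction'

# `confidence` keeps compounding downwards and drops sharply across the
# complex region; the linearly 'corrected' profile is still far from flat,
# showing why a linear factor cannot remove PLD.
```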
Based upon a region-growing approach, Iterative Confidence Enhancement for Tractography (ICE-T) is an easy-to-implement framework that is applicable as a wrapper function to most probabilistic streamline tractography methods. This work introduces ICE-T and its modifiable parameters (termed ICE-T_threshold and ICE-T_streams), demonstrates their generalizability, and investigates the method's parametric behaviour. We confirm both the applicability of our ICE-T method against the results derived by [16], via the use of the same ex vivo pig brain dataset that uniquely combined tractography with invasive tracer studies, and also its reproducibility. Additionally, we demonstrate that, due to the non-linear nature of PLD, linear compensation methods may not be sufficient. Finally, we demonstrate the application of the ICE-T method to a human in vivo dataset.

Theory

We define PLD as the drop in connection confidence along a tract due to a combination of effects caused by the stepwise dispersion of streamlines, which is in turn due to the stepwise sampling of the uODFs and the compounding effect of anatomical complexities. ICE-T aims to significantly reduce PLD in existing probabilistic streamline tractography methods. The underlying principle of ICE-T is to ensure that the uODF of each voxel that is determined as being connected to the seed is sufficiently sampled by a suitably large number of streamlines. To achieve this we apply ICE-T to conventional tractography in order to grow the given seed region iteratively along its connections. This can be viewed as a tract segmentation step via region-growing, using a predicate of voxelwise connectivity determined by probabilistic tractography. The resultant grown seed region thereby represents the segmented tract system emanating from the given seed. Thereafter one can either use the region for tract-based analysis, or use it as a seed region for tractography. The outcome of the latter will not be a PICo map, but instead a map describing how well connected each voxel of the segmented tract is to all the other voxels within the tract. This can therefore be considered a tract-based ACM [20], to which a global threshold can then be directly applied without the need to compensate for bias introduced by PLD. Below, we describe the mechanics of the ICE-T framework.

ICE-T Framework

A tractography pipeline generally includes the following steps: voxelwise fibre reconstruction and generation of uODFs, followed by the streamline tracking process. ICE-T is a simple modification of this, introducing a feedback loop around the streamline tracking process, and is described as pseudo-code in Table 1. ICE-T utilises the same parameters as tractography, with two exceptions: the number of streamlines generated per voxel is modified (ICE-T_streams), and a threshold is applied to the end-of-iteration connection-confidence map (ICE-T_threshold). The ICE-T framework consists of iteratively growing the seed region-of-interest (ROI), ICE-T_ROI_i (where 'i' indicates the iteration count), along the tract branches it encounters, as outlined in Table 1. At each step, a PICo map [9] is generated and then thresholded at ICE-T_threshold to produce ICE-T_ROI_i+1. This ROI, if it has increased in size, is then fed back to the tracking step, where it is used as the updated seed region for the next iteration.
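The feedback loop of Table 1 can be summarised in the following sketch; `run_tractography` is a hypothetical placeholder for any probabilistic streamline tractography call returning a PICo map, and the streamline caching described below is omitted for clarity.

```python
import numpy as np

def ice_t(seed_roi, run_tractography, ice_t_streams, ice_t_threshold,
          max_iterations=50):
    """Iteratively grow `seed_roi` along its connections (ICE-T wrapper).

    run_tractography(seed, n_streams, waypoint) -> PICo map; a stand-in
    for e.g. a Camino-based tracking step.
    """
    rois = [seed_roi.astype(bool)]              # ICE-T_ROI_0
    for _ in range(max_iterations):
        # Waypoint for iteration i is ICE-T_ROI_(i-2), i.e. the seed
        # region used two versions back (the original seed initially).
        waypoint = rois[-2] if len(rois) >= 2 else rois[0]
        pico = run_tractography(seed=rois[-1], n_streams=ice_t_streams,
                                waypoint=waypoint)
        grown = rois[-1] | (pico >= ice_t_threshold)
        if grown.sum() == rois[-1].sum():       # no new voxels: converged
            break
        rois.append(grown)
    return rois[-1]                             # ICE-T_ROI_I
```

A subsequent PICo run seeded from the returned region then yields the intra-tract confidence map to which a single global threshold can be applied.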
By employing the connection-confidence value of a voxel as the predicate, a connectivity constraint is automatically imposed upon ICE-T_ROI_i, and all its voxels are thereby classified as being highly connected to one another and hence also to the original seed region. In addition, the connectivity constraint is enhanced at each iteration by the application of a streamline waypoint region through which all streamlines are required to pass. The waypoint region for iteration 'i' is defined as ICE-T_ROI_i-2 (equivalently, the seed region for iteration 'i-1'). This guarantees that the original seed region will be included in the final segmented tract. For efficiency, the streamline computations are stored between iterations, meaning that at each iteration only streamlines from the newly-included voxels need to be generated. The final grown seed region ICE-T_ROI_I (where 'I' represents the total number of iterations performed), representing the segmented tract system from the original seed, can then be used for tract-based analysis. The resultant ICE-T_ROI_I is not a PICo map; instead each voxel's value represents an index of connection confidence with every other voxel within the tract, and is hereby defined as the Intra-Tract Confidence.

Ethics Statement

Animal data. All procedures followed the 'Guidelines for the Care and Use of Experimental Animals' and were approved by the Danish Animal Experiments Inspectorate.

Human data. The participant signed an informed consent following the guidelines of the Declaration of Helsinki. The study protocol (KF 01-131/03) was approved by the local ethics committee ('De Videnskabsetiske Komiteer for Københavns og Frederiksberg Kommuner').

Data Acquisition and Pre-Processing

i) Ex vivo pig brain. The data, including ground-truth seed regions defined by tracer injection, from three young and normal Göttingen minipig brains (P1, P2 and P3), as used in [16], were re-used here to permit comparison against the validated results reported therein. MRI data were acquired ex vivo on a 4.7 T Varian MR scanner using a pulsed gradient spin echo (PGSE) sequence with single-line read-out and the following parameters: TR = 6500 ms, TE = 67.1 ms; matrix = 128 x 128, in-plane resolution = 0.51 x 0.51 mm². The diffusion sensitisation gradient duration was δ = 27 ms, the time between gradient-pulse onsets Δ = 33.5 ms, and the gradient strength 56 mT/m. A slice thickness of 0.5 mm, a gap of 0.5 mm and two sets of 35 interleaved slices ensured whole-brain coverage. NEX = 2. The pig brain datasets consisted of 36 b = 0 s/mm² images and one b-value of 4009 s/mm² (chosen as specified in [21]), acquired in 61 non-collinear directions as available in Camino [22]. Before MR scanning the tissue was temperature-stabilised to room temperature, and a dummy run lasting 15 hours ensured that no short-term instabilities were introduced into the final diffusion MRI dataset [21]. The application of a spin-echo diffusion sequence, and the absence of both physiological and subject-generated motion, minimised the distortions in the ex vivo data. Visual inspection confirmed that no additional processing of the ex vivo data was required prior to tractography [21]. To limit subsequent analysis to brain tissue only, a brain mask was generated via summation of all diffusion images followed by application of a suitable threshold.
For tractography, we used the same hand-drawn seed regions as defined in [16], based upon the following tracer injection sites: right prefrontal cortex (PFC), right somatosensory cortex (SC) and the left motor cortex (MC). The datasets, including the seed regions, are freely available at http://dig.drcmr.dk.

ii) In-vivo human brain. A single healthy volunteer (right-handed female, age 21 years), with no history of neurological or psychiatric disorders, or a family history thereof, nor hypertension, was recruited. Intra-volume subject motion and the undesired stretching or shearing caused by eddy-current build-up were simultaneously corrected for by estimating a 12-parameter affine model [24] to coregister the DWI images with the first b = 0 image of the sequence. Field inhomogeneity distortions, causing a geometric displacement of voxel intensities along the phase-encode direction of the images, were addressed by acquisition of a field-map. The field-map correction [25] shipped as part of SPM8 (www.fil.ion.ucl.ac.uk/spm) was then used to estimate the voxel displacement map. The DWI images were then re-sliced into the space of the first b = 0 image via cubic b-spline interpolation within SPM8, with voxel intensities per volume scaled by the Jacobian determinant of the calculated transformation matrix. Finally, the rotational part of the affine model was applied to reorient the gradient directions [26]. A brain mask was generated in the diffusion image space using the grey- and white-matter (GM & WM) segmentations generated by SPM8 from the MPRAGE T1 scan. For tractography, a cubic seed VOI was specified (by ML) using Matlab, defined so as to approximate the left subcortical motor area. The location and size of the VOI were defined following tractography seeded in the brainstem.

Table 1. ICE-T parameters.
ICE-T_threshold: The level of ''connection probability'' (typically scaled between 0 and 1) above which a voxel must rise for it to be considered part of a significant connection, and therefore to be appended to the seed region from which the streamlines were initiated. Applied at each iteration of the feedback loop.
ICE-T_streams: The number of streamlines emitted from each voxel in the seed region during the region-growing steps.
ICE-T_iterations, I: The number of times that the seed region is iteratively grown.
doi:10.1371/journal.pone.0096247.t001

Tractography

i) Probabilistic Tractography. For this work we chose, for comparative purposes, to use the same probabilistic tractography method as that employed in Dyrby et al. [16], i.e. the multi-tensor fibre reconstruction algorithm [3], [8] implemented in the Camino software package [22]. This comprised a voxel classification procedure [27] to allocate the likely number of component fibre directions within each voxel, followed by fitting of a multi-tensor model using a maximum of two fibre directions per voxel. This was then followed by streamline propagation from the centre of every voxel, using the FACT streamlining method [28] (with an inner-product threshold of 0.5, imposing a maximum within-voxel curvature of 60 degrees) to generate PICo maps [9]. The number of streamlines emitted from each voxel within the original seed region was 64,000 [16], whilst 25,000 streamlines were employed for the human in vivo data. These numbers are far greater than those often employed (5,000-10,000) and were chosen so as to ensure that the resultant PICo values could not be attributed to poor sampling of the proximal tract network. However, due to PLD, poor sampling of the distal tract will occur even with this number of streamlines [29]. Streamlining was restricted to brain-only voxels via a brain-mask.
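The quoted 60-degree curvature limit is just the angle whose cosine equals the inner-product threshold, since for unit direction vectors the dot product is the cosine of the angle between them; a quick check:

```python
import math

# dot(u, v) = cos(angle) for unit vectors, so an inner-product
# threshold of 0.5 corresponds to a maximum bend of acos(0.5).
print(math.degrees(math.acos(0.5)))  # 60.0
```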
For comparison, results from tractography were subjected to a linear correction for PLD as described in [18]. Here the PICo values obtained from tracking along a given tract of interest are multiplied by their voxel distance from the seed region.

ii) Probabilistic Tractography With ICE-T. For the ICE-T framework, we employ the same setup and parameters for probabilistic tractography as described above, with the exceptions that the number of streamlines was defined by the ICE-T_streams parameter, and with the addition of the ICE-T_threshold parameter, as described in Table 1. The ICE-T framework was implemented in Matlab (Mathworks Inc.) as a data-flow wrapper, handling file management and with calls to the required tractography functions from the Camino package [22]. An initial experiment was performed to empirically investigate the combined impact of both ICE-T parameters, ICE-T_threshold and ICE-T_streams, using a single pig-brain dataset (P1) and sampling a grid of values of each parameter. To reduce processing time and storage demands when investigating the ICE-T parameter space, a file repository of 500 projection streamlines per voxel for dataset P1 was generated a priori, from which streamline samples could be randomly drawn as required. This experiment permitted the derivation of a single value for the ICE-T_streams parameter, one which could then be employed for all three pig-brain datasets. Subsequently, the impact of the ICE-T_threshold parameter was investigated using this fixed value of ICE-T_streams on all three pig-brain datasets, over a range of ICE-T_threshold values.

Analysis

To assess the degree of PLD along a selected tract, we select a line-of-interest (LOI) along its midline (see below). To delineate the LOI, we determined the most-likely pathway by selecting the canonical streamline, S_canonical, from a collection propagated from the seed region to a waypoint. The canonical streamline, S_canonical, was estimated as that which had the greatest number of points of agreement with all other streamlines. For this we used the first 100 successful streamlines. For analysis and subsequent comparison of connection-confidence values produced with and without ICE-T, S_canonical was then employed as the LOI to extract the tract midline data. In the pig-brain datasets, S_canonical was derived from validated pathways described in [16], using the seed regions and waypoints specified therein. The LOI emanating from the SC region was chosen as an example due to its length and its passage through the complex region comprising the substantia nigra.

ICE-T Parameter Selection

i) ICE-T_streams Parameter. The first experiment investigated the combined impact of both ICE-T parameters. Figure 1 shows that the ICE-T_streams parameter has no global effect upon the size of ICE-T_ROI_I; however, the latter does decrease with increasing ICE-T_threshold. A slight variation in the size of the ICE-T_ROI_I can be observed in the PFC and MC between 5 and 10 streamlines. An outlier is noted at a single point for the PFC at ICE-T_streams 100 and ICE-T_threshold 0.25. For the remaining experiments, we fixed ICE-T_streams at 20 streamlines, as a compromise between having a sufficient number of streamlines whilst minimising computational resources.
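Both analysis ingredients above can be sketched compactly; the linear correction follows [18] directly, while for S_canonical the point-agreement tolerance (`tol`, in voxels) is an assumption of this sketch, since the text only specifies "greatest number of points of agreement" over the first 100 successful streamlines:

```python
import numpy as np

def linear_pld_correction(pico, voxel_distance):
    # Linear compensation after [18]: scale each along-tract PICo value
    # by its voxel distance from the seed region.
    return pico * voxel_distance

def canonical_streamline(streamlines, tol=1.0):
    # S_canonical: the streamline with the most points of agreement with
    # all others; `streamlines` is a list of (N_i, 3) coordinate arrays.
    streamlines = streamlines[:100]          # first 100 successful tracks
    scores = []
    for s in streamlines:
        score = 0
        for t in streamlines:
            if t is s:
                continue
            # Count points of s lying within `tol` of any point of t.
            d = np.linalg.norm(s[:, None, :] - t[None, :, :], axis=2)
            score += int((d.min(axis=1) < tol).sum())
        scores.append(score)
    return streamlines[int(np.argmax(scores))]
```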
ii) ICE-T_threshold Parameter. For any seed region, the ICE-T_threshold parameter significantly impacts the size of the ICE-T_ROI_I, as shown in Figure 2. The size of ICE-T_ROI_I increases linearly as the ICE-T_threshold is reduced, down to around ICE-T_threshold = 0.025. Below this threshold, the increase is greater and non-linear, most likely due to the incorporation of adjacent tract networks into the segmented network, along with a consequential increase in the number of false positives. For very low ICE-T_threshold (approx. <0.001), ICE-T_ROI_i always grows at each iteration, and hence never generates a specific segmented tract network (results not shown). However, as shown in Figure 3 for the MC seed, selecting an ICE-T_threshold above 0.005 results in a stable ICE-T_ROI_I. Figure 3 also shows the general decrease in the number of iterations required to reach stability as ICE-T_threshold is increased. Further iterations provide no additional change, and ICE-T halts at this point. The resultant ICE-T_ROI_I therefore represents a segmented tract system emanating from the seed. Similar results are obtained for SC and PFC (results not shown). In the following experiments the number of ICE-T iterations is automatically limited, determined by the point at which ICE-T_ROI_i and ICE-T_ROI_{i-1} show no size differences; hence the number of iterations is not a parameter in itself.

The spatial growth of the ICE-T_ROI_I from the SC seed region along the canonical streamline of the corticonigral tract, which projects through a complex crossing-fibre region (centrum semiovale), is illustrated in Figure 4. With this seed region, selection of ICE-T_threshold > 0.015 causes the region-growing to halt at this complex region. However, ICE-T_threshold values below this level permit the ICE-T_ROI_I to grow along the entire canonical streamline. Similar ICE-T_threshold values were also found for this seed region in the P2 & P3 datasets (results not shown).

Figure 4. Spatial extent of the ICE-T_ROI_I, sampled along the canonical streamline, from the seed region (defined as Distance = 0), as a function of both distance from the seed region and the value of the ICE-T_threshold parameter. A coloured voxel indicates that the segmented ROI was present at the given threshold and distance from the seed; each threshold level is coloured differently for clarity. Once the ICE-T_threshold parameter falls to 0.015 and below (lower three rows), the region-growing penetrates past the complexity and continues on to extract the distal portion of the tract. The 3D rendering (upper right panel) shows the ICE-T results at the same two thresholds (0.02 in blue and 0.015 in orange). The seed region is located at the site of the green arrow (upper right panel). doi:10.1371/journal.pone.0096247.g004

Figure 5 shows tractography performed using only the seed ROI versus using the ICE-T_ROI_I, thresholded at different levels. In order to extract the entire tract without using ICE-T, i.e. using the seed ROI (left column), thresholding of the obtained results would require application of a very low global threshold (<0.010). Using the seed ROI, the effect of low thresholding produces a near-seed flare, reflecting the high proportion of false positive connections found close to the seed, where the sampling is still sufficient (green arrows, Figure 5). Distally, due to the PLD effect, the false negatives become dominant, as evidenced by the sudden termination in white matter of those tracts that survive the thresholding (red arrows, Figure 5). In contrast, the results after application of ICE-T do not show such behaviour, and instead generate a segmentation of the tract network at each threshold. Notably, because a grown ICE-T_ROI_I is used for tracking, lowering the subsequent global threshold simply broadens the cross-sectional area of the tract system. Near-seed flare effects are not present, and no sudden terminations of tracts in white matter are seen, indicating a reduction in the PLD effect.

Tract Segmentation

These results demonstrate the potential for a common ICE-T_threshold which can be used across different subjects and seed regions, as illustrated in Figure 6 for an ICE-T_threshold of 0.005. Here an extensive tract network is segmented for all 3 seed regions and shows similar, though not identical, structures across all datasets (P1, P2, P3).
Exceptions can be seen in, for example, the projections from the PFC seeds towards the substantia nigra in P1 vs those from P2 and P3 (Figure 6, red arrows). Importantly, however, the cross-sectional sizes of the longer tracts appear consistent along their length, implying an independence of the results from path length. However, deviations from expected results do occur around complex regions. For example, from the MC and SC seeds, false positive contralateral projections towards the internal capsule are observed, and when this is the case, they completely mirror the ipsilateral tract network (green arrows, Figure 6). As could be expected, the incorporation of just a portion of any false positive branch leads to its extraction up to a cortical region.

Tractography

The following compares the tractography results from the original seed regions (SC, PFC and MC) with those derived from a seed defined by the segmented tracts provided by ICE-T with an ICE-T_threshold of 0.005. The tractography PICo values in Figure 7 show a substantial drop in the area of the centrum semiovale, located at a tract distance of 40 voxels from the seed, and subsequently show a continuing decrease towards the projection site, indicating the presence of PLD. It is also apparent how the combined effect of PLD and anatomically-related obstacles imposes a non-linear behaviour upon the PICo values as a function of distance from the seed. Although the linear compensation factor is initially able to correct for distance after the seed region (approximate distance: 25-35 voxels), it is unable to compensate for the non-linear PLD effect introduced in the centrum semiovale, and the values on the distal side remain low. In contrast, although the ICE-T results show variations along the length of the canonical streamline, there is no overall drop in Intra-Tract Confidence values related to such underlying complex regions.

In-vivo human brain

Results from the application of ICE-T to the in vivo human dataset are shown in Figure 8, using parameters within the same range as those determined to be applicable to the ex vivo dataset (ICE-T_threshold = 0.01 and ICE-T_streams = 20). Tractography without ICE-T employed the seed ROI, whereas that with ICE-T used the ICE-T_ROI_I as seed. The ICE-T tracts show, as in the ex vivo data, a uniform cross-sectional size along their length as a function of threshold. In contrast, such features are not seen in the tracts generated from probabilistic tractography without ICE-T, due to the effects of PLD, as was also observed in the ex vivo pig brain.

Figure 8. Both results are generated from a cubic seed (dark green) placed approximately in the left MC region. Tractography without ICE-T used the original cubic seed ROI as the seed (25,000 streamlines, blue, top row). Tractography with ICE-T used the ICE-T_ROI_I as seed (ICE-T_threshold 0.01, ICE-T_streams 20, purple, bottom row), shown here at various rendering thresholds (0.02, 0.01, 0.005, 0.001). The path-length dependency is very pronounced in the tractography results without ICE-T (top row), evidenced by the movement of the end-of-tract point (green arrows) as a function of the applied threshold. Probable false positives are seen in tractography both with and without ICE-T around the descending portion of the contralateral CST (red arrows). These can be addressed in the conventional manner by the introduction of exclusion masks (dark green box and plane) that terminate and remove any streamlines that propagate through them. Here two are shown for both methods (last column): one along the mid-sagittal plane and one in the contralateral CST. The former is to prevent streamlines crossing between the hemispheres at the cortical level dorsal to the corpus callosum due to the high partial volume effect. The latter is to prevent segmentation of a known false-positive branch of the contralateral CST. doi:10.1371/journal.pone.0096247.g008
A possible false positive can be observed in the ICE-T results, seen as a descending tract in the region of the contralateral capsula interna (red arrow, Figure 8). Just as in tractography, we can explicitly remove false positives by using exclusive ROIs. After placing an exclusive ROI (small dark yellow region in Figure 8) contra-laterally, superior to the capsula interna region, the false positive projection can be removed.

Figure 9 illustrates the impact of the seed region specificity and size on the tractography results for the human in vivo data. The global thresholds for these results were chosen so as to match the segmented tracts for their distal propagation into the contralateral ascending portion of the corticospinal tract (CST). The top row shows the results using the same cubic seed region as used in Figure 8: a cubic area centred approximately over the left motor cortex. The lower row shows the results using only a single-voxel seed, chosen from within the cubic seed. Whilst the ICE-T results show minor impact of the choice of seed used, those of the probabilistic tractography without ICE-T demonstrate several differences (Figure 9). Firstly, with the cubic seed, the degree of near-seed flare is substantially greater than that after ICE-T (Figure 9(a) vs. 9(c); note that the green seed region is enveloped by the tract in 9(a) but not in 9(c)). Secondly, the PLD effect is greater when using the single-voxel seed (Figure 9(a) vs. 9(d)). Thirdly, the segmented tracts differ slightly in both shape and extent for the two seed regions (Figure 9(b)). Using the cubic seed produced a lateral cortical branch not seen for the single-voxel seed (Figure 9(a)(b) vs. 9(d)(e), green arrows). In contrast, the single-voxel seed produced an ipsilateral medial branch (Figure 9(d)(e), red arrows) that appears to divert from the CST around the level of the corpus callosum (Figure 9(d)(e), yellow arrows) and instead follow the anterior thalamic radiation. Inferiorly, the descending portion (Figure 9(d)(e), orange arrows) also appears to follow a different route from the CST, before terminating prematurely.

Figure 9. Top row ((a), (b), (c)): the same results as for Figure 8, but from a posterior viewpoint. From this angle it is also clear how the tractography without ICE-T using the cubic seed also generates a lateral cortical branch ((a), (b): green arrow). The inset on (a) shows a lateral view from the right side, highlighting the posteriorly-directed angle of the branch. Bottom row ((d), (e), (f)): tractography results from a single-voxel seed within the left MC, using the same parameters as for the cubic seed. As for the cubic seed, the rendering thresholds have been selected so as to generate comparable propagation of the tractography into the contralateral ascending portion of the CST. In the tractography results without ICE-T ((a), (d)), the ipsilateral descending portion follows a more medial route than the results using ICE-T ((c), (f)), as can be seen on the merged views ((b), (e)). Further inspection of these results indicates that the streamlines diverge from the CST around the level of the ventricles and seem to instead pick up a periventricular route through the medial thalamic nuclei ((d), (e): yellow arrows). The streams then diverge, following a descending route close to the CST ((d), (e): orange arrows), and a medial route along the anterior thalamic radiation ((d), (e): red arrows). The ICE-T results correctly follow the CST from both seed areas. doi:10.1371/journal.pone.0096247.g009

Discussion

We have shown that PLD can arise as a non-linear effect modulated by tissue complexity, and that some of the effects imposed by PLD upon probabilistic tractography are the near-seed flare (false positives) and reduced distal propagation (false negatives), confounds which have been speculated to bias structural connectivity analysis ([13], [30]). We have introduced the ICE-T framework as a generalizable wrapper around existing methods and demonstrated its ability to mitigate the universal PLD confounds in segmented tracts. The results presented herein suggest that using methods such as ICE-T that address the PLD confound will benefit the statistical robustness of a wide range of group statistics, such as structural connectivity analysis and advanced tract shape models.

The PLD confound

At present, the PLD confound is rarely addressed. Indeed, it is common for the threshold level of probabilistic streamline tractography experiments either to be left extremely low (giving 'near-seed flare' effects, as observed in Figure 5), or to be chosen such that it perceptually segments out the tracts of interest. Both approaches are subjective, and therefore preclude meta- or group-analysis. As noted by [19], PLD imposed problems for the delineation of the tracts of interest in their group study, and so to compensate, they found it necessary to employ several extra heuristic constraints.
Linear propagation-distance factors have previously been used in an attempt to correct for the PLD effect [18], [17]. However, we observed how anatomical complexities, e.g. the centrum semiovale, which cause barriers to streamline propagation, compounded the PLD effect in a non-linear manner (Figure 7). We demonstrate how linear compensation techniques cannot correct for the non-linearity of the PLD effect, in contrast to ICE-T. The ICE-T approach has the additional advantage of being independent of the image resolution, unlike the PLD effect, which will increase with the number of propagation steps (and thereby voxels) required to reach the target. As such, the impact of PLD is likely to increase as future studies permit the use of higher-resolution imaging techniques.

Benefits of the Iterative Process

In conventional tractography, the PLD imposes a limit upon how far a streamline is likely to propagate away from a seed. As a consequence, increasing the number of streamlines used cannot improve the propagation performance [29]. In contrast, application of ICE-T generates a delineation of the tract, thereby iteratively distributing the seed voxels along its length. This has the consequence that average streamline path-lengths will be diminished, thereby causing the distribution of false positives and negatives along the length of the delineated tract to be more uniform. Naturally, the decrease in distal false negatives also means a concurrent increase in distal false positives. Note that the presence of distal false positives actually reflects the removal of PLD, because such errors are most likely to occur where the sampling of the uODFs is sufficient. The tracking algorithm reported in [31], based upon Time-Of-Arrival (TOA) maps, has some similarities to the iterative nature of ICE-T, but was developed as a way to improve the tracking performance of streamlines that met with problematic regions such as those containing crossing fibres. The procedure includes an iterative region-growing step that is similar to the approach presented herein, and although the authors did not report any such findings, it too may show an improvement to the PLD effect. However, the approach cannot be generalized to existing tractography methods. Prior knowledge in the form of waypoints and exclusion ROIs can be used with ICE-T to reduce the false positives, as in conventional tractography. Similarly, prior knowledge can be applied to constrain the segmentation to specific fibre bundles emanating from a seed. Furthermore, as shown in Figures 5 & 8, the tracts generated by ICE-T have more uniform cross-sectional areas along their entire length, and the extent of the cross-section can be controlled by the global threshold parameter. Hence the ICE-T tract volumes are suited for use as binary masks to generate sample volumes (VOIs) for tract-oriented statistics, e.g. [32].

Generalisability

It must be highlighted that probabilistic tractography with ICE-T is not a new tracking algorithm per se, but a generic framework applicable to most probabilistic streamlining methods. As such it is able to benefit from their long-standing methodological developments, and their individual advantages and disadvantages. ICE-T has the further benefit of generalizing the parameter choice. Conventional tractography, besides the definition of a seed region, requires the specification of the number of streamlines and usually of the global threshold applied to the probabilistic results in order to delineate the desired tracts.
Aside from the seed region, the parameters of the ICE-T framework are ICE-T_threshold and ICE-T_streams, used to generate ICE-T_ROI_I, along with the subsequently-applied global threshold. The streamline parameter has a different purpose in the two methods. In conventional tractography, the number of streamlines is chosen heuristically in an attempt to sufficiently sample the entire tract. Liptrot & Dyrby [29] demonstrated how increasing the number of streamlines (typically up to 5000) simply increased the voxelwise connectivity probabilities, and so was unable to address the PLD. However, when using ICE-T, we have shown how far fewer streamlines are required (approximately 20); any more than this shows minor additional benefit and simply adds to the computational burden. This is because at each iteration only the local tract environment needs to be sufficiently sampled. The ICE-T_threshold parameter controls the minimal degree of connectivity confidence that new voxels must attain to be incorporated into the growing seed. We have shown how choosing an ICE-T_threshold of approximately 0.01 permits segmentation of the tract network. Selection of too high an ICE-T_threshold hinders the growth of the seed region through complex regions, e.g. the centrum semiovale. In contrast, too low a value of ICE-T_threshold will lead to growth of the seed region outside of the relevant tract network. The exact choice of ICE-T_threshold will depend upon several factors, including acquisition parameters (e.g. imaging sequence, resolution), but especially the tractography method, as well as the topology of the particular tract network being analysed.

The tracking results obtained with ICE-T show wide agreement with those obtained in [16] using in vivo tracers. However, omissions were also noted, for example the absence of the corticonigral projections from the PFC region for datasets P2 and P3 (Figure 6, red arrows). Although previous work [16] has successfully delineated these tracts for this dataset using both in-vivo tracers and tractography, the latter was achieved via the application of waypoint constraints.
This suggests that local complexities may have prematurely halted the tractography using ICE-T, and that a reduction of the ICE-T_threshold may be needed to permit successful penetration into the distal portion of the tracts. This underlines that external factors such as the tractography method and dataset parameters (e.g. resolution, b-value) influence the selection of ICE-T_threshold. However, we have shown how the parameters are generally transferable to similar ex vivo datasets (P1, P2, P3), and have also successfully applied them to an in vivo clinical dataset. The ICE-T parameters are not expected to be generalizable across tractography methods or acquisition parameters; however, it is expected that they will also exhibit a stable range. In future work we will investigate the effect of various tractography methods upon the ICE-T parameters.

A major difference between tractography with and without ICE-T is that while the latter outputs a PICo map based upon tracking from a given seed region, ICE-T generates an Intra-Tract Confidence map of all connections within the segmented tract. Interpretation of the Intra-Tract Confidence map is therefore different from that of a PICo map. The direct interpretation of the values has not been considered herein. However, since a PICo map is a metric of streamline propagation from a seed region, it is affected by tract integrity, but is not a direct measure of it. In contrast, the Intra-Tract Confidence map from ICE-T is a metric that reflects the sum of connections from every member voxel, most of which will, by construction, lie within the ICE-T_ROI, i.e. the segmented tract. However, this in turn means that it cannot be used directly for network analysis, as it does not reflect a probability of being connected to the seed; instead, it can be used as a binarized version of the tract system emanating from the seed. The latter is often used for the creation of structural connectivity matrices.

Considerations

In tractography, streamlines are propagated in both directions from the seed region. It should be noted, however, that the ICE-T method we have implemented here is based upon the Camino toolbox and does not include directionality constraints applied to each ICE-T_ROI_i region. This implies the possibility that the segmented tract network might reflect bi-directional pathways along the entire delineated tract. If such behaviour is undesirable, then a simple forwards-only directionality constraint could be applied at the end of each ICE-T iteration. When specifying the initial seed region, we expect that any subset of voxels within the region of interest could be employed. Due to the region-growing feature of the seed region when using ICE-T, it is expected that the iterative region-growing will expand the seed to approximate the entire tract network. This same argument would also imply that care must be taken to ensure that over-inclusive regions are not employed as seeds. For example, we found false-positive lateral branching only in the results using the overly-large ROI when ICE-T is not applied (Figure 9(a), (b)). This suggests that the imprecise delineation of an ROI (too large) could be a source of false positives in tractography, whereas ICE-T appears to be more robust to the precision of the ROI. In addition, as was clearly demonstrated in Figure 9(f), a major advantage of the
ICE-T method is the ability to segment a tract network using only a subset, even a single voxel, of the seed region of interest. Uniquely, using ICE-T with such a subset does not result in a penalty of increased PLD, and it is still able to segment out the same network as a much larger seed. This has obvious benefits for future clinical studies, where accurate delineation of anatomical areas of interest to act as seed regions could be obviated and replaced by the selection of a single voxel within the known region. Such an approach is likely to be simpler and more reproducible, as the margin for error will be a function of the region's size, and the selection could occur in those subregions where the confidence of correct localisation is highest.

Conclusions

The impact of PLD on the results of probabilistic streamline tractography is a confound which should be considered. We have shown the non-linear spatial variation of PLD along any given pathway, challenging the application of a global threshold and introducing both false positives (near-seed flare) and false negatives (premature tract termination). We have shown how a novel re-appraisal of the probabilistic streamline tractography pipeline, termed the ICE-T Framework (ICE-T), offers the possibility to segment tract systems without the problems imposed by PLD. With ICE-T, PLD issues are substantially reduced to the point where tract networks can be delineated using a global threshold, leading to a reduction in the PLD-related confounds. Importantly, ICE-T only addresses the PLD issue, and preserves all the characteristics of the individual tractography methods. It is recommended that future work should consider handling PLD in order to minimise the risk of bias in tract statistics and structural network analysis.
2015-09-23T00:31:53.000Z
2014-05-05T00:00:00.000
{ "year": 2014, "sha1": "3ef82ccc83c69e58232ba5c937390364f7d634b3", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0096247&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0226dc7870564a08a49de4cc690e4121cf57357e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
53668042
pes2o/s2orc
v3-fos-license
Lorentz TEM investigation of chiral spin textures and Néel Skyrmions in asymmetric [Pt/(Co/Ni)_M/Ir]_N multi-layer thin films

We examine magnetic domain patterns in symmetric [Co/Ni]_M and asymmetric [Pt/(Co/Ni)_M/Ir]_N multi-layers using Fresnel-mode Lorentz transmission electron microscopy (LTEM). In the symmetric multi-layer, where the Dzyaloshinskii-Moriya interaction is expected to be zero, we observe purely Bloch-type domain walls with no preferred chirality. In the asymmetric multi-layers, where significant interfacial DMI is present, we observe domain patterns with chiral Néel domain walls, which evolve into sub-100 nm isolated Néel Skyrmions with the application of a perpendicular field. The impact of layer thickness and film stack on interfacial magnetic properties is discussed in the context of developing a tunable multi-layer system for future spintronic applications.

Magnetic objects with added topological stability, such as Skyrmions and chiral domain walls (DWs), have garnered a great deal of attention in recent years due to the unprecedented efficiency with which they can be manipulated with electric current for use in future spintronic devices [1][2][3]. The topology of such objects is described by the topological charge, C, as determined from

4πC = ∫ m · (∂_x m × ∂_y m) dx dy.

Such objects are stabilized by the Dzyaloshinskii-Moriya interaction (DMI), which is found in magnetic materials where inversion symmetry is broken [4,5]. This interaction has been observed in bulk magnetic materials lacking inversion symmetry (such as FeGe [6] and MnSi [7]) and more recently in magnetic multi-layers at the interface of a ferromagnet (FM) and a heavy metal (HM) with large spin-orbit coupling [8][9][10]. In the latter case, interfacial DMI is well established to stabilize chiral Néel DWs over the magnetostatically favorable Bloch wall [11]. The discovery of DMI has opened the door to a range of new magnetic materials and multi-layer designs to support the formation of such chiral configurations. Parameters such as FM/HM interfaces [12][13][14], FM layer composition [15], and asymmetric stacking sequences [16] have been explored as means of strengthening DMI in multi-layer systems. The ideal system will offer tunability of emergent magnetic properties, including DMI, while preserving other critical properties such as magnetic anisotropy and saturation magnetization.

In this work we examine the magnetic domain structures of asymmetric multi-layers based on [Pt/(Co/Ni)_M/Ir]_N using Lorentz transmission electron microscopy (LTEM). In these films, interfacial DMI is induced at the interfaces of Pt/Co and Ni/Ir. It has been reported that Pt and Ir induce DMI of opposite sign, leading to an additive effect when placed on opposite surfaces of a magnetic heterostructure [12,17]. Although some reports have found both the Pt/Co and Ir/Co interfaces to produce DMI of the same sign [3,18], it is widely agreed that the magnitude of DMI at the Pt/Co interface is much larger. To estimate the strength of DMI in the films examined here, we have leveraged the asymmetric bubble expansion technique using Kerr microscopy [9,19,20] for M = 2 and N = 1, as shown in the supplemental information [21]. In the films used for the Lorentz TEM investigation presented here, the number of Co/Ni layers, M, offers tunability of interfacial magnetic properties, such as DMI, while the number of total repeats, N, contributes to the stabilization of stripe domain patterns [22,23].
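The topological charge integral above can be estimated numerically on a discrete magnetization grid; this finite-difference version is only a sketch (solid-angle lattice methods are more robust for sharp textures), and for an isolated Skyrmion it should return C ≈ ±1:

```python
import numpy as np

def topological_charge(m):
    """m: unit magnetization field of shape (3, Ny, Nx)."""
    dmdx = np.gradient(m, axis=2)          # ∂x m, component-wise
    dmdy = np.gradient(m, axis=1)          # ∂y m
    # Pontryagin density m · (∂x m × ∂y m), summed over the grid.
    density = np.einsum('iyx,iyx->yx', m, np.cross(dmdx, dmdy, axis=0))
    return density.sum() / (4.0 * np.pi)
```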
These characteristics are reflected in Fresnel-mode LTEM images, where Néel walls are observed, confirming the presence of interfacial DMI in our materials system. We note that the inclusion of Ni in the multi-layer allows us to increase the magnetic layer thickness (via M), which allows variation in DMI but preserves perpendicular magnetic anisotropy due to the Co/Ni interface. This would not be possible in a multi-layer based only on Pt/Co/Ir. Symmetric [Co/Ni]_M based multi-layers were also examined with LTEM for comparison; they are expected to have zero DMI and display only Bloch domain walls.

II. EXPERIMENTAL

Multi-layers were deposited onto 5 nm thick SiN membranes via magnetron sputtering in an Ar environment, with the working pressure fixed at 2.5 mTorr and a base pressure of < 3.0 × 10⁻⁷ Torr. All film stacks were deposited onto seedlayers of Ta. Fresnel-mode LTEM imaging was performed on an aberration-corrected FEI Titan G2 80-300 operated in Lorentz mode. Magnetic induction maps were produced by solving the Transport of Intensity Equation (TIE) using over- and under-focused Fresnel images as in [24]. In some cases, a perpendicular magnetic field was applied in-situ by exciting the objective lens of the microscope. Interfacial DMI affects domain wall characteristics, most notably manifest in the formation of Néel walls over the magnetostatically favorable Bloch wall [11]. In the absence of specimen tilt, Néel walls do not display magnetic contrast, as the deflection of electrons through the Lorentz force lies parallel to the DW. When a sample tilt is applied, however, the perpendicular magnetic induction of the surrounding domains gains an effective in-plane component, which deflects electrons towards or away from the DW, leading to the appearance of magnetic contrast [25,26]. This is not the case for Bloch walls, which form magnetic contrast in the absence of sample tilt, as electrons are deflected perpendicular to the DW.

III. SYMMETRIC CO/NI MULTI-LAYERS

Symmetric [Co/Ni]_M based multi-layers were first examined to serve as a limiting case where there is no impact from Ir and Pt. Perpendicular M-H loops from these films indicate that perpendicular magnetic anisotropy is present. The shearing and pinched shape of the loop, which is characteristic of bubble materials, suggests the formation of a multi-domain state at zero applied field [22,23]. The increased loop shearing at M = 100, compared to M = 10, is due to the increased role of dipole-dipole interactions in the formation of small magnetic domains when films are thicker. We note a negative nucleation field of 4 kOe for M = 100 as compared to 50 Oe for M = 10. Fresnel-mode LTEM images of both symmetric multi-layers (M = 10, 100) display magnetic contrast in the absence of tilt (FIG. 1c & d); the presence of these Bloch walls indicates that no DMI is present in these multi-layers, as expected. The domain structure of both multi-layers depicts a demagnetized labyrinth configuration, as reflected in the M-H loops, with M = 100 displaying uniform domain widths throughout the field of view, whereas those in M = 10 are less periodic. Vertical Bloch lines (VBLs) are also observed in these multi-layers, which are described by 180° rotations in magnetic induction along a domain wall [27]. This appears as a discontinuity in contrast along a DW in Fresnel-mode images, whereby the contrast inverts about the discontinuity. Such VBLs are observed to occur at a high frequency in the M = 10 multi-layers, forming clusters whereby several VBLs exist in close proximity to one another along a DW [28]. VBLs are also observed in M = 100 multi-layers but are harder to discern, as the DWs are spaced closer together than those in M = 10.
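For the TIE step mentioned above, a minimal Fourier-space solver can be sketched under the usual assumptions (paraxial beam, roughly uniform in-focus intensity I0); real reconstructions, such as the approach referenced as [24], add symmetrization and low-frequency regularization that are omitted here:

```python
import numpy as np

def tie_phase(under, over, defocus, wavelength, px):
    """Solve I0 * lap(phi) = -k * dI/dz in Fourier space.

    `under`/`over`: Fresnel images at -/+ `defocus` (m); `px`: pixel
    size (m). Returns the recovered phase; the in-plane magnetic
    induction then follows from the gradient of this phase.
    """
    k = 2.0 * np.pi / wavelength
    didz = (over - under) / (2.0 * defocus)       # longitudinal derivative
    i0 = 0.5 * (over + under)
    ny, nx = didz.shape
    qy = np.fft.fftfreq(ny, d=px)
    qx = np.fft.fftfreq(nx, d=px)
    q2 = (2.0 * np.pi) ** 2 * (qx[None, :] ** 2 + qy[:, None] ** 2)
    q2[0, 0] = np.inf                             # suppress the undefined DC term
    inv_lap = np.real(np.fft.ifft2(np.fft.fft2(didz) / (-q2)))
    return -k * inv_lap / i0.mean()
```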
Application of an ex-situ in-plane magnetic field produces a stripe domain pattern with domains aligned parallel to the field direction, as shown in FIG. 2a,b. An in-situ perpendicular magnetic field leads to the formation of magnetic bubbles, which are found to be either Bloch type (C = 1), having no preferred chirality, or topologically trivial (C = 0), where two VBLs are present along the circumference (see example in the supplementary information) [21].

IV. ASYMMETRIC PT/CO/NI/IR MULTI-LAYERS

The tunability of asymmetric [Pt/(Co/Ni)_M/Ir]_N based multi-layers was examined by varying the number of Co/Ni layers in a repeat unit, M. The total number of repeat units in a multi-layer stack, N, was used to stabilize labyrinth domain patterns. By increasing M, the effective DMI is expected to decrease while the areal magnetization will increase. We note that the interface anisotropy associated with Co/Ni is comparable to that of Co/(Pt, Ir), so we do not expect a significant change in perpendicular magnetic anisotropy (PMA). We note that increasing the thickness of either the Co or the Ni layers themselves would instead weaken the interface-driven PMA.

Fresnel-mode LTEM images of these asymmetric multi-layers displayed no magnetic contrast in the non-tilted state (FIG. 4). Upon application of a 20° tilt, however, magnetic contrast becomes apparent, indicating the presence of Néel walls. The contrast seen in the accompanying in-plane induction map, calculated from the tilted Fresnel images, is due to the magnetization of the domains, which now have a component perpendicular to the electron beam. This process reveals alternating perpendicular domains, but does not directly provide any details on the internal structure of the domain walls. As such, we proceed to characterize the domain patterns in the asymmetric films by direct examination of the Fresnel images. Magnetic contrast characteristic of Néel walls is observed even when M is increased from 1 to 3 repeats, despite an effective reduction in DMI. To a first approximation, due to the changing magnetic layer thickness, we expect DMI of -0.772, -0.386, and -0.257 mJ/m² for M = 1, 2, and 3, respectively, based on the experimental measurements of asymmetric bubble expansion shown in the supplemental information for M = 2 [21]. These values are large enough to overcome the DW anisotropy and produce pure Néel DWs. A labyrinth domain structure is observed in each of these multi-layers, with DW contrast becoming more apparent as M increases, due to the greater magnetic induction originating from a larger ferromagnetic thickness. Despite changes to DMI and perpendicular magnetic anisotropy, the domain widths are not observed to change greatly between M = 2 and 3. When compared with symmetric films, the domain widths observed in these asymmetric films are much smaller, which stems from a reduction in the DW energy due to DMI. Next, the effect of the total number of repeats, N, was examined: magnetic domains for N = 5 were noticeably wider than those for N = 10 and 20, which is also observed with symmetric films.
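The "first approximation" behind the three quoted DMI values is simply an inverse scaling with the total magnetic layer thickness (t ∝ M), anchored to the bubble-expansion measurement at M = 2; a two-line check reproduces the numbers:

```python
d_m2 = -0.386                 # measured effective DMI at M = 2, mJ/m^2 [21]
for m in (1, 2, 3):
    # D_eff ~ D_interface / t_FM and t_FM ∝ M, so D(M) = D(2) * 2 / M.
    print(m, round(d_m2 * 2 / m, 3))   # -> -0.772, -0.386, -0.257
```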
A perpendicular magnetic field was applied in-situ on [Pt/(Co/Ni)_2/Ir]_20 multi-layers by exciting the objective lens of the TEM (FIG. 6). With increasing field, domains with magnetization anti-parallel to the direction of the field shrink, and vice versa. Near the saturation field, these domains form isolated Néel Skyrmions with diameters of ∼80 nm (FIG. 6c) before annihilating. The inversion of contrast upon reversal of focus confirms the magnetic origin of these features (see supplementary info). Skyrmions were not observed to form in [Pt/(Co/Ni)_2/Ir]_N multi-layers with N = 5 or 10; instead, long, worm-like domains formed before annihilating at the saturation field (see supplementary info).

V. SUMMARY

In summary, we have examined asymmetric Pt/Co/Ni/Ir based multi-layers using Lorentz transmission electron microscopy. The properties of these multi-layers were tuned through variations in ferromagnetic layer thickness and overall thickness, which were reflected in the magnetic domain structure. Symmetric Co/Ni multi-layers displayed only Bloch walls, indicating that no DMI was present; with the addition of Pt and Ir layers sandwiching the Co and Ni, DMI is induced, which is reflected in the presence of exclusively Néel walls in Fresnel-mode images. Although the effective DMI was diminished with greater ferromagnetic content in the film stack repeats, Néel walls were still observed. Additionally, asymmetric Pt/Co/Ni/Ir multi-layers with a greater number of total film stack repeats were observed to support the formation of sub-100 nm Skyrmions at room temperature in the presence of a perpendicular magnetic field. Overall, this materials system provides a tunable platform for further exploration of chiral spin textures and the development of spintronic devices.
2018-11-05T20:04:22.000Z
2018-11-05T00:00:00.000
{ "year": 2018, "sha1": "93cb9810a92cce02b2625006a0862f3714399dd4", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://doi.org/10.1103/physrevmaterials.3.064409", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "93cb9810a92cce02b2625006a0862f3714399dd4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
8074685
pes2o/s2orc
v3-fos-license
NSP2 gene variation of the North American genotype of the Thai PRRSV in central Thailand

Porcine reproductive and respiratory syndrome virus (PRRSV) is a major swine pathogen causing economic losses in the swine industry almost worldwide. PRRSV has been divided into 2 genotypes, the European (Type 1) and North American (Type 2) genotype, respectively, and displays a large degree of genetic variability, particularly at the nonstructural protein (nsp) 2 gene. This is the first study determining genetic variation of the nsp2 of Thai PRRSV isolates. The results showed that 9 out of 10 Thai PRRSV isolates were nsp2-truncated viruses that might have evolved from a virus previously introduced in the past, but not from one recently introduced.

Recently, Type 2 PRRSV with a nucleotide deletion in the nsp2 coding region has been identified in the USA, China, Japan, Denmark and Vietnam [4,[12][13][14][15]. Following the outbreaks of swine high fever (SHF) syndrome in China, many genetic variants of the virus have been isolated. A novel nucleotide deletion in nsp2 found in those Chinese isolates was initially linked to the virulence of the virus [14]. The objective of this study was to investigate the deletion patterns of Type 2 PRRSV found in Thailand.

Nine recent Thai isolates of Type 2 PRRSV obtained in 2007-2008 (07NP2, 07NP4, 78/51, 8NP46, 8NP154, 08RB1, 8NP147, 8NP148 and 8NP59) and one previous Thai isolate (01CS1/2) obtained in 2001 (Table 1) (kindly provided by the Chulalongkorn University Veterinary Diagnostic Laboratory, CU-VDL) were included in this study. All samples were obtained from PRRSV-affected farms located in the central region, an area of Thailand with a large pig population. According to the farm histories, Type 2 PRRSV infection was endemic and clinically stable on those selected farms. Samples were collected when the appearance of respiratory symptoms in suckling and/or weaning pigs and reproductive failures was markedly increased compared to the baseline.

In this study, at first, 4 complete nsp2 nucleotide sequences of Thai PRRSV (07NP2, 07NP4, 08RB1 and 8NP154) from the acutely re-emerging PRRSV-affected farms in central Thailand were characterized by multiple alignment with Type 2 PRRSV sequences from other countries reported in GenBank. Based on the nsp2 of VR2332, the prototype of Type 2 PRRSV, nucleotide deletions were found in all four of those Thai-PRRSV nsp2 sequences. Then, the remaining 6 nsp2 sequences of other Thai PRRSVs (78/51, 8NP59, 01CS1/2, 8NP46, 8NP147 and 8NP148) were further genetically characterized in the region covering all the nucleotide deletions found in 07NP2, 07NP4, 08RB1 and 8NP154 (nt 885-2,205, or aa 296-735). This specific area also contains most of the nucleotide deletion positions previously reported [4,13,15]. Nucleotide deletions were also found in 5 of those 6 partial nsp2 sequences. Therefore, 9 out of 10 Thai PRRSVs in this study had a nucleotide deletion in the nsp2-coding region (or at least in the studied region). The size of the partial fragment (nt 885-2,205) of the nsp2-coding region of the Thai PRRSVs in this study was shown to be 3-384 nt smaller than that of VR2332, except for 8NP147, which was devoid of either nucleotide deletion or insertion (Table 1). Interestingly, based on Figures 1 and 2, Thai Type 2 PRRSVs having similar deletion patterns were also located in the same cluster.
Three groups of viruses were similarly identified based on both deletion patterns and phylogenetic analysis (a group of 07NP2, 07NP4 and 78/51; a group of 01CS1/2 and 8NP148; and a group of 8NP154, 08RB1, 8NP46 and 8NP59). It should be noted that 8NP147 was the only virus showing no nucleotide deletion in this study. In addition, it was located on a separate branch from the other Thai Type 2 PRRSVs. These results suggest a different evolutionary history for each PRRSV group in Thailand. Among the studied Thai PRRSVs, sequence identities ranged from 77.0-99.7% and 68.1-99.5% for nucleotide and amino acid sequences, respectively. 07NP2 and 07NP4 showed the highest sequence identity (99.5% aa identity and 99.7% nt identity), since those two viruses had been isolated from the same farm 3 months apart, showing that PRRSV still persisted and caused problems on that affected farm. The lowest sequence identity was found between 8NP147 and 8NP154 (68.1% aa identity and 77.0% nt identity). It should be noted that those isolates were from the same province.

A genetic comparison of nsp2 between Thai Type 2 PRRSVs and previously reported nsp2-truncated Type 2 PRRSVs was conducted. 8NP59, 08RB1, 8NP154 and 8NP46 displayed deletion patterns resembling other virulent isolates such as MN184A, MN184B (USA) and Jnt1 (Japan). However, sequence identity and phylogenetic studies showed no (or only a minor) genetic relationship. Identity of the nsp2 amino acid sequences between the Thai PRRSVs and the virulent US isolates MN184A and MN184B ranged from 60.0-65.0%. Similarly, they showed only 64.6-68.6% identity when compared with the Japanese isolate Jnt1 (Table 1). Amino acid sequence identity between the Thai PRRSVs and SY0608 (a Chinese SHF-related isolate) was low, ranging from 69.6-75.6% (Table 1). Sequence alignment (Figure 3) and the phylogenetic tree (Figure 2) also showed no (or only a minor) genetic relationship among the viruses. These findings confirm the lack of evidence of an SHF-like virus in Thailand, at least at the time of sample collection. Only severe respiratory symptoms with moderate to high mortality after weaning were observed on the studied farms. The results suggest that the nsp2-truncated viruses found in this study and the nsp2-truncated viruses from other countries are unlikely to have derived from a common origin. It is more likely that the nsp2 deletions of the Thai PRRSVs arose through the independent evolution of PRRSVs previously circulating in Thailand.

One of the most striking characteristics of PRRSV is its genetic variation [16][17][18][19]. Nsp2 is one of the PRRSV genomic regions with very high genetic variability [4,13,15,20,21]. Although the deletion in the nsp2-coding region was not related to the virulence of the emerging PRRSV in China, it could be used as a genetic marker of the highly virulent PRRSV found in China [22]. In 2007, the 30-aa-deletion PRRSV was also identified in Vietnam [12], which could be the result of horizontal transmission between the 2 countries. Since Thailand is in the same region as Vietnam, we therefore searched for evidence of the atypical PRRSV found in China in samples from the acute 2007-2008 re-emerging PRRSV outbreaks in central Thailand. The data suggest that the atypical PRRSV that emerged in China in 2006 had not yet been introduced into Thailand, or at least into central Thailand, since neither Type 2 PRRSV with the 30-aa-deletion pattern nor nucleotide sequences related to the Chinese isolates were found in this study.
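For readers who want to reproduce pairwise identities of the kind quoted above, a toy calculation on pre-aligned sequences might look like this; the handling of gap columns is an assumption, since the paper does not state its scoring settings:

```python
def percent_identity(a, b):
    """Percent identity between two pre-aligned, equal-length sequences,
    with '-' marking alignment gaps; gap columns are skipped here."""
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

print(round(percent_identity("ATGCCGT-A", "ATGACGTGA"), 1))  # 87.5
```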
At present, only 1 complete genomic sequence of a Thai Type 2 PRRSV has been reported [23]. Since the first report of PRRSV isolation in 1996, Thailand has implemented a very strict policy requiring imported pigs and semen to be PRRSV-free. Thus, the introduction of new exotic PRRSV strains from other countries has been kept to a minimum. Our data do not support the hypothesis of an introduction of new PRRSV strains with the same nsp2 deletion patterns from other countries; the deletion patterns found in this study could instead stem from the evolution of the existing PRRSVs in Thailand.

List of abbreviations
PRRSV: porcine reproductive and respiratory syndrome virus; nsp: nonstructural protein; SHF: swine high fever; ORF: open reading frame; nt: nucleotide; aa: amino acid; EAV: equine arteritis virus; PCR: polymerase chain reaction

Authors' contributions
SN participated in phylogenetic analysis and helped to draft the manuscript. RT conceived the study, participated in its design and helped to draft the manuscript. All authors read and approved the final manuscript.
2014-10-01T00:00:00.000Z
2010-11-24T00:00:00.000
{ "year": 2010, "sha1": "1dbd4b0fc20310aea51ff4cba435ce1a1ceb46e3", "oa_license": "CCBY", "oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/1743-422X-7-340", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1dbd4b0fc20310aea51ff4cba435ce1a1ceb46e3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
16264351
pes2o/s2orc
v3-fos-license
Modeling truncated pixel values of faint reflections in MicroED images

A procedure is presented to model the truncated low pixel counts in micro-electron diffraction (MicroED) images. The correction could extend to any conventional macromolecular X-ray crystallography or X-ray free-electron laser measurements.

Introduction

The success of diffraction data analysis, and consequently the quality of the final atomic model, hinges on accurate integration of the recorded Bragg reflections. The intensities of these reflections decrease with increasing scattering angle until the point where their peaks become indistinguishable from the surrounding background (Bourenkov & Popov, 2006). Ignoring the effects of solvent scattering and artifacts such as ice rings (Glover et al., 1991), the recorded counts of pixels between the Bragg spots follow the same general pattern; the greater the distance from the intersection point of the direct beam with the detector surface, the smaller their values. Because the background pixels around a reflection are commonly used to estimate the noise contribution to the integrated signal (Leslie, 1999), successful data reduction generally requires that all pixel values are accurately recorded, irrespective of their scattering angle and magnitude, or whether they represent Bragg spots or not.

Many detector systems used to record diffraction data apply corrections to the raw data before a rectified image is presented to the experimenter for processing. The flat-field calibration is one such correction. For CCD- and CMOS-based detectors, this two-step procedure consists of dark-frame correction, where a previously recorded, unexposed image is subtracted, followed by multiplication with a gain image. Dark-frame correction removes features that arise from the small currents that flow through the sensor even when the shutter is closed. The subsequent gain correction compensates for the uneven response of individual pixels by ensuring that the calibrated readout under uniform flat-field illumination is featureless. In some cases, images are uninterpretable unless these corrections are applied.

A number of macromolecular crystal structures have recently been solved by micro-electron diffraction (MicroED) (Shi et al., 2013; Nannenga, Shi, Hattne et al., 2014; Rodriguez et al., 2015; Yonekura et al., 2015). In our laboratory, diffraction datasets have been recorded by continuous rotation (Nannenga, Shi, Leslie & Gonen, 2014) using a TVIPS TemCam-F416 CMOS camera. During data collection the crystal is slowly rotated in the electron beam and the accumulated counts are rapidly read out at regular intervals without interrupting the rotation of the sample. However, the camera's 'rolling shutter' mode (Stumpf et al., 2010) that makes these measurements possible is primarily intended to provide real-time visual feedback during data collection. The camera does apply a flat-field correction, but the storage format required to sustain the high data-transfer rates is restricted to representing pixel values as unsigned 16-bit integers. This causes problems for weak reflections, which are typically observed at high resolution. Around these reflections the raw counts on the detector may be comparable in magnitude to those in the dark frame. Owing to random fluctuations in the raw counts, dark-frame subtraction may then yield very small or even negative values, which are propagated through the subsequent gain correction.
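A toy simulation (all numbers invented) illustrates how the pipeline just described produces the truncation discussed next: dark-subtracted faint pixels scatter around zero, and casting the gain-corrected result to unsigned 16-bit integers clips the negative half into the zero bin:

```python
import numpy as np

rng = np.random.default_rng(0)

raw = rng.poisson(lam=5.0, size=100_000)      # faint high-angle raw counts
dark = rng.poisson(lam=5.0, size=raw.size)    # dark frame of similar level
gain = 1.2                                    # flat-field gain factor

corrected = (raw.astype(float) - dark) * gain
stored = np.clip(corrected, 0, 65535).astype(np.uint16)

print((corrected < 0).mean())   # sizeable fraction of truly negative values
print((stored == 0).mean())     # values at or below zero collapse to 0 ADU
```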
As negative counts cannot be represented in the storage format, they are truncated to zero, and information about the true, negative value is lost. Generally, the effect is not immediately apparent on visual inspection of the diffraction pattern, but becomes clear in histograms of the low pixel values, which feature a prominent peak at zero analog-to-digital units (ADU) (Fig. 1a). It is conceivable that the dark frame could be offset by some constant to reduce the probability that dark subtraction yields a negative number. This is not easily achievable without altering the software used to control the camera. Modifying the camera's storage format to use signed integers is similarly impractical. Disabling the flat-field correction altogether is unattractive, since it would remove the ability to view calibrated diffraction images while they are being retrieved from the camera. The remaining option is to attempt to recover as much information as possible from the dataset. Here we present a procedure to model the values of the truncated pixels with zero counts from the histogram of the values of the remaining pixels.

Methods

For a sufficiently large sample of weakly positive-valued pixels, their histogram allows the distribution of the counts around zero to be modeled. For diffraction patterns, the parameters of the distribution of recorded counts across the image depend on the scattering angle (Fig. 1b). Therefore separate models are derived from pixels within a narrow interval of scattering angles. The finite range of scattering angles leads to heavy-tailed distributions, particularly at low resolution, where a larger spread of scattering angles is necessary to provide an adequate sample size to model the distribution. Invalid pixels, for example pixels in the shadow of the beam stop, are not considered because they do not follow the distribution of pixels that record electrons scattered from the sample.

We use the lognormal distribution to model the behavior of the low-valued pixels. The lognormal distribution is expected where the observed counts are the result of independent multiplicative processes in the detector (Kissick et al., 2010), but in our case its use is primarily motivated by its quality of fit to the experimental data (mean r.m.s.d. 327 ADU). The probability density function f and cumulative distribution function F of the lognormal distribution are given by

f(x) = 1 / [(x − θ) σ √(2π)] exp{−[ln(x − θ) − μ]² / (2σ²)},  for x > θ,
F(x) = Φ([ln(x − θ) − μ] / σ),

where μ and σ are the location and scale parameters, respectively, and Φ denotes the standard normal cumulative distribution function. A third parameter, θ, is used to arbitrarily shift the distribution, which allows the random variable it models to take any real value > θ, rather than just positive values. Assuming the pixels in a given resolution range of a diffraction image are independent and identically distributed, the probability of observing a pixel with true integer count I, such that I > 0, can then be approximated by

P(I) ≈ f(I).

The probability of observing a pixel with any value I ≤ 0 is given by

P(I ≤ 0) = F(0.5).

Let H(I) denote the number of pixels with value I in the image. For any integer count I in the closed interval [0, I_max], H(I) defines the observed histogram (Fig. 1).

Figure 1. Distribution of the low counts in a typical MicroED image of proteinase K collected by continuous rotation using the rolling shutter mode of the camera. (a) The histogram in the second outermost shell between 1.5 and 1.7 Å for an uncorrected image, and (b) the histogram of the corresponding corrected image. The continuous curves in (b) show the fitted lognormal distributions in the two innermost (resolutions lower than 4.7 Å in blue, resolutions between 3.4 and 4.7 Å in orange) and the second outermost (black curve) resolution shells. As the resolution increases, the mode and the variance of the distribution decrease.
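In scipy's parametrization, the shifted lognormal above corresponds to lognorm(s=σ, scale=exp(μ), loc=θ), which gives f, F and the truncated mass F(0.5) directly; the parameter values below are illustrative only:

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma, theta = 4.0, 0.5, -60.0        # illustrative values only

def f(x):  # probability density of the shifted lognormal
    return lognorm.pdf(x, s=sigma, scale=np.exp(mu), loc=theta)

def F(x):  # cumulative distribution of the shifted lognormal
    return lognorm.cdf(x, s=sigma, scale=np.exp(mu), loc=theta)

print(F(0.5))   # P(I <= 0): the mass expected to pile up in the zero bin
```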
We assume that any pixel with I > 0 is measured correctly; a pixel with I = 0 could represent either a true value of zero or a negative value. We seek the parameters μ, σ and τ that maximize the probability of observing H(I). This is equivalent to maximizing the likelihood or, more conveniently, the log-likelihood, which in our model is given by

\[ \log L(\mu, \sigma, \tau \mid H) = H(0)\,\log F(0.5) + \sum_{I=1}^{I_{\max}} H(I)\,\log f(I). \tag{4} \]

This can be done using standard optimization algorithms such as the BFGS implementation in the R environment (R Core Team, 2015). The recovered parameters define the maximum likelihood lognormal distribution corresponding to the observed histogram in the given resolution shell. Negative values are then randomly assigned to the H(0) pixels that were initially zero, such that the histogram for I ≤ 0 in the corrected image conforms to the fitted distribution (Fig. 1b). Pixels with initially positive values remain unchanged (Fig. 2), and the frequency of negative values agrees with the optimized model. Only the spatial arrangement of the negative values is random. Uncorrected images that do not contain any zero-valued pixels will have H(0) = 0; the correction does not alter these images in any way.

If the corrected image will be stored in a format that does not support negative counts (e.g. SMV), an offset has to be applied before the image is output. To preserve correct integration downstream, the integration software has to be made aware of this offset (e.g. ADCOFFSET in MOSFLM). Choosing the offset as the negated value of the smallest count in all resolution shells of all images after correction allows straightforward processing of the sweep.
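To make the fitting and reassignment steps concrete, the following is a minimal Python sketch of equation (4) and the inverse-CDF reassignment. It is an illustration only: the original work used R's BFGS, whereas this version relies on SciPy's three-parameter lognormal, reparameterizes σ and τ for unconstrained optimization, and uses function names of our own invention.

```python
import numpy as np
from scipy import optimize, stats

def fit_shell(pixels):
    """Maximum-likelihood fit of a shifted lognormal to one resolution
    shell, treating the zero bin as 'true value <= 0' (equation 4)."""
    pixels = np.asarray(pixels).ravel()
    n_zero = int((pixels == 0).sum())          # H(0), the truncated bin
    positive = pixels[pixels > 0]              # assumed correctly measured

    def nll(params):                           # negative log-likelihood
        mu, log_sigma, t = params
        sigma, tau = np.exp(log_sigma), 0.5 - np.exp(t)  # sigma > 0, tau < 0.5
        dist = stats.lognorm(s=sigma, loc=tau, scale=np.exp(mu))
        return -(n_zero * dist.logcdf(0.5) + dist.logpdf(positive).sum())

    x0 = (np.log(max(positive.mean(), 1.0)), 0.0, 0.0)
    mu, log_sigma, t = optimize.minimize(nll, x0, method="BFGS").x
    return mu, np.exp(log_sigma), 0.5 - np.exp(t)

def reassign_zeros(pixels, mu, sigma, tau, rng=None):
    """Replace zero-valued pixels with draws from the fitted distribution
    conditioned on I <= 0, sampled through the inverse CDF."""
    rng = rng or np.random.default_rng()
    out = pixels.astype(np.int32).copy()
    dist = stats.lognorm(s=sigma, loc=tau, scale=np.exp(mu))
    idx = np.flatnonzero(out == 0)
    u = rng.uniform(0.0, dist.cdf(0.5), size=idx.size)  # quantiles below F(0.5)
    out.flat[idx] = np.rint(dist.ppf(u))                # integer ADU <= 0
    return out
```

Applied independently to each resolution shell, this mirrors the procedure described above; only the spatial placement of the negative values depends on the random seed.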
The procedure was validated against MicroED images collected from four crystals of proteinase K. Protein solutions from Engyodontium album (Sigma-Aldrich, St Louis, MO, USA) were prepared by combining 2 µl of protein solution (50 mg ml⁻¹) with 2 µl of precipitant solution (1.0-1.3 M ammonium sulfate, 0.1 M Tris pH 8.0). Crystals in space group P4₃2₁2 with unit cell a = b = 67.3, c = 101 Å appeared in hanging drops after equilibrating against the precipitant solution for three days. MicroED images were recorded on a transmission electron microscope (FEI) equipped with a field emission gun and a TVIPS TemCam-F416 CMOS camera using published protocols (Nannenga, Shi, Leslie & Gonen, 2014; Shi et al., 2016). At an acceleration voltage of 200 kV and a camera length of 1.2 m (corresponding to a virtual detector distance of 2.2 m) the detector can record reflections at resolutions up to ∼1.75 Å at the edges and ∼1.25 Å in the corners.

The correction was applied to the images independently in ten concentric annuli of approximately equal area. Corrected datasets were indexed and integrated with MOSFLM (Leslie & Powell, 2007). To ensure comparable integration for the uncorrected and corrected datasets only the missetting angles were optimized during integration. The mosaicity was refined to convergence for each crystal separately and then held constant during integration. All detector parameters were fixed, and the measurement box was set to a 13 × 13 pixel box with a 4 pixel border and an 8 pixel corner cutoff (Leslie, 1999). To allow the integration box to contain zero-valued pixels for the uncorrected data, MOSFLM's NULLPIX parameter was set to −1. The intensities calculated by summation integration were scaled and merged using AIMLESS with default parameters (Evans & Murshudov, 2013). The upper resolution limit imposed during scaling lies just inside the detector corners where the number of observations is barely large enough to permit merging statistics to be calculated. This is beyond commonly employed resolution cutoffs, but allows the effect of the correction on the weakest high-resolution reflections to be evaluated.

The merged data were phased by molecular replacement in MOLREP (Vagin & Teplyakov, 1997), yielding one starting model each for the uncorrected and corrected data, respectively. Both models were refined with phenix.refine (Afonine et al., 2012) using electron scattering factors (Colliex et al., 2006), automatic water modeling and weight optimization of the stereochemistry terms. Only reflections up to 1.75 Å were included in the refinement, because the completeness of the merged dataset drops rapidly beyond the edges of the detector [see Fig. 5(a) in §3]. The simulated annealing (SA) composite omit map computed by CNS (Brunger, 2007) clearly reveals depressions or even holes in the centers of the aromatic side chains (Fig. 3).

Figure 2. A spot near the edge of the detector (d = 1.8 Å) (a) before and (b) after correction. Pixels with initial counts >0 ADU are otherwise unchanged, while zero-valued pixels exhibit counts ≤ 0 ADU in the corrected image.

Figure 3. MicroED structure of proteinase K at 1.75 Å resolution. (a) The overall MicroED structure of proteinase K. (b) A five-residue fragment of the final model refined against the data derived from the corrected images. The SA composite omit map at 1.75 Å resolution is contoured at 1.0σ above the mean and shows a hole in the center of the tyrosine side chain. The figures were generated using PyMol (Schrödinger, 2014).

Results and discussion

The correction only modifies the zero-valued pixels in an image and it can never increase their values. Because the mode of the fitted distribution tends to decrease with increasing resolution (Fig. 1b), the number and magnitude of the negative-valued pixels is expected to increase toward the edges of the detector. This behavior is seen in the integrated reflections (Fig. 4), with the exception of the low-resolution reflections, where the decreased values of the pixels surrounding the peaks lead to stronger integrated intensities after background subtraction. For higher-resolution reflections, where corrected pixels may fall within the foreground, the integrated intensities decrease as well. The magnitude of the difference between the integrated intensities before and after correction increases with resolution, and the corresponding increase in the fraction of negative intensities (Fig. 4) is consistent with this observation.

Compared to the uncorrected images, the corrected dataset merged ∼2.5× more reflections (Table 1). The vast majority of the rejections for the uncorrected images occur during integration owing to excessive background gradient (87%), indicating problems modeling the background, where low pixel counts are more abundant. Other rejections are mostly due to incompletely recorded, partial reflections and ill-fitting peaks. The smaller number of outlier rejections in the corrected dataset is reflected in an increased completeness and multiplicity (Fig. 5a and Table 1).
Except for the reflections only observed in the corners of the detector, the half-set correlation, CC1/2 (Karplus & Diederichs, 2012), is marginally higher for the corrected images than for the uncorrected images (Fig. 5b). Beyond the edge of the detector CC1/2 is dominated by noise. The merging R factors on the other hand are higher for the corrected dataset than for the uncorrected images, and this is most pronounced in the higher-resolution shells. At high resolution, individual pixel counts are more affected by noise, and their variance is governed by fluctuations around low counts. In the uncorrected dataset these fluctuations are diminished when negative pixel counts are truncated, leading to artificially homogeneous integrated intensities and underestimated standard deviations for the very weakest Bragg spots. The correction recovers some of this variance, and notably, ⟨I/σ(I)⟩ in the highest-resolution shell, where reflections are not visually discernible, drops twofold (Fig. 5a and Table 1). With otherwise identical protocols, the overall R_work and R_free values are lower by 1.0 and 0.7%, respectively, for the model refined against the corrected dataset compared to those for the uncorrected dataset.

Figure 4. Effect of the procedure on the integrated intensities before scaling and merging. The average change in the integrated unmerged intensities (blue curve) is smoothly varying as a function of resolution. Except for at the lowest resolutions, the intensities are consistently lower in the corrected data, and the magnitude of the difference increases with resolution. The horizontal dotted line at ⟨I_corrected − I_uncorrected⟩ = 1 is added to aid comparison. The fraction of negative intensities is larger in the corrected data (orange curve) than in the uncorrected data (black curve). The difference increases steadily until just beyond the edge of the detector, which is marked by a vertical dotted line.

Table 1. Merging and refinement statistics for the uncorrected and corrected datasets of proteinase K. Both datasets were derived from the same 184 images collected from four separate nanocrystals of proteinase K. The frames were exposed for ∼4 s while the stage on which the crystals were mounted was continuously rotated at 0.09° s⁻¹. The models derived from the uncorrected and corrected images contain 166 and 133 water molecules, respectively. Both models include two sulfate ions. For CC1/2 > 0.30, AIMLESS estimates the resolution limits to be 2.01 and 1.96 Å for the uncorrected and corrected datasets, respectively. The corresponding limits for ⟨I/σ(I)⟩ > 1.50 are 1.96 and 1.91 Å. Numbers in parentheses refer to the highest-resolution shell for either merging or refinement.

Figure 5. Merging statistics as a function of resolution. (a) At high resolution ⟨I/σ(I)⟩ is higher in the uncorrected dataset (black curve) than in the corrected dataset (orange curve), and the values tend to zero only in the corrected dataset. The horizontal dotted line at ⟨I/σ(I)⟩ = 1 is added to aid comparison. Beyond the edge of the detector (vertical dotted line) the completeness drops sharply for both the uncorrected (black dashed curve) and the corrected (orange dashed curve) datasets. (b) CC1/2 is slightly higher for the corrected images (orange curve) than for the uncorrected images (black curve). Beyond the edge of the detector, indicated by the vertical dotted line, the curves are dominated by noise.
The correlation coefficients between the observed and calculated structure factor amplitudes are generally higher for the model refined against the corrected data than for the model refined against the uncorrected data, and the effect is more pronounced at higher resolution (Fig. 6a). Similarly, the atomic model refined against the corrected data correlates better to its density map calculated from reflections in the interval between 1.75 and 5.00 Å than the model refined against the corresponding uncorrected data (Fig. 6b). However, the atomic coordinates of the two models are very similar, with an r.m.s.d. of 0.080 Å.

Conclusion

The systematic truncation of weak pixel values introduces subtle anomalies in the integrated Bragg intensities, which propagate to the refined model. In the present case, the artifacts are due to the data format's inability to represent negative counts. File formats restricted to unsigned integers are common in crystallography, but it is conceivable that similar problems could arise by other means. However, modeling the counts of the low-valued pixels can help to recover the true signal for the high-resolution reflections. For stronger reflections, the benefit of the correction lies mainly in a realistic appearance of the background surrounding the peak, which provides a more accurate estimate of its reliability. The end effect is that the merged reflections better represent the amplitudes of the diffracting crystal's scattering factors. This in turn improves the quality of the final atomic model. Depending on the particular implementation of the spot-finding routine, the correction can also boost autoindexing and unit-cell determination of faint diffraction datasets, where an artificially flat background otherwise yields many spurious spots.

It must be noted that the pixel values that are lost in truncation can never be truthfully recovered. Future advances could improve the quality of the procedure introduced here, but the correct negative values of the affected pixels are fundamentally irretrievable. The procedure instead models the corrupted counts, which limits the accuracy of the correction to the quality of the model and the process used to determine its parameters. While the reliance on a random number generator for the spatial distribution of negative counts is appropriate, since it models the stochastic fluctuations that initially lead to the negative, truncated pixel values, it implies that the procedure is non-deterministic. Owing to the local homogeneity of the detector, initial attempts at exploiting per-pixel statistics instead for the assignment of the negative counts have not been successful. However, separately applying the correction to smaller regions can reduce the impact of the random number generator. The current implementation limits the structure of these areas to concentric annuli, but this could be extended to arbitrary shapes, which together cover the surface of the detector.

Ideally, a diffraction measurement would be conducted such that the need for the correction described here would never arise. In emerging methods such as MicroED, which often rely on hardware and software originally developed and optimized for different purposes, this is not always immediately possible. Future developments in MicroED will address these difficulties by, for example, determining how to use the camera in a different mode that allows signed integers to be recorded.
The corrected data and the model refined against them are available under PDB id 5i9s and EMDB id EMD-8077. The uncorrected data have been deposited with the Structural Biology Data Grid (Meyer et al., 2016) under doi 10.15785/SBGRID/262. The procedure will be included in an upcoming release of our conversion tools for MicroED diffraction images (Hattne et al., 2015).

Figure 6. Correlation coefficients of the refined model. (a) Particularly at high resolution, CCwork (solid curves) and CCfree (dashed curves) are generally higher for the model refined against the corrected dataset (orange curves) than for the model refined against the uncorrected dataset (black curves). (b) For all 279 residues of proteinase K, the real-space correlation coefficient for the corrected data in the resolution range between 1.75 and 5.00 Å is higher for the model refined against the corrected data than for the model refined against the same resolution range of the uncorrected data.
The power of a healthy lifestyle for cancer prevention: the example of colorectal cancer

Objective: We aimed to directly compare the estimated effects of adherence to a healthy lifestyle with those of risk predisposition according to known genetic variants affecting colorectal cancer (CRC) risk, to support effective risk communication for cancer prevention.

Methods: A healthy lifestyle score (HLS) was derived from 5 lifestyle factors: smoking, alcohol consumption, diet, physical activity, and body adiposity. The association of lifestyle and polygenic risk score (PRS) (based on 140 CRC-associated risk loci) with CRC risk was assessed with multiple logistic regression and compared through the genetic risk equivalent (GRE), a novel approach providing an estimate of the effects of adherence to a healthy lifestyle in terms of percentile differences in PRS.

Results: A higher HLS was associated with lower CRC risk (4,844 cases, 3,964 controls). Those adhering to all 5 healthy lifestyle factors had a 62% (95% CI 54%–68%) lower CRC risk than those adhering to ≤ 2 healthy lifestyle factors. The estimated effect of adherence to all 5 compared with ≤ 2 healthy lifestyle factors was as strong as the effect of having a 79 percentile (GRE 79, 95% CI 61–97) lower PRS. The association between a healthy lifestyle and CRC risk was independent of PRS level but was particularly pronounced among those with a family history of CRC in ≥ 1 first-degree relative (P-interaction = 0.0013).

Conclusions: A healthy lifestyle was strongly inversely associated with CRC risk. The large GRE indicated that CRC risk determined by polygenic risk may be offset to a substantial extent by adherence to a healthy lifestyle.

Introduction

Epidemiological studies have identified multiple lifestyle factors associated with various cancers including colorectal cancer (CRC) 1. However, the prevalence of "risky" lifestyle factors (e.g., smoking, unhealthy diet, and obesity) remains high or has increased in many countries [2][3][4][5][6]. Beyond lifestyle factors, genetic predisposition is also a major determinant of CRC risk. Polygenic risk scores (PRSs) based on a steadily increasing number of single nucleotide polymorphisms identified in genome-wide association studies are increasingly used to quantify genetic predisposition [7][8][9][10]. Although PRSs may be helpful for risk stratification in secondary prevention efforts, a danger exists in that they might be misinterpreted to suggest that CRC risk is an unmodifiable feature, thus discouraging primary prevention efforts. Therefore, it is crucial to demonstrate whether and to what extent lifestyle factors interact with genetic risk, and to what extent increased polygenic risk can be offset by a healthy lifestyle.

Comparisons between the effects of individual lifestyle factors and polygenic risk have recently been conducted with the genetic risk equivalent (GRE), a novel metric to enhance effective risk communication in cancer preventive efforts [11][12][13][14]. Previous work from our group has indicated a strong association between a healthy lifestyle score, an integrative metric of lifestyle behaviors, and lower risk of CRC in a dose-dependent manner 15,16, in agreement with previous findings 17,18.
An estimation of the extent to which increased CRC risk, as determined by polygenic risk, could be "compensated" for by adherence to healthy lifestyle behaviors could help facilitate risk communication and better inform the public regarding the benefits of adherence to a healthy lifestyle. Therefore, this study was aimed at comparing the effects of a healthy lifestyle with the effects of genetic predisposition according to known genetic variants, by using the novel concept of the GRE.

Study design and study population

This analysis was based on data from the DACHS [Darmkrebs: Chancen der Verhütung durch Screening (German)] study, an ongoing population-based case-control study in southwest Germany. Details of the design of the DACHS study were as reported previously 15,16,[19][20][21][22]. Briefly, German-speaking patients (≥ 30 years, no upper age limit) with a first histologically confirmed diagnosis of CRC are eligible to participate. Approximately 50% of all eligible patients in the study area of approximately 2 million people are recruited from 22 hospitals offering first-line treatment to patients with CRC. Control participants are randomly selected from population registries and matched to cases by age (5-year group), gender, and county of residence. Our analyses included 4,844 cases and 3,964 controls enrolled from 2003 to 2017, for whom genetic data and complete lifestyle data were available (Figure 1).

Data collection

Standardized in-person interviews were scheduled during hospital stays for cases and at home for controls. Information on sociodemographic and lifestyle factors, and family and medical history was collected during interviews. Pathology records and discharge letters were obtained from medical charts for all cases. In addition, blood or buccal swab samples were collected from both cases and controls for genotyping.

Details of the lifestyle factors assessed in the DACHS study have been described in recent studies 11,[13][14][15][16]. Briefly, highly detailed information on current and prior smoking behavior, including years of initiation and cessation and amounts of smoking, was obtained from each participant and used to calculate pack-years for current smokers and former smokers (defined as people who had ever smoked and had ceased for at least 2 years). Participants were also asked about the number of alcoholic drinks [beer (0.33 L), wine (0.25 L), or liquor (0.02 L)] that they had consumed on average per week from the ages of 20 to 80 years (ascertained in 10-year intervals). On the basis of the ethanol content of each beverage type (assuming 4, 8.6, and 33 g of pure ethanol in 100 mL of beer, wine, or liquor, respectively 23) and data from all decennial ages, we calculated the average lifetime alcohol consumption (g/d). Dietary information was obtained with a 23-item food frequency questionnaire at baseline. Participants were asked about their average frequency of consumption over the 12 months before the date of diagnosis or interview. We derived a diet quality score from these responses, as described previously 15,16. Points were assigned to 6 main food groups (red and processed meat, fish, whole grains, dairy foods, fruit, and vegetables/salad) and were then summed (Supplementary Table S1). Information on the number of hours per week that participants spent performing various physical activities at the ages of 20, 30, 40, 50, 60, 70, and 80 years was obtained.
Information on non-occupational physical activity (walking, cycling, or participating in sports) at the decennial age preceding the current age was used to derive the average MET-min per week. We assigned MET values of 3.3, 6, and 8 to each hour per week spent walking, cycling, and participating in sports, respectively 25. We did not include occupational activity (hard exhausting work and light work) in the analysis, because most study participants were no longer occupationally active. Participants also reported their weight at each decade from age 20 to 80 years, and their current weight and height. To avoid bias due to cancer-associated weight loss, body mass index (BMI, kg/m²) for this analysis was calculated on the basis of the weight approximately 10 years before the diagnosis or interview; e.g., weight at age 50 years was used for participants 55-64 years of age, weight at age 60 years was used for participants 65-74 years of age, etc.

Derivation of the healthy lifestyle score

We calculated the healthy lifestyle score as previously proposed by Carr et al. 15,16, including the 5 lifestyle factors of smoking, alcohol consumption, diet quality, physical activity, and BMI. Details on the derivation of the healthy lifestyle score have been published elsewhere and are summarized in Supplementary Table S2. Briefly, participants were assigned 1 point for each of the following low-risk lifestyle behaviors: non-smoking (never smoking or former smoking of < 30 pack-years 26), alcohol consumption below the level recommended by WCRF/AICR (≤ 24 g/day for men and ≤ 12 g/day for women) 1, a healthy diet (diet quality score in the highest 40%), being physically active (meeting the World Health Organization Global Recommendations on Physical Activity for Health: ≥ 150 min of moderate-intensity or ≥ 75 min of vigorous-intensity physical activity per week, or ≥ 500 MET-min of moderate- and vigorous-intensity physical activity) 27, and having a healthy weight (BMI ≥ 18.5 to < 25 kg/m²). The number of points for the 5 lifestyle factors was then summed to obtain a healthy lifestyle score, which ranged from 0 (least healthy) to 5 (most healthy). A minimal coding of this scoring rule is sketched below.
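The sketch that follows translates the five criteria above into Python; the cutoffs are taken from the text, while the function and argument names are our own illustrative choices:

```python
def healthy_lifestyle_score(smoking_status, pack_years, alcohol_g_day, male,
                            diet_in_top40, met_min_week, bmi):
    """Return the 0-5 healthy lifestyle score described in the text."""
    nonsmoker = (smoking_status == "never" or
                 (smoking_status == "former" and pack_years < 30))
    points = [
        nonsmoker,                               # non-smoking
        alcohol_g_day <= (24 if male else 12),   # WCRF/AICR alcohol recommendation
        diet_in_top40,                           # diet quality score in highest 40%
        met_min_week >= 500,                     # WHO physical activity guideline
        18.5 <= bmi < 25,                        # healthy BMI range
    ]
    return sum(points)
```

For example, a male never-smoker who drinks 10 g of alcohol per day, is in the top 40% of diet quality, accumulates 600 MET-min of weekly activity and has a BMI of 23 would score the maximum of 5.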
Derivation of the polygenic risk score

DNA for genotyping was obtained from blood samples (99.1%) or from buccal swabs when blood samples were not available (0.9%). Supplementary Table S3 presents the details on genotyping and imputation methods. The PRS in the current analysis integrates information from 140 CRC-associated risk variants identified in a recent genome-wide association study 10 and extracted from our datasets (Supplementary Table S4). The unweighted score was calculated by summation of the number of risk alleles of the respective variants (0, 1, or 2 copies of the risk allele for genotyped loci; imputed dosages for imputed loci). A weighted PRS that summed all risk alleles with weights [log odds ratio (OR) of the respective risk variants] was additionally calculated for comparisons of the associations of unweighted and weighted PRS with CRC risk. Because the results were similar (Supplementary Table S5), the unweighted PRS was used in all further analyses.

Derivation of the genetic risk equivalent

GREs for individual low-risk lifestyle factors and different levels of the healthy lifestyle score were calculated as ratios of the respective coefficients for healthy lifestyles and PRS percentiles from logistic regression models. The concept of the GRE was developed in analogy with the well-established concept of risk and rate advancement periods 28. Details on the calculation of GREs and 95% confidence intervals (CIs) for GREs have been published recently [11][12][13][14]. Briefly, consider an analysis based on a multivariable logistic regression:

\[ \ln(R) = a + b_1 H + b_2 P + \sum_{i=1}^{n} c_i F_i, \]

where ln(R) reflects the log odds of the disease risk, and a, b₁, b₂, and cᵢ (i = 1, …, n) refer to the intercept and model parameters for H (individual healthy lifestyle factors or combined healthy lifestyles that were quantified by a healthy lifestyle score, categorized as 1 for subgroups with more healthy lifestyles and 0 for the reference group), P (PRS percentiles according to the distribution of PRS among controls), and Fᵢ (other covariates). The GRE is calculated as the ratio of b₁ and b₂, the estimated coefficients for healthy lifestyle categories and the PRS from the regression models, and thus the properties of the GRE follow from the properties of b₁ and b₂, which include consistency, asymptotic unbiasedness, and normality. With the delta method 29, the asymptotic variance of the GRE can be derived as:

\[ \operatorname{var}(\mathrm{GRE}) \approx \frac{\operatorname{var}(b_1)}{b_2^2} + \frac{b_1^2\,\operatorname{var}(b_2)}{b_2^4} - \frac{2\,b_1\,\operatorname{cov}(b_1, b_2)}{b_2^3}. \]

Because the GRE is asymptotically normal, its 95% CI can be calculated with the square root of var(GRE):

\[ \mathrm{GRE} \pm 1.96\,\sqrt{\operatorname{var}(\mathrm{GRE})}. \]

As shown in Supplementary Figure S1, the assumption of a linear relationship between PRS percentiles and the log(OR) of CRC risk appears reasonable (P value for linear trend = 0.00066, adjusted R-squared = 0.9822), thus indicating that GREs can be interpreted in a straightforward manner. For example, a GRE of −30 for non-smoking means that the effect of abstaining from smoking would correspond to the effect of having a 30 percentile lower PRS for CRC.
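To illustrate the computation, here is a minimal Python sketch of the GRE and its delta-method CI using statsmodels; the data are simulated and all variable names are our own, not the DACHS variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def genetic_risk_equivalent(fit, h="healthy", p="prs_pct"):
    """GRE = b1/b2 with a delta-method 95% CI from a fitted logistic model."""
    b1, b2 = fit.params[h], fit.params[p]
    cov = fit.cov_params()
    v1, v2, c12 = cov.loc[h, h], cov.loc[p, p], cov.loc[h, p]
    gre = b1 / b2
    var = v1 / b2**2 + b1**2 * v2 / b2**4 - 2 * b1 * c12 / b2**3  # delta method
    half = 1.96 * np.sqrt(var)
    return gre, (gre - half, gre + half)

# Simulated example: a protective lifestyle indicator and a PRS percentile.
rng = np.random.default_rng(1)
n = 8000
df = pd.DataFrame({
    "healthy": rng.integers(0, 2, n),    # 1 = adheres to a healthy lifestyle
    "prs_pct": rng.uniform(0, 100, n),   # PRS percentile among controls
    "age": rng.normal(69, 10, n),
})
logit = -2.0 - 0.8 * df.healthy + 0.012 * df.prs_pct + 0.01 * (df.age - 69)
df["crc"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("crc ~ healthy + prs_pct + age", data=df).fit(disp=0)
print(genetic_risk_equivalent(fit))
```

With these simulated coefficients the true GRE is −0.8/0.012 ≈ −67: the lifestyle effect corresponds to a 67 percentile lower PRS.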
Statistical analysis

The distribution of the characteristics of cases and controls was described, and differences were compared between groups with chi-square or t tests. We also described the frequency of the healthy lifestyle factors, and measured agreement among the lifestyle factors in cases and controls by using Cohen's kappa statistic 30. To assess the associations of the individual lifestyle factors (smoking, alcohol consumption, diet quality, physical activity, and BMI) with CRC risk, we used logistic regression models adjusted for the matching factors age and gender. Age was defined as age at diagnosis for cases and age at interview for controls. In further multivariable models, we additionally adjusted for education (< 9, 9-10, or > 10 years of schooling), family history of CRC (family history of CRC in a first-degree relative, yes/no), history of colonoscopy (yes/no), participation in routine health check-ups (yes/no), regular use (≥ 2 times/week for at least 1 year) of nonsteroidal anti-inflammatory drugs (NSAIDs, yes/no), and the PRS (per 10 percentiles, continuous variable). Furthermore, we included mutual adjustment for the other lifestyle factors. Associations of the healthy lifestyle score with CRC risk were assessed in models adjusted for the same covariates described above except for mutual adjustment of the individual lifestyle factors. The healthy lifestyle score was added as a categorical variable (0-2, 3, 4, or 5 points) by using those with a score ≤ 2 as the reference group, accounting for the reasonable sample size and robust parameter estimation, or as an ordinal variable (per 1-point increase in the score; linear trend). We also evaluated the association of low, moderate, and high PRS levels (categorized according to tertiles of PRS among controls) with CRC risk, and tested for interaction with the healthy lifestyle score on CRC risk by adding a cross-product term along with the main effect terms in multivariable models. Stratified analysis of the associations between the lifestyle score and CRC by PRS level was also conducted. We performed subgroup analyses according to cancer site (colon/rectum) and clinical stage (stage I-IV), and by other potentially effect-modifying factors including age (< 55 or ≥ 55 years), gender (female/male), history of colonoscopy (yes/no), use of NSAIDs (yes/no), and family history of CRC (yes/no). All analyses were conducted in R (version 4.1.3) and SAS (version 9.4) software. All statistical tests were conducted two-sided with an alpha value of 0.05.

Results

Baseline characteristics of the study population by case and control status

Table 1 presents the main characteristics of 4,844 cases and 3,964 controls. The median age was 69 years, and approximately 60% of participants were male in the case and control groups. Compared with controls, cases were less educated, were more likely to be current and former smokers (pack-years ≥ 30), drank more alcohol (only for male cases), were more likely to have a lower diet quality score and lower physical activity levels, and were more often overweight or obese. Healthy lifestyle scores were therefore lower for cases than for controls. More than half the cases and controls adhered to at least 3 healthy lifestyle factors, and 7.3% of cases and 14.1% of controls adhered to all 5 healthy lifestyle factors. In addition, a higher proportion of cases reported a family history of CRC, and a lower proportion of cases than controls had had a colonoscopy examination or participated in routine health check-ups. The distributions of the PRS in cases and controls are shown in Supplementary Figure S2. CRC cases had a significantly higher PRS than controls (mean: 138.5 vs. 135.9, P-value from Kruskal-Wallis test < 0.0001), although the distributions widely overlapped.

As shown in Supplementary Table S6, the most prevalent healthy lifestyle factor was adherence to physical activity recommendations (cases: 84.3%; controls: 87.6%), whereas the adherence was lowest for BMI (cases: 29.7%; controls: 37.9%). With the exception of smoking and BMI, the healthy lifestyle factors tended to show slight positive agreement within participants; the highest agreement was observed between non-smoking and adherence to alcohol recommendations (kappa coefficient = 0.13 and 0.12 in cases and controls, respectively).

Association of individual lifestyle factors with CRC risk

All low-risk lifestyle factors except adherence to physical activity recommendations (OR 0.95, 95% CI 0.83-1.09) were significantly associated with a lower risk of CRC. Multivariable adjusted ORs (95% CI) were 0.86 (0.76-0.97) for non-smoking, 0.85 (0.76-0.95) for adherence to alcohol recommendations, 0.69 (0.63-0.76) for a healthy diet quality score, and 0.67 (0.60-0.74) for a healthy BMI (Table 2). None of the interactions between the individual lifestyle factors and PRS on CRC risk reached statistical significance.

Association of the healthy lifestyle score with CRC risk

In combined analyses, the healthy lifestyle score was inversely associated with CRC risk independently of PRS level (Table 3).
Participants with a healthy lifestyle score of 3, 4, or 5 points had a 22% (95% CI 12% to 31%), 37% (95% CI 28% to 45%), and 62% (95% CI 54% to 68%) lower risk of CRC, respectively, than those with a healthy lifestyle score ≤ 2 points. These associations were similar in each PRS tertile (Supplementary Table S7) and in subgroups stratified by cancer site (Table 4), age, gender, history of colonoscopy, and use of NSAIDs, but varied by family history of CRC (Supplementary Tables S8 and S9). The highest healthy lifestyle score was associated with an 80% lower risk of CRC among participants with a family history of CRC (OR 0.20, 95% CI 0.11-0.33), thus indicating a much stronger risk reduction than that among those without a family history of CRC (OR 0.42, 95% CI 0.35-0.51) (Supplementary Table S9). We observed a stronger risk reduction of stage IV CRC with adherence to all 5 healthy lifestyle factors compared with stage I-III CRC (P value for heterogeneity = 0.0018, Table 4).

Genetic risk equivalents for different levels of the healthy lifestyle score

Each point increase in the healthy lifestyle score was equivalent to a decrease in CRC risk corresponding to a 20 percentile lower ranking in the PRS (GRE −20, 95% CI −25 to −16) (Table 3).

[Table 3 footnotes: P value for interaction between PRS and healthy lifestyle score = 0.88/0.39. 1 Adjusted for age and gender. 2 Additionally adjusted for school education, family history of CRC, history of colonoscopy, participation in routine health check-ups, use of nonsteroidal anti-inflammatory drugs, healthy lifestyle score (categorical variable, for the analysis of PRS), and PRS (continuous variable with per 10 percentile increase, for the analysis of the healthy lifestyle score). 3 PRS was categorized into low, moderate, and high levels according to tertiles of PRS among controls. 4 Interactions were tested by inclusion of a cross-product of the PRS (categorical variable/continuous variable) and the healthy lifestyle score (categorical variable) along with the main effect terms in multivariable models. CI, confidence intervals; CRC, colorectal cancer; OR, odds ratio; PRS, polygenic risk score; Ref., reference.]

GREs were estimated for colon and rectal cancer (Table 4), and in subgroups defined by age, gender, history of colonoscopy, or use of NSAIDs (Supplementary Tables S8 and S9). Again, the most pronounced GREs were estimated for those with a family history of CRC, among whom an increase in healthy lifestyle score by 1 point was equivalent to a 34 percentile (GRE −34, 95% CI −47 to −20) lower ranking in the PRS distribution (Supplementary Table S9).

Discussion

In this large population-based case-control study, a healthy lifestyle score incorporating information from known lifestyle factors was associated with a lower risk of CRC in a dose-dependent manner, regardless of polygenic risk of CRC. Those adhering to all 5 healthy lifestyle factors had a 62% (95% CI 54%-68%) lower risk of CRC than those adhering to ≤ 2 healthy lifestyle factors. The effect of adhering to all 5 healthy lifestyle factors compared with ≤ 2 healthy lifestyle factors was estimated to be as strong as the effect of having a 79 (95% CI 61-97) percentile lower PRS. Intriguingly, the estimated effects of a healthy lifestyle were more evident among participants who reported a family history of CRC. The large GREs for individuals with a healthy lifestyle underscore the benefits of adherence to lifestyle recommendations in CRC prevention.

Several previous analyses [15][16][17][18] have explored the interaction of lifestyle scores and PRS on CRC risk, all of which observed a similar pattern of effects of combined lifestyle factors on CRC at different PRS levels. In our analysis, we used a combination of available international recommendations as well as study-specific cutoffs for the determination of the healthy lifestyle score. Moreover, we used the same definition of the healthy lifestyle score as that in previous work by Carr et al. 15,16, which was based on a smaller data set from the DACHS study available at that time. Our study corroborates and extends the results of these previous analyses, which also had included comprehensive sensitivity analyses, by adding comparative analyses of the effects of PRS and individual and combined healthy lifestyle factors on CRC risk in a larger sample of cases and controls.
Cho et al. 17 have calculated a combined lifestyle risk score based on 5 modifiable factors (obesity, physical activity, smoking, alcohol consumption, and dietary inflammatory index) and have observed that a lifestyle risk score in the highest tertile was associated with an approximately 5.8-fold greater risk of CRC than the score in the lowest tertile. In a study by Choi 18, healthy lifestyle scores were constructed by using 8 lifestyle factors, primarily according to the American Cancer Society guidelines. A score ≥ 4 points was associated with a 29% (95% CI 19% to 37%) lower risk of CRC than a score ≤ 1 point. A recent study based on 2 large international consortia (including DACHS data) from 1992 to 2005 has developed an "E-score" involving 19 lifestyle and environmental risk factors, and has observed a greater CRC risk with higher E-scores, an effect also independent of PRS level 31. Although the definitions of the lifestyle scores varied, and the number of risk variants involved in the PRS construction also differed among studies (the numbers of variants were smaller than in our study and varied between 13 and 95 in previous studies), all these findings underscore the importance of adherence to lifestyle recommendations regardless of polygenic risk of CRC.

An intriguing finding in our analysis was a notable variation in lifestyle-CRC associations according to family history status. Although family history, like the PRS, reflects genetic predisposition to some extent, it may also reflect shared environmental factors. In our study, family history of CRC was associated with less healthy lifestyle factors; this finding may partly reflect the clustering of risky lifestyle behaviors within families. Another aspect requiring careful consideration is that family history may also be associated with rare variants with high penetrance (e.g., mutations of APC tumor suppressor genes and DNA mismatch repair genes), whereas PRSs are built on the basis of common risk variants with low penetrance 32,33. Therefore, family history and PRS may partly represent 2 different and complementary sources of genetic risk. Interestingly, interactions between lifestyle factors and rare genetic variants with respect to CRC risk have been reported in previous studies 34,35; therefore, such interactions might also have contributed to the interactions between family history and lifestyle factors observed in our study.
Further large-scale studies are necessary to validate these findings, and to further decipher the genetic and environmental components of family history and clarify their interactions with healthy lifestyles in colorectal carcinogenesis.

However, no studies to date have directly compared the magnitude of CRC risk associated with a combined healthy lifestyle score to the magnitude of CRC risk increased by known genetic variants. Communicating genetic risk in ways that could maximize understanding and promote public health is essential but challenging for diseases resulting from the complex interplay between genetic and environmental factors, particularly as genetic information is rapidly emerging with advances in genomic technologies. The GRE might serve as a useful supplementary metric to the traditional approaches commonly used to quantify the association of exposure with the risk of a specific outcome, such as odds ratios, whose meaning may be difficult to explain to laypeople and thus may hinder effective risk communication. Communicating the effects of modifiable risk factors of CRC in terms of GREs might help individuals feel less powerless against their genetic predisposition to CRC and empower them to adhere to healthy lifestyle recommendations.

A major strength of our study is its use of a large sample size and detailed information on the participants' lifestyles as well as a comprehensive set of other CRC-associated factors, which enabled thorough confounder adjustment and detailed subgroup analysis. Our study adds important information to the limited evidence on the interaction between individual and combined healthy lifestyle factors and polygenic risk of CRC. Furthermore, this is the first study deriving GREs for different levels of healthy lifestyles, which might help promote effective risk communication in cancer prevention.

Despite these strengths, several limitations of our study also require careful consideration, particularly those resulting from the case-control design of this study. First, we cannot rule out the possibility of information bias, because most of the data, including information on lifestyle factors, were retrospectively gathered. Imperfect recall or imprecise reporting might have attenuated the associations. Second, we cannot rule out the possibility of selection bias: those who participated in our study might potentially have tended to be more health-conscious than those who did not. In particular, overrepresentation of healthier controls included in the analyses might have led to an overestimation of lifestyle-CRC associations. However, adjustment for several covariates associated with health consciousness, such as education, history of colonoscopy examination, and history of routine health check-ups, in the regression models should have limited potential bias from this source. Third, despite comprehensive covariate adjustment, residual confounding by omitted or imperfectly measured confounders cannot be ruled out. Fourth, despite the overall large sample size, the sample size in certain subgroups, such as the younger population and the subgroup with a family history of CRC, was relatively small, thus resulting in wide confidence intervals for risk estimates and GREs in these groups. Finally, the results in our study have not been validated in different populations and were based on a population of almost exclusively European ancestry.
Further studies are warranted to validate our results in larger populations, and in populations with other or more diverse ethnicities.

Conclusions

In conclusion, we observed a substantial decrease in risk with adherence to combined healthy lifestyle factors; this effect was independent of the polygenic risk of CRC but was more apparent among those with a family history of CRC. A comparably strong risk reduction in relative terms at all levels of PRS implied a particularly strong absolute risk reduction associated with a high healthy lifestyle score for individuals with a high PRS 16. The large GRE estimates indicated that a high polygenic risk of CRC can be offset to a substantial extent by adherence to healthy lifestyle recommendations. These findings might help inform targeted CRC prevention efforts and motivate adherence to healthy lifestyle recommendations. Future studies and further validation are warranted to replicate and corroborate our findings and to provide more precise GREs, particularly for the high-risk group with a family history of CRC.
A Rare Cause of Type II Neovascularization: Unilateral Retinal Pigment Epithelium Dysgenesis

Unilateral retinal pigment epithelium dysgenesis (URPED) is a very rare clinical condition first described in 2002. Fundus examination and imaging findings are almost pathognomonic and can facilitate diagnosis of this uncommon disease. In this article, we present a 32-year-old patient who developed type II neovascularization (NV) as a complication of URPED. After 6 months of monthly intravitreal bevacizumab injections, visual acuity increased from 20/32 to 20/20 but optical coherence tomography findings were only partially improved. The aim of this report is to highlight URPED and secondary type II NV, the pathogenesis and prognosis of which are unknown but which cause visual loss especially in the younger population.

Introduction

Unilateral retinal pigment epithelium dysgenesis (URPED) is a very rare, unilateral condition that affects the younger population. It is typically characterized by a single leopard-spot lesion with a seashell-like scalloped appearance located in the posterior pole and extending to the optic nerve. The lesion is in the RPE layer and gets its leopard-spot appearance due to fibrotic and hyperplastic changes in its periphery and areas of thinning in its center. Diagnosis is established with fundoscopic examination together with fluorescein angiography (FA) and fundus autofluorescence (FAF), which provide reverse images. 1,2 Visual prognosis depends on the presence of associated neovascularization (NV) (type I: choroidal NV, type II: subretinal NV, type III: retinal angiomatous proliferation). 2,3,4 In this report, we present the treatment of a patient with type II NV secondary to URPED with intravitreal bevacizumab. Our aim was to highlight URPED and secondary NV, which is extremely rare but causes vision loss in the younger population.

Case Report

A 32-year-old man presented with blurry vision in his right eye. He had no known diseases or history of trauma. Best corrected visual acuity (BCVA) was 20/32 in the right and 20/20 in the left eye. Intraocular pressure was 14 mmHg in the right eye and 12 mmHg in the left eye; anterior segment examination results were normal. Fundus examination revealed a lesion with well-defined, scalloped margins extending from the right peripapillary region to the macula and superior quadrant, including the superior temporal vascular arcade. The part of the lesion superior to the superior temporal arcade exhibited the leopard-spot pattern, while a large subretinal scar formation was observed in the part of the lesion inferior to the superior temporal arcade, and the fovea was raised. Retinal folds were visible in the macula. The vessels superior to the optic nerve appeared thin and lacked continuity (Figure 1). On FA, the lesion was generally hyperfluorescent; the part of the lesion superior to the superior temporal arcade had very distinct hyperfluorescent edges surrounded by dark ovals (Figure 2a). The lesion and its margins appeared hypoautofluorescent on FAF imaging (Figure 2b). The optical coherence tomography (OCT) cross-section passing through the fovea demonstrated type II NV, subretinal fluid, retinal surface irregularity, and thickening of the retina over the NV (Figure 3a). Type II NV secondary to URPED was diagnosed and intravitreal bevacizumab (IVB) (1.25 mg/0.05 mL) therapy was initiated. At 1-month follow-up, the patient's BCVA was still 20/32 and there were no changes in the OCT findings.
After 6 monthly IVB injections, BCVA improved to 20/20 and OCT showed regression of the subretinal fluid but persistent intraretinal fluid (Figure 3b). The patient is continuing IVB therapy.

Discussion

Dysgenesis refers to the abnormal or defective development of an organ. URPED is an extremely rare clinical condition of unknown etiopathogenesis, of which only 20 cases have been reported in the literature to date. It was first described by Cohen et al. 1 in 2002 in 4 patients with unilateral, idiopathic, leopard-spot lesions of the RPE, 2 of whom also had choroidal NV. In 2009, they named the lesion URPED and, with the addition of the previous 4 patients, presented the clinical characteristics of a total of 9 cases. 2 Retinal signs associated with URPED include epiretinal membrane, increased retinal vascular tortuosity, and retinal folds. Shimoyama et al. 3 published a case of choroidal NV secondary to URPED in 2014 and reported that the lesion did not respond to 2 doses of sub-Tenon's triamcinolone acetonide and 1 dose of IVB injection. In 2019, Preziosa et al. 4 reported a case of choroidal NV secondary to URPED in which they attained both functional and anatomical success after 2 doses of IVB. The type II NV lesion in our case responded slowly to IVB therapy, and 6 months of monthly injections resulted in complete recovery of visual acuity but did not fully inactivate the lesion.

Despite its typical clinical appearance, URPED is most commonly confused with combined hamartoma of the retina and RPE. This lesion is also a rare clinical condition and is characterized by retinal thickening, epiretinal membrane, and vascular tortuosity. 5 There is one publication reporting that URPED may be an atypical form of combined hamartoma of the retina and RPE. 6 However, they can be distinguished based on FA and FAF imaging, which is pathognomonic for URPED, and clinical findings. Traumatic retinopathy is a clinical condition that is included in the differential diagnosis of URPED. Acute contusion necrosis, also known as commotio retinae, and resolution of hemorrhagic retinal detachment may lead to a similar appearance. 7

Although the visual prognosis of URPED is not clear, it has been shown to progress slowly toward the fovea over a period of years and cause serious vision loss. 8 Moreover, the NV that develops as a complication impacts visual prognosis in URPED patients. Although there is insufficient information in the literature to reach a definite conclusion, it should be kept in mind based on the present case that NV lesions respond slowly to IVB therapy.

Ethics

Informed Consent: Obtained.

Peer-review: Externally peer reviewed.

Conflict of Interest: No conflict of interest was declared by the author.

Financial Disclosure: The author declared that this study received no financial support.
Whither China? Reform and Economic Integration Among Chinese Regions

This paper investigates the changing nature of economic integration in China. Specifically, we consider business-cycle synchronization (correlation of demand and supply shocks) among Chinese provinces during the period 1955-2007. We find that the symmetry of supply shocks has declined after the liberalization initiated in 1978. In contrast, the correlation of demand shocks has increased during the same period. We then seek to explain these correlations by relating them to factors that proxy for interprovincial trade and vulnerability of regions to idiosyncratic shocks. Interprovincial trade and similarity in factor endowments tend to make shocks more symmetric. Surprisingly, foreign trade and inward FDI have little effect on the symmetry of shocks.

Since 1978, China has been undertaking a gradual and largely steady liberalization. The changes were especially profound in the economic sphere although, lately, they have extended also to the political domain. The three decades of economic liberalization have had far-reaching effects on the Chinese economy and society. Most of the changes have been for the better: China has been able to maintain a high rate of growth, recently becoming the second largest economy in the world. Yet, the benefits of this expansion have not been universally shared. Most notably, the coastal provinces of Eastern and South-Eastern China have charged ahead while the inland provinces lag behind. There is a similar, though less pronounced, disparity between urban centers and their rural hinterlands throughout China. These regional disparities reflect not only differentiated economic regional development but are further reinforced by the continued implementation of the hukou system of household registration which restricts labor and residential mobility. 1

The large regional economic differentials appear on the background of a high degree of economic decentralization. This is highlighted by Xu (2010) who describes China as a 'regionally decentralized authoritarian system'. He points out that while the central government controls key political appointments at all levels, it allows regional governments to run their economic affairs largely unimpeded. This, he argues, is the product of political upheavals and purges during the Great Leap Forward and, especially, in the course of the Cultural Revolution. During these upheavals, the Soviet-inspired centralized model was abandoned and instead the regions were encouraged to compete with each other. The inter-regional competition was aimed primarily at maximizing output but it also fostered experimentation with respect to production arrangements and policies (such as the creation of different commune set-ups). The result was a small (though politically powerful) central government and relatively strong regional governments. The decentralization continued and was even reinforced during the reform period. Arguably, a particularly dramatic step in this direction was the creation of special economic zones in the early years of liberalization. 2 This effectively introduced a two-speed system, allowing selected regions to charge ahead in economic liberalization while the rest of the Chinese economy proceeded more cautiously. This appears to have laid the foundations of the subsequent economic gaps between the coastal areas and the rest of the country. 3
In this paper, we document the depth of economic integration among Chinese provinces and analyze the factors that foster such integration. Our analysis proceeds in two steps. First, we use a structural VAR model to identify province-specific shocks between 1955 and 2007. 4 Our methodology allows us to distinguish between shocks that have a temporary and permanent effect on output, typically referred to as demand and supply shocks, respectively, in the relevant literature. We compute the correlations between these shocks for all possible pairs of provinces for four sub-periods: two before and two after the 1978 liberalization. These correlations capture the intensity of integration, and the changes therein, among China's provinces, over a period during which the country gradually abandoned central planning, state ownership as well as Maoism and embraced economic liberalization. Second, we analyze the determinants of these correlations using a stylized version of the gravity model (broadly in line with Artis and Okubo, 2008, although they use a different methodology for estimating the business-cycle correlations). In particular, we seek to explain the correlations of shocks by relating them to factors that proxy for the vulnerability of regions to idiosyncratic developments as well as factors that can facilitate inter-regional transmission of shocks. The latter include the endowments of physical and human capital, transport infrastructure, structure of the economic activity, openness to foreign trade, foreign direct investment, geography, and economic policy. We also include variables used in gravity models of trade (distance between the regions and their economic size) which we interpret as proxies for inter-provincial trade. This analysis is carried out for the same four sub-periods so as to capture the determinants of economic integration in the various periods, and the changes therein.

Our main findings are the following. First, the demand and supply shocks have evolved differently in the course of the Chinese reforms: demand shocks appear to become more synchronized over time while supply shocks grow more dissimilar. Second, we find that factors that proxy for interprovincial trade and similarity in factor endowments tend to make shocks more symmetric.

2 On the history of SEZs and the role they have played in Chinese economic development, see Chen et al. (2011), and the references therein.

3 An especially poignant example of the fruits of this policy is Shenzhen, a city in Guangdong, whose population exploded from around 300,000 to its current 14 million since it became the first special economic zone more than 30 years ago.

4 The use of structural VARs to assess the nature of economic integration between countries or regions was pioneered by Bayoumi and Eichengreen (1993), whose work was in turn motivated by the theory of Optimum Currency Areas (henceforth OCA; Mundell, 1961). Bayoumi and Eichengreen applied this methodology to assess the merits of adopting the common currency in the European Union. They sought to identify which European countries tend to encounter shocks that are predominantly symmetric or asymmetric in nature (the OCA theory suggests that monetary integration is less costly if it involves countries that are subject to symmetric shocks). Since their seminal contribution, this method has become accepted as the workhorse for assessing the depth of integration in other regions as well; see Fidrmuc and Korhonen (2006).
Rather surprisingly, foreign trade and inward FDI have had little effect on the symmetry of shocks.

The remainder of the paper is structured as follows. The next section briefly discusses what we know about economic integration and decentralization in China. Section 3 describes the data and empirical methodology. Section 4 reports the main empirical findings. Section 5 states the conclusions.

Economic Decentralization in China

During the period from the communist takeover in 1949 until 1978, the Chinese economy was tightly regulated: output quotas, resource allocations and prices were set centrally according to a plan formulated by the central government. This reflected the initial desire of Mao Zedong's government to follow the Soviet model of organizing the economy. However, as argued by Xu (2010), China started to deviate from the Soviet model during the economic and political upheavals of the Great Leap Forward (1958-61) and the Cultural Revolution (1966-76). Rather than plan and regulate economic activity from the center, the central government granted wide-ranging economic autonomy to the provincial governments. This was to encourage the regions to compete with each other in order to deliver or exceed their quota of output. As a result, China became a collection of regional economies rather than a single centrally-planned Soviet-type economy, with the central government in Beijing retaining control over political appointments and decisions while devolving much of economic policy making to the provinces.

The decentralization accelerated further after Mao's death in 1976.[5] The objective was to reinvigorate the stagnant economy by improving incentives and encouraging local initiative in production (Tang, 1998). The fiscal and economic decentralization has been widely acknowledged as one of the key drivers of the fast growth of the Chinese economy in the last three decades. However, the decentralization has also allowed some local governments to implement protectionist policies, ostensibly with the objective of developing their local economies (Bai, 1981).

Another important change that took place after Mao's death was the liberalization of the economy. The liberalization initiated by Deng in 1978 was gradual not only with respect to time but also in space. In particular, the liberalization favored the development of the coastal regions. Most notably, the central government initially directed all foreign investment to a handful of special economic zones (SEZs), all of which were located in the coastal regions (the best known of which is Shenzhen, close to Hong Kong, the first SEZ to be established in China). In effect, the SEZs were allowed to be increasingly driven by market forces while central planning continued in the rest of the country. Following the success of the first zones, liberal policies were gradually extended beyond the SEZs, first throughout the coastal provinces and then later also throughout China. This helped stimulate the rapid development of the coastal regions and increased their competitiveness compared to the interior (Poncet, 2005). At the same time, the inland provinces continued to export raw materials to the coastal areas at fixed (low) prices, which translated into a net transfer of resources from the interior regions to the manufacturing provinces on the coast.
The less developed regions responded by pursuing a policy of industrialization through import substitution, as decentralization, combined with the fact that most tax revenue accrued from industrial production, made them keen to develop their industrial base (Lee, 1998, cited in Poncet, 2005).

An important element of the Maoist regime is the household registration (hukou) system, which severely restricts the ability of Chinese citizens to move and even travel within China. Under this system, each person was tied to a particular area and could move to a different area only with the permission of the authorities of both the origin and destination regions. Despite progressively accelerating economic liberalization, the hukou system has remained in place even after 1978. Unlike during the Maoist period, rural workers now can move to and take up jobs in the urban areas. However, changing their registration to the destination region is difficult. This means that they can only benefit from many public services in their region of origin: health care eligibility, children's education and pension claims, most notably, are not portable. Despite this, labor mobility has been steadily increasing, especially from the inland rural to coastal urban regions (Tang, 1998). In all, China is an economy with a single currency in which neither capital nor labor is perfectly mobile. Its provinces are subject to centralized political rule but are growing more and more decentralized on the economic front.

[5] The central government's share of expenditures declined from 51 percent in 1978 to 28 percent in 1993 (Ma and Norregaard, 1998, as cited in Poncet and Barthélemy, 2008, p. 899).

Asymmetric Shocks in China

How well integrated is the Chinese economy? A common approach for assessing the intensity of integration is based on examining the similarity of business cycles. Compared with other approaches to assessing economic integration, the business-cycle approach has several advantages. It not only provides a comprehensive measure of the various factors that contribute to economic integration but it can also reveal whether there are any regional groups of provincial economies that are highly integrated (Tang, 1998).

A number of approaches have been utilized to assess the degree of asymmetry of shocks across economies - whether these are countries or regions within countries. One method is based on cross-country correlation of growth rates, inflation rates, exchange rates, interest rates and stock prices. The weakness of this method is that it does not allow one to distinguish between the shocks themselves and the reactions to them. Poncet and Barthélemy (2008), for example, rely on correlations of this kind.

Another popular method is to identify shocks using the structural vector auto-regressive (SVAR) model formulated by Blanchard and Quah (1989). An SVAR model allows one to identify shocks and the economic responses to them. This method has become a popular tool for identifying asymmetric shocks since it was applied by Bayoumi and Eichengreen (1993) to assess the similarities of economic cycles in Europe in the run-up to the formation of the European Economic and Monetary Union (Babetskii, 2005). The SVAR methodology allows us to distinguish between shocks that affect both output and the price level permanently (usually denoted as supply shocks) and those affecting output only temporarily while having a permanent price-level effect (demand shocks). The literature studying the business-cycle synchronization of the Chinese economy using the SVAR method remains very limited, however.
Tang (1998) adopts an SVAR model to gauge the degree of economic integration within China using data on industrial output and the retail price index. He argues that a high degree of integration prevails in Eastern China only. This finding is also replicated by Poncet (2005). In summary, the evidence so far, as limited as it is, suggests that the Chinese provincial business cycles have become more synchronized over time but this process has not been uniform. In particular, a gap may be emerging between the coastal and interior regions.

Determinants of Business-cycle Co-movements

There is no consensus as to which determinants of business-cycle co-movement are important. There are instead many potential candidate explanations of business-cycle synchronization or the lack thereof. One leading candidate is trade. Frankel and Rose (1998) present empirical evidence that higher bilateral trade between two countries leads to greater correlation of business cycles between them. An opposite view is put forward by Krugman (1993), who argues that international trade increases specialization, making shocks more asymmetric. Frankel and Rose (1998) argue that inter-industry and intra-industry trade play different roles in this respect. The former reflects specialization and therefore may cause asymmetries. The latter implies that the country simultaneously exports and imports products of the same category. The total effect of trade intensity on business-cycle correlation is therefore theoretically ambiguous and the question can only be answered empirically. Fidrmuc (2004) adopts the specification of Frankel and Rose (1998) and applies it to a cross section of OECD countries over the last ten years with quarterly data, controlling for intra-industry trade in his analysis. His findings confirm the Frankel and Rose view. Baxter and Kouparitsas (2005), similarly, argue that trade is the only factor with a robust effect on business-cycle synchronization. In contrast, de Haan et al. (2008b) argue that the role of trade is less important than suggested by this literature. Empirical evidence of the positive relationship between similarity in the structure of output and business-cycle synchronization has been stressed in a series of papers by Imbs (1998, 2003, 2004) and is found also in analyses using regional data by Kalemli-Ozcan et al.

The provinces in our data are grouped into three regions: East, Center and West; besides reflecting geography, this categorization also broadly captures the differences in the degree of economic development. During the early transition period, the coastal areas in the East were the main beneficiaries of the open door policy, developing much more quickly than the interior areas in the Center and West. Furthermore, we divide the 53 years covered by the data into four sub-periods: 1955-1965, 1966-1977, 1978-1991 and 1992-2007.[9] This breakdown reflects the main phases of China's economic and political development.

[9] The sample that we analyze is shorter than the period covered by the data (56 years) since we use lags.

Identification of Shocks

In this subsection, we present the methodology used to identify province-specific shocks. We use an SVAR model with two variables: the log of output (annual real GDP) and the log of prices (annual GDP deflator). It is assumed that the fluctuations in these two variables result from two types of disturbances: supply and demand shocks. This terminology is motivated by the standard AS-AD analytical framework. Supply shocks, which are associated with shifts of the aggregate supply curve, lead to changes in both real output and prices in the short and long term.
Demand shocks also have short-term effects on both output and prices. However, since the long-term aggregate supply curve is vertical, demand shocks do not have any long-term effect on the level of output and become fully absorbed by price-level adjustments. Following Blanchard and Quah (1989), Bayoumi and Eichengreen (1993) and Babetskii (2005), we estimate the following SVAR model involving real output growth and price-level growth:

$\Delta y_t = \sum_{k=1}^{K} b_{11k}\,\Delta y_{t-k} + \sum_{k=1}^{K} b_{12k}\,\Delta p_{t-k} + e^y_t$
$\Delta p_t = \sum_{k=1}^{K} b_{21k}\,\Delta y_{t-k} + \sum_{k=1}^{K} b_{22k}\,\Delta p_{t-k} + e^p_t$

Output and the price level are in log-differences: $\Delta y_t = \log GDP_t - \log GDP_{t-1}$ and $\Delta p_t = \log P_t - \log P_{t-1}$; the $b_{ijk}$ are coefficients and $K$ is the lag length. $e^y_t$ and $e^p_t$ are disturbances which are assumed to be serially uncorrelated and take the following form:

$e^y_t = a_{11}\,\varepsilon^D_t + a_{12}\,\varepsilon^S_t$
$e^p_t = a_{21}\,\varepsilon^D_t + a_{22}\,\varepsilon^S_t$

where $\varepsilon^D_t$ and $\varepsilon^S_t$ are demand and supply disturbances, respectively. These equations state that the unexplainable components of output growth and inflation are linear combinations of supply and demand shocks. The vector of structural disturbances, $\varepsilon_t$, can be obtained under the following restrictions: (i) the demand and supply shocks are mutually orthogonal and normalized to unit variance; and (ii) demand shocks have no long-run effect on the level of output, i.e., the cumulative response of $\Delta y_t$ to $\varepsilon^D_t$ sums to zero.

Correlations of Supply and Demand Shocks

Having estimated the demand and supply shocks affecting the individual provinces, we calculate $\rho^S_{ij,\tau}$ and $\rho^D_{ij,\tau}$, the correlations of supply and demand shocks between any two provinces i and j during period $\tau$. If the correlation of shocks is positive, the shocks are considered to be symmetric and if it is negative, they are considered asymmetric. Table 1 and Table 2 report these correlations.

Fidrmuc (2012), in contrast, formulates a model of fiscal integration that emphasizes the qualitative difference between permanent and temporary output shocks (recall that supply shocks affect output permanently while demand shocks only have a temporary effect). He argues that symmetry of permanent shocks is more important for the stability of integration than symmetry of temporary shocks: both kinds of shocks give rise to divergent policy preferences, but the impact of temporary shocks is (by definition) short lived while permanent shocks can fundamentally undermine the stability of integration. In this context, the fact that China is experiencing a falling correlation of supply (permanent) shocks may come across as worrying, despite the movement in the opposite direction by the correlation of demand (temporary) shocks.

Methodology

So far, we have explored the changing nature of business-cycle synchronization during the last five decades of China's history. In this section, we investigate the determinants of business-cycle co-movements and, thereby, shed some light on the factors behind the different development of supply and demand shocks discussed in the preceding section. The dependent variables are the correlations of supply and demand shocks, $\rho^S_{ij,\tau}$ and $\rho^D_{ij,\tau}$, estimated for provinces i and j during period $\tau$, with the unit of observation thus being pairs of provinces. The correlation coefficients, by construction, are bounded between $-1$ and $+1$. Besides using the simple correlations, we therefore also apply the Fisher-z transformation, which results in figures that are not bounded from above or below: $z = \tfrac{1}{2}\ln\left(\tfrac{1+\rho}{1-\rho}\right)$.
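The identification and the correlation step above can be illustrated numerically. The Python sketch below is a minimal, hypothetical implementation (not the paper's actual estimation code): it assumes annual level series of real GDP and the GDP deflator for each province, and the lag length and synthetic inputs are placeholders.

```python
# Sketch: Blanchard-Quah identification for one province, then a pairwise
# shock correlation with the Fisher-z transformation. Inputs are hypothetical.
import numpy as np
from statsmodels.tsa.api import VAR

def structural_shocks(gdp, deflator, lags=2):
    """Return (supply, demand) structural shock series for one province."""
    data = np.column_stack([
        np.diff(np.log(gdp)),       # output growth, Delta y_t
        np.diff(np.log(deflator)),  # inflation, Delta p_t
    ])
    res = VAR(data).fit(lags)
    e = res.resid                                   # reduced-form residuals e_t
    sigma = np.cov(e, rowvar=False, ddof=0)
    # Long-run cumulative impact of reduced-form errors: Psi(1) = (I - B_1 - ... - B_K)^(-1)
    psi1 = np.linalg.inv(np.eye(2) - res.coefs.sum(axis=0))
    # Long-run structural impact Theta must satisfy Theta Theta' = Psi(1) Sigma Psi(1)'
    # with Theta[0, 1] = 0 (demand shock has no permanent output effect);
    # the lower-triangular Cholesky factor delivers exactly that.
    theta = np.linalg.cholesky(psi1 @ sigma @ psi1.T)
    S = np.linalg.solve(psi1, theta)                # contemporaneous impact: e_t = S eps_t
    eps = np.linalg.solve(S, e.T).T                 # unit-variance structural shocks
    return eps[:, 0], eps[:, 1]                     # column 0: supply, column 1: demand

def fisher_z(rho):
    """Fisher-z transformation of a correlation coefficient."""
    return 0.5 * np.log((1 + rho) / (1 - rho))

# Synthetic example for two provinces (random walks standing in for real data):
rng = np.random.default_rng(0)
gdp_i = np.exp(np.cumsum(rng.normal(0.05, 0.02, 53)))
p_i = np.exp(np.cumsum(rng.normal(0.02, 0.01, 53)))
gdp_j = np.exp(np.cumsum(rng.normal(0.05, 0.02, 53)))
p_j = np.exp(np.cumsum(rng.normal(0.02, 0.01, 53)))
s_i, d_i = structural_shocks(gdp_i, p_i)
s_j, d_j = structural_shocks(gdp_j, p_j)
rho_S = np.corrcoef(s_i, s_j)[0, 1]                 # supply-shock correlation
print(rho_S, fisher_z(rho_S))
```

The Cholesky-based construction guarantees that the implied covariance of the reduced-form errors is reproduced exactly while imposing the single long-run zero restriction, which is all the bivariate system needs for exact identification.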
We include several standard and commonly-used gravity variables, which we interpret as proxies for the potential for interprovincial trade:

- a common-border dummy, equal to 1 for adjacent provinces;
- a same-region dummy, equal to 1 when both provinces belong to the same region;[10]
- coast and coast-interior dummies, equal to 1 when both provinces are located in the coastal region and when one province is on the coast while the other lies in the interior, respectively;[11]
- bilateral distance, calculated as the shortest distance for freight transportation by railway in kilometers; and
- economic size, measured as the sum of the two provincial GDPs.

Regions specializing in producing similar products are likely to be exposed to similar shocks. There is, however, no standard measure of similarity in the production structure; we therefore construct dissimilarity indexes comparing the provinces' output structures and their patterns of investment in physical and human capital, and we additionally account for openness to foreign trade, inward FDI, and economic policy (with annual budget deficits expressed as percentages of GDP).

[10] As discussed above, the sample is divided into three regions: East, Center and West.
[11] These two dummies should reveal whether business cycles are more closely synchronized among coastal provinces (captured by the coast dummy), between coast and interior provinces (coast-interior dummy), or among interior provinces (omitted category).

Thus, we estimate the following regressions for the correlation of supply or demand shocks between provinces i and j in sub-period $\tau$:

$\rho^{k,f}_{ij,\tau} = \alpha + X_{ij,\tau}\,\beta + u_{ij,\tau}$

The dependent variable is either the standard correlation of supply and demand shocks ($k = S, D$) or its Fisher-z transformation (superscript $f = c, z$, denoting the two alternative definitions of business-cycle synchronization). $X$ is the vector of all explanatory variables discussed above, with the corresponding coefficient vector $\beta$. We estimate four cross-sectional regressions, one for each of the sub-periods identified in the previous section. We start by including all variables in a broad multivariate regression. Alternatively, we consider separate relationships between the correlation of shocks and the various potential determinants, one explanatory variable at a time. We report robust standard errors using the White (1980) correction for heteroscedasticity of the residuals, $u_{ij,\tau}$.
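A minimal sketch of this estimation in Python follows; the data frame and column names are hypothetical stand-ins for the pair-level variables just described, filled with synthetic data so the snippet runs on its own.

```python
# Sketch: cross-sectional regression of pairwise shock correlations on gravity-style
# regressors, with White (1980) heteroscedasticity-robust standard errors (HC0).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300  # hypothetical number of province pairs in one sub-period
pairs = pd.DataFrame({
    "z_supply": rng.normal(size=n),               # Fisher-z supply-shock correlation
    "common_border": rng.integers(0, 2, n),
    "same_region": rng.integers(0, 2, n),
    "coast": rng.integers(0, 2, n),
    "coast_interior": rng.integers(0, 2, n),
    "distance_km": rng.uniform(100, 3000, n),     # railway freight distance
    "log_gdp_sum": rng.normal(8.0, 1.0, n),       # economic size of the pair
    "invest_dissimilarity": rng.uniform(0, 1, n), # dissimilarity of physical investment
})

formula = ("z_supply ~ common_border + same_region + coast + coast_interior"
           " + distance_km + log_gdp_sum + invest_dissimilarity")
fit = smf.ols(formula, data=pairs).fit(cov_type="HC0")  # HC0 = White correction
print(fit.summary())
```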
Empirical Results

Tables 4 to 7 present the general regression results (with all variables) for each sub-period. For comparison, regression results are reported both for the correlations of shocks and for their Fisher-z transformations. Table 4 reports on the correlations of supply shocks during the Maoist period while Table 5 covers the reform period. The main finding concerning supply shocks during the Maoist period (Table 4) is that few of the explanatory variables are significant. The picture becomes clearer during the reform period (Table 5). Adjacent regions and those located on the coast display higher correlations of supply shocks (however, the common-border dummy is only significant during the 1992-07 period). The dissimilarity in investment in physical capital continues to lower the correlation of supply shocks during both sub-periods, in line with expectations: regions with different patterns of investment have their business cycles less closely synchronized.

The regression results for the correlations of demand shocks are presented in Tables 6-7. Again, essentially none of the included variables explain the correlations of shocks during the early Maoist period (and again, the regressions for this period are not jointly significant). During the later Maoist period, 1966-77, we see that the correlation of shocks falls with distance and also with dissimilarity in investment in physical capital. A much clearer picture again emerges during the reform period, especially the early sub-period, 1978-91. The degree of correlation of demand shocks again falls with distance (more so during the early reform period). Regions located on the coast tend to encounter similar shocks during the early reform period. However, this is counterbalanced by the negative coefficient estimated for the same-region dummy during the same period. This surprising result may reflect a dichotomy between the regional centers and their surrounding rural areas. Economic size appears to lower the symmetry of shocks during the early reform period: two relatively large provinces would be expected to display a lower degree of symmetry of demand shocks than two small provinces. Dissimilarity of investment in physical capital, counterintuitively, reverses sign for the late reform period, 1992-07, so that regions that have dissimilar investments appear to encounter shocks that are more similar.

Several variables are notable for being consistently insignificant: the dissimilarity indexes with respect to output structure, exposure to trade, and incoming FDI appear to have no impact on the symmetry of supply or demand shocks. This is somewhat surprising, especially for trade and FDI, given the extraordinary importance of external economic relations for post-1978 economic development (Huang, 2011, for example, finds that exposure to FDI is an important determinant of economic growth of Chinese regions). A possible explanation of this absence is that the shocks attributable to foreign trade and FDI affect much of China in much the same way (or else that their effects quickly spill over across regions).

Some of the variables included in the preceding regressions are likely to be collinear with each other, and this could explain their low significance. Therefore, in Tables 8-11, we report the results of univariate regressions between the correlations of supply and demand shocks, respectively, and each variable considered in our study. Again, few explanatory variables appear significant during the Maoist period: this holds for supply shocks during both sub-periods and for demand shocks during the early Maoist period. Nevertheless, common border, distance and output size shape the correlation of demand shocks during the late Maoist period: demand shocks become less symmetric with distance while their similarity is higher for adjacent and for larger provinces. Provinces sharing a border, located in the same region, and those on the coast also appear more similar during the reform period (though the coefficients are not always significant). The effect of distance is similarly negative but not always significantly so. Economic size is not a significant determinant of supply shocks whereas it appears negatively related to the correlation of demand shocks during the reform period.

Conclusion

Chinese society has experienced numerous dramatic changes during the last five decades: the communist take-over, the upheavals of the Great Leap Forward and the Cultural Revolution, and finally economic liberalization, opening up to the outside world, and the rapid growth that this has generated. In this paper, we document the impact of these changes on the Chinese regional economies and on the degree of economic integration among them. The picture that our results paint is mixed: as the reforms progress, Chinese provinces encounter increasingly symmetric demand shocks but also increasingly asymmetric supply shocks. This is potentially worrying: supply shocks lead to permanent economic differentials, unlike demand shocks, and therefore their falling similarity may undermine the stability of Chinese economic integration in the future.
This may translate into growing economic and political tensions in the future, especially if appropriate adjustment channels are not introduced (for example, greater liberalization of migration between provinces). The experience of countries such as Belgium, Spain or Czechoslovakia demonstrates that growing economic divergence can pose a serious danger to the political unity of countries, especially ethnically diverse ones.

We relate the interprovincial correlations of supply and demand shocks to a broad range of economic variables, but we again obtain at best mixed results. Little explains the synchronization of business cycles during the Maoist period, especially during its early part, 1955-65. The limited explanatory power of economic factors should perhaps not be surprising, given that the Maoist period was dominated by the politically-induced shocks of the Great Leap Forward and the Cultural Revolution. During the reform period, factors typically associated with bilateral (interprovincial) trade matter, although their importance is not overwhelming. In particular, we find that the symmetry of both demand and supply shocks tends to fall with the distance between provinces and rises when provinces share a border or are located in the same region. We find also that provinces that experience similar patterns of investment in physical capital tend to encounter similar supply shocks. In contrast, similar patterns of investment in physical capital tend to make demand shocks less similar, possibly because investment behavior is itself driven by demand shocks. Hence, interprovincial trade increases the symmetry of both demand and supply shocks while investment in physical capital has opposite effects on supply and demand shocks. Finally, and rather surprisingly, we find little evidence that inward FDI and foreign trade affect the synchronization of demand or supply shocks, even though these are among the main factors highlighted as drivers of the recent Chinese growth.

Clearly, our analysis fails to account for a number of factors that can also contribute to the on-going divergence of permanent shocks in China. Chinese provinces may specialize in a relatively narrow range of products, but our data only distinguish very coarse categories of output structure. Migration is an important channel mitigating asymmetric shocks, but we do not have any (reliable) data on this. Moreover, migration in China is still highly constrained by the continued enforcement of the hukou system of household registration, which limits the mobility of workers and their entitlement to public goods. The role of the special economic zones may also deserve closer attention, as the SEZs have effectively enjoyed a substantial head start over the rest of China. This, however, might require more disaggregated data than we have: the SEZs typically account for only a relatively small portion of the province in which they are located. Only the future will show whether the supply shocks affecting Chinese regions will continue to diverge or whether this trend will be reversed.
Chronic Kidney Disease: Combined Effects of Gene Polymorphisms of Tissue Inhibitors of Metalloproteinase 3, Total Urinary Arsenic, and Blood Lead Concentration

The tissue inhibitor of metalloproteinase 3 (TIMP3) is known to be an anti-fibrotic factor. Arsenic, lead, and cadmium exposure and selenium intake may affect TIMP3 expression. The downregulation of TIMP3 expression is related to kidney fibrosis. Genotypes of TIMP3 are related to hypertension and cardiovascular diseases. Therefore, this study explored whether TIMP3 polymorphism is associated with hypertension-related chronic kidney disease (CKD). In addition, the combined effects of TIMP3 polymorphism and total urinary arsenic, blood lead and cadmium, and plasma selenium concentrations on CKD were investigated. This was a case-control study, with 213 CKD patients and 423 age- and sex-matched controls recruited. Polymerase chain reaction-restriction fragment length polymorphism was used to determine TIMP3 gene polymorphisms. The concentrations of urinary arsenic species, plasma selenium, and blood lead and cadmium were measured. The odds ratio (OR) of CKD in the TIMP3rs9609643 GA/AA genotype was lower than that of the GG genotype at high levels of total urinary arsenic and blood lead; the OR and 95% confidence interval (CI) were 0.57 (0.31–1.05) and 0.52 (0.30–0.93), respectively, after multivariate adjustment. High blood lead levels tended to interact with the TIMP3rs9609643 GG genotype to increase the OR of CKD, and gave the highest OR (95% CI) for CKD of 5.97 (2.60–13.67). Our study supports a possible role for the TIMP3rs9609643 risk genotype combined with high total urinary arsenic or with high blood lead concentration to increase the OR of CKD.

Introduction

Chronic kidney disease (CKD) affects >10% of the world's population and has emerged as one of the leading causes of mortality worldwide [1]. Using an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m² to define CKD, the prevalence of CKD in Taiwan was 11.9%, of which only 3.5% of patients were aware of their disease [2]. The incidence of end-stage renal disease in Taiwan ranks first in the world [3]; therefore, exploring the etiology of CKD is an important issue in Taiwan. Our recent study found that high plasma selenium concentrations significantly increased eGFR and decreased the odds ratio (OR) for CKD, and that blood cadmium and lead concentrations and total urinary arsenic concentration significantly decreased eGFR and increased the OR for CKD [4]. Exposure to arsenic, lead and cadmium can cause tubular degeneration, fibrosis, hemorrhage and vacuolation in rat kidney tissue [5]. Studies have also found that arsenic, lead and cadmium can induce oxidative stress and cause nephrotoxicity [6][7][8]. Selenium can reduce the oxidative stress and fibrosis caused by these metals [5,9]. However, the mechanisms by which blood cadmium and lead, total urinary arsenic, and plasma selenium concentrations are associated with CKD have not been fully elucidated.

The tissue inhibitor of metalloproteinase 3 (TIMP3) is a physiological inhibitor of matrix metalloproteinases (MMPs). A disruption of the balance between MMPs and TIMPs can alter the stability and normal function of the extracellular matrix (ECM) and lead to abnormal tissue remodeling and homeostasis [10]. Of the four TIMPs, TIMP3 is the only one with an affinity for proteoglycans in the ECM [11], and it is also known to have anti-fibrotic effects [12].
The downregulation of TIMP3 may enhance the extent of tubulointerstitial fibrosis (TIF) [12]. Recent studies reported that TIF was associated with CKD development and progression [13]. Increased expression of TIMP3 was observed in human kidney 2 (HK-2) epithelial cells under arsenic exposure [14]. Exposure to cadmium during pregnancy causes structural changes in fetal kidney tissue, which can be detected by increased levels of some kidney injury biomarkers in amniotic fluid, such as albumin, osteopontin, vascular endothelial growth factor and TIMP1 [15]. One study found that with high blood lead concentration, both MMP2 and MMP9 were significantly increased, while TIMP2 was significantly decreased, in the placenta of women [16]. A low-selenium diet may lead to decreased selenium content in adult rat kidneys, upregulation of MMP1 and MMP3 and downregulation of their inhibitors (TIMP1 and TIMP3), resulting in renal ultrastructural and ECM damage [17]. Exposure to arsenic, lead and cadmium may be positively or negatively associated with TIMP3, whereas selenium appears to be positively associated with TIMP3. However, the results of current research are inconsistent.

The TIMP3 gene is located on chromosome 22q12.1 [18], and TIMP3 is a 24-kDa secreted protein that binds strongly to the ECM. A study of Chinese Han people found that carriers of the TIMP3rs9619311 TC+CC or TIMP3rs2234921 AG+GG genotypes had a significantly higher risk of carotid plaque than those with the TT or AA genotypes, respectively [19]. A recent study reported that the TIMP3rs9619311 TT genotype had a significantly higher risk of essential hypertension than the TC+CC genotype [20]. One study demonstrated that TIMP3rs9609643 and TIMP3rs8136803 affect individual differences in breast cancer susceptibility and survival [21]. A study found a significantly higher risk of colorectal cancer for the TIMP3rs715521 AG+AA than the GG genotype [22]. Current studies have thus found that TIMP3 gene polymorphisms are associated with carotid plaques, hypertension, and cancer. Whether TIMP3 polymorphisms are associated with hypertension-related CKD remains to be explored. This study explored the association between TIMP3 genotypes and CKD. In addition, the combined effects of TIMP3 genotype and arsenic, lead or cadmium body burden and plasma selenium concentrations on CKD were evaluated.

Study Subjects

This was a hospital-based case-control study. From September 2005 to September 2011, 214 CKD patients and 423 age- and sex-matched healthy controls were recruited at Taipei Medical University Hospital and Taipei Wanfang Medical Center [23]. This study was approved by the Institutional Review Board of Taipei Medical University (N202101029). All study subjects were interviewed by questionnaire and biological samples were collected after they provided their informed consent. Based on blood urea nitrogen, serum creatinine, and proteinuria, the Modification of Diet in Renal Disease (MDRD) formula was used by nephrologists from Taipei Medical University Hospital and Taipei Wanfang Hospital to calculate eGFR and determine the stage of CKD patients:

eGFR (mL/min/1.73 m²) = 186.3 × (serum creatinine)^(−1.154) × (age)^(−0.203) × 1.212 (if the patient is Black) × 0.742 (if female) [24]
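As a quick illustration of the formula above, the short Python sketch below computes the MDRD eGFR; the function name and example values are hypothetical.

```python
# Sketch: MDRD eGFR as quoted above (mL/min/1.73 m^2).
def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool) -> float:
    egfr = 186.3 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if black:
        egfr *= 1.212
    if female:
        egfr *= 0.742
    return egfr

# eGFR < 60 mL/min/1.73 m^2 is the CKD threshold cited in the Introduction.
print(round(egfr_mdrd(1.4, 60, female=True, black=False), 1))
```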
Interview and Bio-Specimen Collection

Study subjects were interviewed using a structured questionnaire by well-trained interviewers. The contents of the questionnaire included sociodemographic data; lifestyle factors such as cigarette smoking habit and consumption of alcohol, coffee, and tea; analgesic usage; and disease history. An EDTA-vacuum syringe was used to collect 5-8 mL of blood, and the buffy coats were separated for DNA extraction and analysis of the TIMP3rs9619311, TIMP3rs11547635, TIMP3rs715572, TIMP3rs9609643, TIMP3rs8136803, and TIMP3rs2234921 genotypes. Red blood cells were separated for measurement of lead and cadmium concentrations, and plasma was separated for measurement of selenium concentrations.

Arsenic, Cadmium, Lead, and Selenium Measurement

To ensure the absence of arsenobetaine or arsenocholine (less toxic than inorganic arsenic and its methylated metabolites), high-performance liquid chromatography was used to separate the urinary arsenic species: arsenite (As(III)), arsenate (As(V)), and the metabolites monomethylarsonic acid (MMA(V)) and dimethylarsinic acid (DMA(V)). The concentration of each arsenic species was determined by a hydride generator linked with atomic absorption spectrometry [25]. Plasma selenium and blood lead and cadmium concentrations were analyzed by inductively coupled plasma mass spectrometry [4]. If an experimental value was lower than the detection limit, the data analysis was carried out at the half-of-detection-limit concentration. The determination method, detection limit, reliability, and validity are shown in Supplementary Table S1. The sum of the As(III), As(V), MMA(V), and DMA(V) concentrations was termed the total urinary arsenic concentration.

Determination of TIMP3 Gene Polymorphisms

Genomic DNA was extracted by digestion with proteinase K followed by phenol and chloroform. The Agena Bioscience MassARRAY System was used according to the manufacturer's instructions to determine the TIMP3rs9619311, TIMP3rs11547635, TIMP3rs715572, TIMP3rs9609643, TIMP3rs8136803 and TIMP3rs2234921 genotypes.

Statistical Analysis

Continuous variables are presented as mean ± standard deviation, while categorical variables are presented as frequencies (percentages). The chi-square test was used to analyze the distribution of categorical variables among the groups of subjects, and to test whether the TIMP3 genotypes of the control group fitted Hardy-Weinberg equilibrium. The Wilcoxon rank-sum test was conducted to compare the continuous variables between CKD cases and controls. Multiple logistic regression was used to evaluate the associations between TIMP3 genotypes and CKD by estimating ORs and 95% confidence intervals (CIs). All models were adjusted for confounders including age, sex, and educational level; consumption of alcohol, coffee, and tea; analgesic usage; and disease histories of diabetes and hypertension. All data were analyzed using SAS 9.4 software (SAS Institute, Cary, NC, USA). A two-sided p-value < 0.05 was considered significant.
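A minimal sketch of the two genotype-level computations described above — an adjusted odds ratio from logistic regression and a Hardy-Weinberg chi-square test — is given below in Python. The data frame, column names, and genotype counts are hypothetical, and only a subset of the listed confounders is shown.

```python
# Sketch: adjusted OR with 95% CI (dominant coding GA/AA vs GG) and a
# Hardy-Weinberg chi-square test on control-group genotype counts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chisquare

rng = np.random.default_rng(2)
m = 600  # hypothetical sample size
df = pd.DataFrame({
    "ckd": rng.integers(0, 2, m),
    "ga_aa": rng.integers(0, 2, m),         # 1 = GA/AA, 0 = GG
    "age": rng.normal(60, 10, m),
    "female": rng.integers(0, 2, m),
    "diabetes": rng.integers(0, 2, m),      # full models also adjust for education,
})                                          # analgesics, hypertension, alcohol/coffee/tea

fit = smf.logit("ckd ~ ga_aa + age + female + diabetes", data=df).fit(disp=0)
or_point = np.exp(fit.params["ga_aa"])
or_ci = np.exp(fit.conf_int().loc["ga_aa"])
print(f"OR = {or_point:.2f}, 95% CI = ({or_ci[0]:.2f}, {or_ci[1]:.2f})")

# Hardy-Weinberg equilibrium in controls (hypothetical genotype counts):
n_gg, n_ga, n_aa = 250, 140, 33
n = n_gg + n_ga + n_aa
p = (2 * n_gg + n_ga) / (2 * n)             # major (G) allele frequency
expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
print(chisquare([n_gg, n_ga, n_aa], f_exp=expected, ddof=1))  # df = 3 - 1 - 1 = 1
```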
Results

Table 1 shows the sociodemographic characteristics, lifestyle, and disease history of CKD cases and controls. The percentage with an educational level above high school was significantly higher in the controls than in CKD cases. There was no difference in the proportion of cigarette smoking between the two groups. The proportion reporting frequent or occasional consumption of alcohol, tea and coffee was significantly higher in controls than in CKD cases. The proportion of CKD patients who routinely used analgesics was higher than in controls. Significantly more CKD patients had hypertension and diabetes than controls.

Table 2 presents the association between TIMP3 polymorphisms and CKD. The gene polymorphisms of TIMP3rs9619311, TIMP3rs11547635, TIMP3rs715572, TIMP3rs9609643, TIMP3rs8136803 and TIMP3rs2234921 were not associated with CKD. Table 3 compares the total urinary arsenic, blood cadmium and lead, and plasma selenium concentrations between the CKD and control groups. The total urinary arsenic and blood cadmium and lead concentrations were significantly higher, while plasma selenium levels were significantly lower, in CKD cases than in controls.

We analyzed the association between TIMP3 genotypes and CKD after stratifying for total urinary arsenic, blood cadmium and lead, and plasma selenium concentrations (Table 4). The median values of total urinary arsenic, blood cadmium and lead, and plasma selenium concentrations in the control group were used as cut-off points for the stratification analysis. The OR of CKD in the TIMP3rs9609643 GA/AA genotype was significantly lower than that in the GG genotype at high total urinary arsenic and blood lead concentrations, but not at low total urinary arsenic and blood lead concentrations. The OR of CKD in the TIMP3rs8136803 GT/TT genotype was lower than that in the GG genotype under low total urinary arsenic, and higher than that in the GG genotype under low blood lead concentration. Thus, TIMP3rs9609643 and TIMP3rs8136803 may interact with total urinary arsenic or blood lead concentrations. Therefore, the combined effects of TIMP3rs9609643 and TIMP3rs8136803 and total urinary arsenic or blood lead concentrations on CKD were subsequently analyzed. However, under blood cadmium and plasma selenium stratification, no association between TIMP3 polymorphisms and CKD was observed.

(Table note: Abbreviations: CKD, chronic kidney disease; TIMP3, tissue inhibitor of metalloproteinase 3; OR, odds ratio; CI, confidence interval. Seven participants were missing for TIMP3rs11547635; four were missing for TIMP3rs2234921 and TIMP3rs9619311; three were missing for TIMP3rs9609643; and five were missing for TIMP3rs715572. a Adjusted for age, sex, educational level, analgesic usage, disease histories of diabetes and hypertension, and alcohol, coffee, and tea consumption. Multiple logistic regression models were used to calculate the association between TIMP3 genotypes and CKD.)

Pairwise analysis of the combined effects of high total urinary arsenic or blood lead levels and TIMP3 risk genotypes is shown in Table 5. The OR of CKD increased significantly, in a dose-response manner, from no risk factor to one risk factor to both risk factors. We observed 3.75-fold increased odds (95% CI 1.89-7.45) of CKD for subjects carrying the TIMP3rs9609643 GG genotype and high levels of blood lead (>37.44 µg/L) compared to controls. The p-value for the interaction term of TIMP3rs9609643 and blood lead concentration was 0.027, and it appeared that TIMP3rs9609643 had a multiplicative interaction with blood lead on CKD; however, the significance disappeared with multivariate adjustment. Other interactions were not significant.

(Table note: Abbreviations: TIMP3, tissue inhibitor of metalloproteinase 3; OR, odds ratio; CI, confidence interval. & tested for linear trend. + 0.05 < p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001. a Adjusted for age, sex, educational level, analgesic usage, disease histories of diabetes and hypertension, and alcohol, coffee, and tea consumption.)
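The multiplicative interaction reported above corresponds to a product term in the logistic model. A minimal, hypothetical Python sketch follows (synthetic data, illustrative variable names, reduced covariate set):

```python
# Sketch: gene-environment interaction on the OR scale, as in the
# TIMP3rs9609643 x blood-lead analysis. 'high_lead' dichotomizes blood lead
# at the control-group median (>37.44 ug/L in the study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
m = 600
df = pd.DataFrame({
    "ckd": rng.integers(0, 2, m),
    "gg": rng.integers(0, 2, m),         # 1 = GG risk genotype, 0 = GA/AA
    "high_lead": rng.integers(0, 2, m),  # 1 = blood lead above the median cut-off
    "age": rng.normal(60, 10, m),
})

fit = smf.logit("ckd ~ gg * high_lead + age", data=df).fit(disp=0)
print(np.exp(fit.params))                # ORs, including the gg:high_lead product term
print(fit.pvalues["gg:high_lead"])       # p-value for the multiplicative interaction
```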
Discussion

We found that the distributions of TIMP3rs9619311, TIMP3rs11547635, TIMP3rs715572, TIMP3rs9609643, TIMP3rs8136803 and TIMP3rs2234921 did not differ between CKD cases and controls. However, subjects with the TIMP3rs9609643 GA/AA genotype had a marginally or significantly lower OR of CKD than those with the GG genotype at high total urinary arsenic or high blood lead concentration. The combined effect of the TIMP3rs9609643 or TIMP3rs8136803 risk genotypes and high total urinary arsenic or high blood lead concentrations gradually increased the OR for CKD with an increasing number of risk factors.

The pathogenesis of CKD is complex and may be caused by a combination of environmental, genetic, and other factors. According to our study, the TIMP3rs9609643 GA/AA genotype was significantly associated with a lower risk of CKD compared with the GG genotype at high total urinary arsenic or at high blood lead concentration. There are few studies on TIMP3rs9609643; of three studies in China, one explored the correlation between TIMP3rs9609643 and thoracic aortic dissection [26], one explored its relationship with osteoarthritis [27], and one explored the correlation with high myopia [28], but all had negative results. However, one study found that women with the TIMP3rs9609643 AA genotype were 60% less likely to develop breast cancer than women with the GG genotype (OR 0.4, 95% CI 0.2-1.0) [21]. We also observed that the TIMP3rs8136803 GT/TT genotype decreased the risk of CKD at low total arsenic levels while increasing the risk of CKD at low blood lead concentrations, compared with the GG genotype. A recent study found that the genotype and allele frequencies of TIMP3rs8136803 differed significantly between subjects with and without primary open-angle glaucoma; the frequency of the TIMP3rs8136803 GG genotype was higher in primary open-angle glaucoma than in controls [29]. However, the TIMP3rs8136803 genotype was not associated with osteoarthritis [27], and one study indicated that women with the TIMP3rs8136803 TT genotype were five times more likely to develop breast cancer than those with the GG genotype (OR 5.1, 95% CI 1.1-24.3) [21]. In addition, breast cancer cases with TIMP3rs8136803 TT were almost four times more likely to have reduced disease-free survival, and there was a trend toward reduced overall survival compared to the GG genotype [21]. A Chinese study found that TIMP3rs2234921 and TIMP3rs9619311 were associated with mixed plaque [19]. Another study pointed out that TIMP3rs9619311 is related to essential hypertension [20]. Studies have found that TIMP3rs9619311 is associated with hepatocellular carcinoma [30] and colorectal cancer [31]. A recent study in Taiwan found that TIMP3rs9619311 was associated with survival in cervical cancer [32]. Additionally, TIMP3rs715572 was associated with colorectal cancer [22] and with survival of adenocarcinoma of the gastroesophageal junction [33]. However, TIMP3rs9619311, TIMP3rs11547635, TIMP3rs715572 and TIMP3rs2234921 were not associated with CKD in our study, and there are also some studies with results similar to ours [34][35][36][37]. At present, there are few studies on the relationship between TIMP3 genotype and disease, and they have inconsistent results, so further investigation is needed. The functional relevance of these polymorphisms is also unclear. They may directly affect the expression or activity of TIMP3, or be markers for other functionally relevant variants, which requires further investigation.
Significantly and gradually increasing ORs for CKD with an increasing number of risk factors (high total urinary arsenic concentration, high blood lead concentration, and the TIMP3rs9609643 or TIMP3rs8136803 risk genotype) were observed in this study. This may be because exposure to arsenic, lead and cadmium induces oxidative stress and fibrosis, resulting in nephrotoxicity [6][7][8]. High concentrations of lead [16] or total urinary arsenic [38] reduced the expression of TIMPs, leading to an imbalance in the MMPs/TIMPs ratio, favoring proteolytic enzyme activity and generating tissue abnormalities. In contrast, some studies found that long-term exposure to arsenic may downregulate TIMP3, and TIMP3 deficiency may lead to oxidative stress [39], resulting in increased renal fibrosis [12,14]. However, the role of TIMP3 in CKD remains unclear. Although the functional significance of the TIMP3rs9609643 and TIMP3rs8136803 polymorphisms is unknown, some of the associations identified in our study support a possible role for these polymorphisms as, when combined with high total urinary arsenic or high blood lead concentration, they increased the OR of CKD.

This study had some limitations. The small number of homozygous individuals with rare alleles may have produced unstable OR estimates. Further studies with larger sample sizes are needed to improve the precision of point estimates when assessing TIMP3 polymorphisms and environmental metal exposure in relation to CKD. The analysis of six TIMP3 polymorphisms may not represent all of the gene's functions. Our study did not analyze gene polymorphisms regulating TIMP3 expression. Further studies should be conducted to assess the function of TIMP3 and its associated gene polymorphisms to determine their role in CKD development.

Conclusions

The risk of CKD related to high levels of blood lead or high levels of total urinary arsenic was modified by the TIMP3rs9609643 GA/AA genotype. High blood lead levels tended to interact with the TIMP3rs9609643 risk genotype to increase the risk of CKD. We recommend future studies of the levels of serum TIMP3 to determine the relevant mechanism underlying the relationships between CKD, TIMP3 polymorphisms, and environmental metals exposure.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph20031886/s1, Table S1. The validity and reliability of urinary arsenic species, plasma selenium, and red blood cell lead and cadmium.

Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Taipei Medical University (N202101029). All study subjects were interviewed by questionnaires and biological samples were collected after they provided their informed consent.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
A mental health paradox: Mental health was both a motivator and barrier to physical activity during the COVID-19 pandemic

The COVID-19 pandemic has impacted the mental health, physical activity, and sedentary behavior of people worldwide. According to the Health Belief Model (HBM), health-related behavior is determined by perceived barriers and motivators. Using an online survey with 1669 respondents, we sought to understand why and how physical activity and sedentary behavior have changed, by querying about perceived barriers and motivators to physical activity that changed because of the pandemic, and how those changes impacted mental health. The following results were statistically significant at p < .05. Consistent with prior reports, our respondents were less physically active (aerobic activity, -11%; strength-based activity, -30%) and more sedentary (+11%) during the pandemic as compared to 6 months before. The pandemic also increased psychological stress (+22%) and brought on moderate symptoms of anxiety and depression. Respondents whose mental health deteriorated the most were also the ones who were least active (depression r = -.21, anxiety r = -.12). The majority of respondents were unmotivated to exercise because they were too anxious (+8%), lacked social support (+6%), or had limited access to equipment (+23%) or space (+41%). The respondents who were able to stay active reported feeling less motivated by physical health outcomes such as weight loss (-7%) or strength (-14%) and instead more motivated by mental health outcomes such as anxiety relief (+14%). Coupled with previous work demonstrating a direct relationship between mental health and physical activity, these results highlight the potential protective effect of physical activity on mental health and point to the need for psychological support to overcome perceived barriers so that people can continue to be physically active during stressful times like the pandemic.

Introduction

During the initial phase of the COVID-19 pandemic, governing bodies worldwide took decisive action to protect their citizens against the novel coronavirus by enforcing public lockdowns. In this study, we not only identify barriers and motivators to PA, but also provide concrete factors practitioners can explore with clients in an effort to promote PA behavior during a global pandemic.

Design and respondents

The ethics application and supporting documents for the study were reviewed and cleared by the McMaster University Research Ethics Board (MREB #3808) to ensure compliance with the Tri-Council Policy Statement and the McMaster Policies and Guidelines for Research Involving Human Participants. Consent was obtained via an online survey preamble. To achieve a small margin of error based on a population size of 37 million, we recruited a total of 1669 respondents over a two-month data collection period (April 23 to June 30, 2020), for a 2% margin of error with 95% confidence intervals. The survey was open to all respondents at least 18 years of age, fluent in English, and able to complete the online survey. Respondents were recruited through the personal social media accounts of the research team and through local news sources (news articles by McMaster University media and the Hamilton Spectator). Respondents were also recruited via a link provided at the end of an op-ed piece published in The Conversation Canada, a national independent news source from the academic and research community.
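The margin-of-error target mentioned above is a standard proportion calculation; a minimal Python sketch, assuming the most conservative proportion p = 0.5, is shown below (the function name is hypothetical).

```python
# Sketch: margin of error for a proportion at 95% confidence, with the finite
# population correction; values follow the recruitment description above.
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    se = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
    return z * se * fpc

# n = 1669 respondents from a population of ~37 million
print(round(margin_of_error(1669, 37_000_000), 3))  # ~0.024, i.e., roughly 2%
```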
The survey consisted of 30 questions and used a mix of multiple-choice, single-choice, and short-answer questions to query respondents about their demographic information and their current and past (prior to the pandemic) physical activity behaviour (minutes/week). Additionally, respondents were asked about their current and past mental health status (i.e., stress levels, anxiety and depressive symptoms). All questions pertaining to physical activity and mental health were designed using validated rating scales. Respondents were included in a draw for 20 cash prizes of $100 CAD, delivered as emailed prepaid vouchers, as remuneration for their participation.

Measurements

Physical activity. The Physical Activity and Sedentary Behavior Questionnaire (PASB-Q) [27] was adapted (i.e., questions were reworded) to quantify self-reported levels of physical activity and sedentary behaviour 6 months prior to and during the COVID-19 pandemic. Respondents were asked to report minutes/week of strength training and aerobic exercise, hours/week of sedentary behavior, and self-rated activity level status on a 5-point scale where 1 = "Completely sedentary", 2 = "Slightly active", 3 = "Very active", 4 = "Recreational athlete", and 5 = "Elite athlete".

Barriers and motivators to exercise. Respondents were asked to report current and prior (i.e., 6 months prior to COVID-19) barriers preventing them from being physically active, using a multiple-choice list (e.g., "I could not/cannot find the time in my day", "I did/do not have access to a gym or recreational facility"), and motivators encouraging them to be physically active (e.g., "To maintain a healthy body weight", "To build muscle and/or strength") (S1 Appendix). The barriers and motivators assessed in the current study have been previously investigated and shown to significantly impact physical activity levels [28,29].

Mental health. Anxiety was measured using an adapted (i.e., on a 5-point scale instead of a 3-point scale, to match the other questionnaires and ease participant burden) version of the Generalized Anxiety Disorder 7-item Scale (GAD-7) [30]. Respondents were asked how often they had felt bothered by each anxiety symptom since the onset of COVID-19. Response options were 1 = "Not at all", 2 = "Several days", 3 = "More than half the days", 4 = "Most days", and 5 = "Every day". All seven items were combined to form a global measure of anxiety. Depression was measured using an adapted version of the Patient Health Questionnaire (PHQ-9) [31]; all but one of the 9 items (i.e., the one pertaining to suicidal thoughts and/or self-harm) were included, for a total of 8 items, which were combined into a global measure of depression. Respondents were asked how often they felt bothered by each depression symptom since the onset of COVID-19. Response options were 1 = "Not at all", 2 = "Several days", 3 = "More than half the days", 4 = "Most days", and 5 = "Every day". Question 3 from the Perceived Stress Scale (PSS) [32] was used to measure psychological stress. Respondents were asked how often they felt nervous and "stressed", both prior to and since the onset of COVID-19, on a 5-point scale where 1 = "Never", 2 = "Sometimes", 3 = "Fairly often", 4 = "Often", and 5 = "Very often".
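Scoring these adapted scales is a simple item sum. The sketch below illustrates it in Python with hypothetical column names (gad1..gad7 are placeholders, not the survey's actual field names); note the resulting ranges match the maxima reported in the Results.

```python
# Sketch: global anxiety/depression scores as sums of 5-point item responses.
import pandas as pd

def global_score(df: pd.DataFrame, prefix: str, n_items: int) -> pd.Series:
    items = [f"{prefix}{i}" for i in range(1, n_items + 1)]
    return df[items].sum(axis=1)

# Adapted GAD-7: 7 items scored 1-5 -> range 7-35 (max 35, as reported later).
# Adapted PHQ (8 items): scored 1-5 -> range 8-40 (max 40).
responses = pd.DataFrame({f"gad{i}": [1, 3, 5] for i in range(1, 8)})
print(global_score(responses, "gad", 7))  # 7, 21, 35 for the three example rows
```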
To capture an overall change in mental health since the onset of COVID-19, respondents were asked to rate their overall mental health since COVID-19 in relation to how it was in the six months prior to COVID-19, with the options of "Much better", "Better", "No change", "Worse", or "Much worse".

Statistical analyses

The IBM SPSS statistics software platform (Version 26) was used to carry out all analyses. Descriptive statistics (means and standard deviations for continuous variables, and frequency counts and percentages for categorical variables) were computed to describe demographic characteristics, mental health, and physical activity levels. Normality was assessed using Shapiro-Wilk tests and through visual inspection of histograms. For all analyses, significance was considered at p < 0.05, and nonparametric tests were chosen wherever data did not meet the assumption of normality. For correlational analysis, all respondents who left 100% of the survey questions blank were removed (N = 166). Physical activity and mental health data were then screened for missingness, which ranged from 8.2-11.8% and 10.2-17.3%, respectively. Missing cells were subsequently imputed using expectation-maximization [33] for all physical activity and mental health variables. In cases where a negative physical activity datum or a score exceeding the maximum mental health score was imputed, the datum was removed. The physical activity and mental health data used in correlations had a resulting 0.1-0.5% and 0.1% missingness, respectively.
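Although the authors ran their analyses in SPSS, the paired tests they describe are straightforward to reproduce elsewhere. The Python sketch below, with made-up data, shows the Wilcoxon signed-rank and McNemar tests used for the before/during comparisons.

```python
# Sketch: paired before/during comparisons - Wilcoxon signed-rank for ordinal
# ratings and McNemar for changes in category frequencies. Data are hypothetical.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

before = np.array([3, 2, 4, 1, 3, 2, 2, 4])  # stress ratings before COVID-19
during = np.array([4, 3, 4, 2, 5, 3, 3, 4])  # stress ratings during COVID-19
print(wilcoxon(before, during))              # tests for a shift in paired ratings

# McNemar: paired 2x2 table for one category, e.g., feeling stressed "very often"
table = np.array([[50, 30],                  # rows: before (no / yes)
                  [ 5, 15]])                 # cols: during (no / yes)
print(mcnemar(table, exact=True))
```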
Sample characteristics and mental health status

Survey respondents were primarily female, between 18-29 years of age, living in Canada, and well-educated (Table 1). Most respondents had spent at least four weeks in social isolation at the time of the survey, and a large portion were currently working regular hours from home. More respondents reported that they were making "less than enough" since the onset of the pandemic compared to their income within the 6 months before the pandemic. Although few respondents indicated a close exposure to someone with COVID-19 or COVID-19 symptoms, nearly half knew someone immunocompromised and therefore at high risk.

Impact of the pandemic on mental health status

Average anxiety (19.0±0.2, max = 35) and depression (20.2±0.2, max = 40) scores reflected moderate symptoms of both anxiety and depression. Tables 2 and 3 show positive correlations between anxiety and depression from the onset of the pandemic, revealing that individuals with more anxiety symptoms also had more depressive symptoms. To identify respondents at higher risk for poor mental health, between-group differences were examined. With respect to income, respondents making "less than enough" had significantly higher levels of anxiety and depression than those making "just enough" (H(3,1310) = 6.93, p < 0.01; H(3,1278) = 9.19, p < 0.01) and "more than enough" (H(3,1310) = 8.64, p < 0.01; H(3,1278) = 10.96, p < 0.01).

To assess changes in self-perceived psychological stress, a Wilcoxon signed-ranks test was performed on ratings before and during the initial stages of the COVID-19 pandemic, and McNemar's tests were used to assess changes in frequencies. There was a significant increase in stress levels during the pandemic (Z = -17.00, p < 0.01) (Fig 1). Since the onset of COVID-19, 22% of respondents who had felt stressed "sometimes" (p < 0.01) now felt stressed "often" (+7%; p < 0.01) or "very often" (+17%; p < 0.01) (Fig 2). The pandemic did not impact the number of respondents who reported "never" (p = 0.13) feeling stressed or feeling stressed "fairly often" (p = 0.28).

Impact of the pandemic on physical activity and sedentary behaviour

To test the hypothesis that physical activity levels dropped during the initial stages of the COVID-19 pandemic, Wilcoxon signed-rank statistics were computed on changes in aerobic activity, strength-training activity and sedentary behaviour, and McNemar's tests were used to assess the change in self-identified exercise status. Since the onset of COVID-19, respondents' aerobic activity decreased by 22 minutes (-11%; Z = -2.50, p < 0.05), their strength-based activity decreased by 32 minutes (-30%; Z = -7.89, p < 0.01), and their sedentary time increased by 33 minutes (+11%; Z = -14.18, p < 0.01) (Figs 3 and 4). Respondents who had been "recreational athletes" (-6%; p < 0.01), "very active" (-6%; p < 0.01), or "moderately active" (-5%; p < 0.01) before the pandemic now identified as being "completely sedentary" (+17%; p < 0.01). There was no change in the frequency of respondents who self-identified as "elite athletes" (p = 0.12) (Fig 5).

Tables 2 and 3 show correlations between physical activity and sedentary behaviour both before and during the COVID-19 pandemic. Although total physical activity decreased during COVID-19, each respondent's physical activity level remained proportional to their activity level prior to the pandemic. To identify respondents at higher risk for decreased physical activity during the initial stage of the COVID-19 pandemic, Kruskal-Wallis tests were conducted on between-group differences in physical activity change by income and age. Respondents who made "less than enough" (H(3,1384) = -3.60, p < 0.01) or "just enough" (H(3,1384) = -2.96, p < 0.01) income to meet their needs had significantly lower levels of MVPA during COVID-19 than those making "more than enough". Although this trend was seen overall, the effect was largest for the 18-29 age group. A similar trend was observed in sedentary behaviour, wherein those who made "less than enough" experienced greater increases in daily sedentary time compared to those making "more than enough".

Did physical activity and sedentary behaviour predict mental health during the pandemic?

When examining the change in total physical activity level split by the change in mental health status, respondents whose mental health got "worse" or "much worse" had greater reductions in physical activity since COVID-19 than those who experienced "no change" or got "better" or "much better" (Fig 6; H(5,1381) = 7.23, p < 0.01; H(5,1381) = 6.23, p < 0.01). Spearman's rank-order correlations were conducted to assess relationships between physical activity (prior, during, change) and mental health status (Tables 2 and 3). Overall, respondents who reported a greater decrease in their aerobic and strength-based physical activity during the pandemic also experienced more anxiety and depression (r(1544) = -0.12, p < 0.01; r(1544) = -0.21, p < 0.01).
This was not only reflected in their activity levels during the pandemic (i.e., people who engaged in less physical activity during the pandemic were more anxious and depressed; MVPA: r(1540) = -0.18, p < 0.01; r(1540) = -0.31, p < 0.01; ST: r(1542) = -0.14, p < 0.01; r(1542) = -0.22, p < 0.01) but also before (i.e., those who engaged in less physical activity before the pandemic were more depressed during the pandemic; r(1539) = -0.11).

As an exploratory analysis, we conducted a series of linear regressions to determine whether self-reported levels of anxiety and depression predicted self-perceived barriers and motivators to exercise, and they did. Unsurprisingly, respondents who reported greater depressive symptoms were more likely to endorse 'lack of self-motivation' as a barrier to engaging in physical activity during the pandemic (F(1,1283) = 29.97, p < 0.01, R² = 0.02). Respondents who reported greater symptoms of anxiety were more likely to endorse 'stress relief' as a motivator to exercise during the pandemic.

(Fig 6 caption: Change in total physical activity by change in mental health status. Respondents whose mental health got "worse" or "much worse" had greater reductions in physical activity time since COVID-19 compared to those who experienced "no change" or got "better" or "much better", p < 0.01.)

(Fig 7 caption: Motivators that increased significantly included 'stress reduction', 'anxiety relief', 'improve sleep', and 'no motivators'. Motivators that decreased significantly included 'weight loss', 'strength building', 'enjoyment', 'appearance goals', 'social engagement', 'sports training', and 'healthcare provider (HCP) recommended'. There was no change in how 'increase energy' was viewed as a motivator to exercise during the pandemic, p > 0.05.)

Discussion

The present study examined the effect of the COVID-19 pandemic on the mental health, physical activity, and sedentary behavior of individuals undergoing pandemic lockdowns and physical distancing measures. Respondents reported higher psychological stress and moderate levels of anxiety and depression brought on by the pandemic. At the same time, the pandemic made it more difficult for them to be active, with aerobic activity down 11%, strength training down 30%, and sedentary time up 11% in comparison to their self-reported activity 6 months prior to the pandemic. Critically, respondents whose physical activity declined the most during the pandemic also experienced the worst mental health outcomes, whereas respondents who maintained their physical activity levels despite the pandemic fared much better mentally.

Why was it so difficult for people to stay active during the pandemic? To address this important question, we assessed barriers and motivators to being physically active that may have changed during the pandemic. According to the health belief model, lack of time is the most common perceived barrier to being physically active [27]. However, the context of the global pandemic decreased the perceived barrier of lack of time but created new barriers. Overall, respondents were not motivated to be physically active because they felt too anxious and lacked social support. Respondents who were able to maintain their activity levels noticed a shift in what motivated them: they were less motivated by physical health and appearance, and more motivated by mental health and wellbeing.
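For completeness, the exploratory regressions reported in the results above can be sketched as below. The data are invented, and the effect size is set small to mirror the modest R² reported; the barrier-endorsement variable is a hypothetical stand-in for the survey item.

```python
# A minimal sketch (hypothetical data) of the exploratory linear regressions
# reported above, in which depression scores predict endorsement of a barrier.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500

depression = rng.normal(20, 5, size=n)  # hypothetical depression scores
# Hypothetical endorsement of 'lack of self-motivation' (higher = stronger),
# weakly driven by depression, mirroring the small R^2 reported above.
barrier = 0.05 * depression + rng.normal(0, 1, size=n)

X = sm.add_constant(depression)
model = sm.OLS(barrier, X).fit()
print(f"F = {model.fvalue:.2f}, p = {model.f_pvalue:.4f}, R^2 = {model.rsquared:.3f}")
```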
Stress relief, anxiety reduction, and sleep improvements were among the top motivators that increased during the pandemic, and indeed, research supports the use of physical activity for brain health [34], stress management [35], and sleep quality [36]. However, our results highlighted a paradox, with mental health being both a motivator and a barrier to physical activity. People wanted to be active to improve their mental health but found it difficult to be active due to their poor mental health. For example, despite the anxiolytic effects of exercise [34], respondents viewed their anxiety as a barrier to being physically active. Likewise, respondents who were more depressed were also less motivated to engage in physical activity, and amotivation is a symptom of depression itself. Although this is not a new challenge for clinicians whose depressed patients struggle to adhere to a prescribed exercise program [37], the stressfulness of the pandemic has made this a global issue that now must be considered when devising physical activity programs to support the mental wellbeing of citizens.

Was the drop in physical activity from the pandemic a cause or consequence of worsened mental health? Although this study cannot answer that question, it suggests the benefits of a two-pronged approach to promoting physical activity during stressful times that includes: 1) adopting a mode of physical activity that supports mental health, and 2) providing support to help minimize perceived psychological barriers to exercise [38]. For example, symptoms of anxiety may increase with high-intensity exercise, and therefore moderate-intensity exercise might be preferable [39]. At the same time, to help overcome "feeling too anxious to exercise", people should be encouraged to schedule their physical activity ahead of time in a calendar [40] to reduce feelings of uncertainty and decision fatigue that can aggravate their anxiety symptoms [41].

Not surprisingly, government-mandated closure of gyms and other recreational training facilities made it more difficult for people to be physically active. This was reflected in the lack of necessary space and equipment being reported as major barriers to being physically active during the pandemic. The pandemic forced a shift to doing everything at home, but not everyone's home is large enough or well-equipped to support their physical activity needs. Indeed, income level was predictive of activity level during the pandemic. People who reported "just enough" or "less than enough" income experienced greater decreases in physical activity and worsening mental health, especially younger adults aged 18 to 29 years old. Interestingly, these findings do not mirror the common trend that physical activity level declines with age [18] and instead highlight a potential interaction between age and income that may reveal unique barriers to being physically active. It is plausible that younger adults who typically work longer hours and earn less are lacking both the time (e.g., due to long hours) and space (e.g., smaller dwellings) to meet physical activity goals. Outdoor activity could be a viable substitute [42], although this was not permitted in some countries during the pandemic [43]. Furthermore, increasing the number of repetitions performed during resistance training exercises can serve to adjust relative training intensity if lack of equipment is perceived as a barrier [44]. On top of being less active, our respondents reported spending significantly more time seated.
The pandemic increased sedentary time by 10%, or approximately 30 minutes per day. Although this may not seem like a lot, increasing sedentary time by just one hour has been associated with a 12% greater risk of mortality over a 6-year period [45]. But sedentary behavior is not only associated with poor physical health [46]; it is also associated with poor mental health, including lower perceived ratings of mental health and poorer quality of life [47]. Prolonged periods of sedentary behavior increase inflammatory markers [48] that may exacerbate symptoms of depression and anxiety [49]. Breaking up sedentary time with short, frequent breaks (e.g., 1-2 minutes every half hour) may be sufficient to negate the negative health outcomes of sedentary behaviour. Research shows that shorter, frequent breaks are easier to adhere to than longer, infrequent breaks [50] and can reduce sedentary behavior by more than 35 minutes per day, which would be enough to counteract the increase observed in this study.

Despite the valuable insights provided by this study, it is not without limitations. Our sample consisted mainly of young (18-29 years), highly educated (Bachelor's degree or higher), female-identifying Canadian inhabitants, which may limit the generalizability of the results. We recognize that this bias may have been partially attributable to one of our modes of recruitment being through an academic news source. On average, our respondents were meeting the physical activity recommendations [17], which is not representative of the population at large. Moreover, a self-reported web-based survey was used to collect data, so response accuracy was unverifiable, and respondents required a device to access the internet; however, our large sample size would help minimize the impact of individual bias in reporting.

In conclusion, our findings highlight the importance of physical activity in mental health while also capturing the most prevalent perceived barriers and motivators to exercise during a global pandemic. These findings have the potential to inform health and fitness practitioners as they navigate their practice during a pandemic. During stressful times, like the COVID-19 pandemic, people are especially motivated to be physically active for their mental health but may be too anxious or depressed to partake. Our results point to the need for additional psychological supports to help people maintain their physical activity levels during stressful times in order to minimize the psychological burden of the pandemic and prevent the development of a mental health crisis.
Differential Patterns of Gyral and Sulcal Morphological Changes During Normal Aging Process

The cerebral cortex is a highly convoluted structure with distinct morphologic features, namely the gyri and sulci, which are associated with functional segregation or integration in the human brain. During the lifespan, brain atrophy accompanied by cognitive decline is a well-accepted aging phenotype. However, the detailed patterns of cortical folding change during aging, especially the changing age-dependencies of gyri and sulci, which are essential to brain functioning, remain unclear. In this study, we investigated the morphology of the gyral and sulcal regions from pial and white matter surfaces using MR imaging data of 417 healthy participants across adulthood to old age (21-92 years). To elucidate the age-related changes in the cortical pattern, we fitted cortical thickness and intrinsic curvature of gyri and sulci using the quadratic model to evaluate their age-dependencies during normal aging. Our findings show that, compared with gyri, sulcal thinning is the most prominent pattern during the aging process, and the gyrification of the pial and white matter surfaces was also affected differently, which implies the vulnerability of functional segregation during aging. Taken together, we propose a morphological model of aging that may provide a framework for understanding the mechanisms underlying gray matter degeneration.

INTRODUCTION

The morphology and function of the human brain change throughout the aging process. However, the mechanisms underlying how the structures of gyri and sulci are altered with age remain unclear. In recent decades, in vivo magnetic resonance imaging (MRI) has been widely utilized to investigate the effects of aging on the human brain (Good et al., 2001; Jernigan et al., 2001; Raz et al., 2004; Salat et al., 2004), and these studies have provided information on how the structures of the human brain change during the course of a lifespan. The decrease in cortical volume is the most dramatic change that occurs during aging (Scahill et al., 2003). Nevertheless, the highly convoluted and complex nature of the cerebral cortex implies that aspects such as folding and thickness of the cortex may have different influences on cortical morphology and brain function. In that case, in addition to volumetric measurements, surface area, gyrification, and thickness measurements also provide detailed information on brain morphology (Panizzon et al., 2009; Winkler et al., 2010; Gautam et al., 2015).

The cerebrum attains its folding structures through a complex, orchestrated set of systematic mechanisms including differential proliferation, mechanical buckling, and differential expansion (Richman et al., 1975; Van Essen et al., 2018; Llinares-Benadero and Borrell, 2019). Previous studies have suggested that the development of cortical anatomy is dominated by genetic factors rather than random convolutions (Peng et al., 2016). These genetic factors could contribute to changes of cortical structure throughout the lifespan (Fjell et al., 2015; Ronan and Fletcher, 2015). As such, previous findings suggest that cortical features might also degenerate in a non-random, systematic way. It is worth noting that the locations of specific gyri and sulci (e.g., the central sulcus or visual cortex) are consistent across individuals and correspond to regional functions, and these structures are considered structural-functional entities (Brodmann, 1909; Welker, 1990).
From this point of view, gyrification and cortical thickness are thought to reflect functional aspects of the cortex, such as intelligence, behavior complexity, and cognition (Kaas, 2013; Gautam et al., 2015). Gyrification is thought to be a mechanical process that causes the surface to buckle (Ronan et al., 2014; Van Essen et al., 2018), although the primary underlying mechanism is still under debate (Xu et al., 2010; Ronan and Fletcher, 2015; Llinares-Benadero and Borrell, 2019). Traditionally, the local gyrification index (LGI) (Schaer et al., 2008) has been used as a proxy to probe the regional folding degree of the human brain (Nanda et al., 2014; Zhang et al., 2014). However, due to the ratio principle of the LGI, it has less sensitivity than intrinsic curvature in describing the complex pattern of the cortical surface, especially in deep brain regions, e.g., the insula (Griffin, 1994; Ronan et al., 2011). It has been suggested that intrinsic curvature is a more sensitive index to describe cortical folding and is presumed to reflect the differential development and cortical connectivity in different areas of the cortex (Ronan et al., 2014). Intrinsic curvature of the pial and white matter surfaces has also been found to be related to structural changes in both the superficial and deeper layers of the cortex (Ronan et al., 2012; Wagstyl et al., 2016).

Cortical thickness is a common feature reflecting the total neuronal bodies in the cortex (Nadarajah and Parnavelas, 2002). Sulci, which are thinner than gyri, communicate locally with neighboring structures, while gyri act as functional centers that connect remote gyri to neighboring sulci (Fischl and Dale, 2000; Deng et al., 2014). Stronger functional connectivity was found in gyri than sulci, as supported by a series of experiments (Deng et al., 2014). Previous studies have shown that cognitive performance is associated with pattern changes in regional gyri (Jones et al., 2006; Turner and Spreng, 2012; Gregory et al., 2016), and that the variability of sulcal volume is related to the progression of neurological diseases in patients (Mega et al., 1998; Sullivan et al., 1998; Im et al., 2008). In the healthy neurodevelopmental process, the cortex nonuniformly thins and demonstrates increased thinning in sulci compared to gyri (Vandekar et al., 2015). Moreover, gyri and sulci exhibit opposite trends in curvature changes during aging (Magnotta et al., 1999) and show different kinds of specialized organizations and connectivity (Welker, 1990). Together, the effects of aging on the regional thickness and gyrification of the brain result in decreased cognitive functions (Salat et al., 2004; Gregory et al., 2016). However, sulci have recently been deemed to be more functionally segregated in structural connectivity, and increasing age is accompanied by decreasing segregation in large-scale brain systems (Liu et al., 2017). The morphological evidence for the mechanism underlying such degeneration of the human brain, reported by the studies mentioned above and others, is not unified. Thus, investigations that distinguish and compare gyral and sulcal morphology are critical. Previous studies have suggested that the relationship between cortical degeneration and advancing age is not linear, and the degeneration trajectories of cortical features might also differ (Klein et al., 2014; Storsve et al., 2014; Cao et al., 2017).
Therefore, understanding the variations in brain morphology and their trajectories throughout the lifetime is a crucial step toward revealing the epicenter of aging-related degeneration of cortical morphology, because the pattern may reflect the configuration of connectivity of the brain (Ronan et al., 2011). By using an image dataset collected on a single scanner with a large sample size and a broad age range, this study aimed to investigate the following three issues at both the whole-brain and regional levels: (1) whether the effects of age on cortical morphological features are nonlinear across adulthood to old age; (2) whether age differentially and/or systematically affects gyri and sulci; and (3) whether the pial and white matter surfaces display distinct variations in their patterns during aging. We believe that measuring detailed morphological features to depict the degenerative pattern of the cerebral cortex could help elucidate underlying degeneration mechanisms.

Participant Characteristics and Image Data Acquisition

A total of 417 healthy Chinese controls were recruited from northern Taiwan, and their ages ranged from 21 to 92 (Male/Female: 211/206). All of the included participants had sufficient visual and auditory acuity to undergo basic cognitive assessment. This research was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Taipei Veterans General Hospital. Written informed consent was obtained from all participants after they had been given an overview of this study. A trained research assistant used the diagnostic structured Mini-International Neuropsychiatric Interview (M.I.N.I.) to evaluate each subject (Sheehan et al., 1998). The participants' cognitive functions were assessed using the Mini-Mental State Examination (MMSE) (Folstein et al., 1975), Wechsler Digit Span tasks (forward and backward), and the Clinical Dementia Rating (CDR) scale (Hughes et al., 1982) to avoid enrolling participants with possible dementia or cognitive impairments. Subjects who met any of the following exclusion criteria were not enrolled in the study: (1) any Axis I psychiatric diagnosis according to the Diagnostic and Statistical Manual of Mental Disorders-IV (American Psychiatric Association, 2000), such as mood disorders; (4) an MMSE score of less than 24; and/or (5) elderly (age 65 or over) with a CDR score over 0.5. The demographic information for the subjects is listed in Table 1.

All MRI scans were performed on a 3 Tesla Siemens scanner (MAGNETOM Trio TIM system, Siemens AG, Erlangen, Germany) with a 12-channel head coil at National Yang-Ming University. High-resolution structural T1-weighted (T1w) MRI scans were acquired with a three-dimensional magnetization-prepared rapid gradient echo sequence (repetition/echo time = 2,530/3.5 ms, inversion time = 1,100 ms, field of view = 256 mm, flip angle = 7°, matrix size = 256 × 256, 192 sagittal slices, isotropic voxel size = 1 mm³, and no gap). Each participant's head was immobilized with cushions inside the coil to minimize the generation of motion artifacts during image acquisition.
Cortical Reconstruction

The pial and white matter surfaces of each subject were reconstructed using an automated cortical surface reconstruction approach in FreeSurfer 5.3 (http://surfer.nmr.mgh.harvard.edu) according to the following steps: registration, skull-stripping, segmentation of the gray matter, white matter, and cerebrospinal fluid, tessellation of the gray-white matter boundary, automated topology correction, and surface deformation according to intensity gradients to optimally place the gray-white and gray-cerebrospinal fluid borders at the locations of the greatest shifts in intensity, which define the transition to the other tissue class. The vertices were arranged in a triangular grid with a spacing of approximately 1 mm (∼16,000 grid points) in each hemisphere. To catch any inaccuracies, the quality of the segmentation and surface reconstruction was checked carefully by four researchers using a double-blinded method (the dataset was divided into four parts; each researcher checked two parts of the dataset, and every part was checked twice by two different people). A subject's data were excluded if two people marked them as poor quality. For data marked only once by the researchers, we checked the data again and decided whether they should be removed from further processing and analyses. A total of 15 subjects were excluded in the end because of poor quality. We separated both the left and right hemispheres into gyral and sulcal regions using FreeSurfer-generated mean curvature (concave-convex division; sulci: mean curvature value > 0; gyri: mean curvature value < 0) for further comparison.

Cortical Thickness

By calculating the shortest distance between the pial surface and the gray matter-white matter boundary of the tessellated surface, we obtained the vertex-wise cortical thickness. To first validate the aging dataset, we tested the relationship between thickness and age across the whole-brain vertices. The surface was smoothed by a 15-mm Gaussian kernel. The effects of smoothing with 10 and 20 mm kernels can be seen in Supplementary Figures 2, 3. To investigate the differential effects of aging between the two cortical features, we further calculated the average thicknesses of the gyral and sulcal regions and the ratio of the gyral and sulcal thicknesses (Gyri/Sulci ratio = gyral thickness divided by sulcal thickness).

Intrinsic Curvature

Intrinsic curvature is a fundamental property of a surface, and its measurements reflect higher-complexity intrinsic information of surfaces and may provide a more sensitive measure of the cortex than other larger-scale gyrification measures (Ronan et al., 2012). In addition to the curve values, intrinsic curvature also contains the shape information that gives rise to a non-uniform expansion of the surfaces and is demonstrated to have a greater spatial frequency when quantified at a millimeter scale (Ronan et al., 2011). The degree of intrinsic curvature depends on the degree of differential expansion, with a bigger differential resulting in a greater degree of curvature, which might reflect the underlying connectivity of the human cortex (Ronan et al., 2014). Thus, it has been hypothesized that the higher the intrinsic curvature value, the higher the complexity, underlying connectivity, and folding or curves in a region, although direct evidence of the relationship between intrinsic curvature and connectivity has not yet been presented in the literature.
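Before turning to the curvature computation, the concave-convex division and thickness ratio described above can be roughly sketched as follows. The study's own processing used FreeSurfer and MATLAB; the use of nibabel here is an assumed alternative, and the file paths are placeholders for a subject's FreeSurfer output.

```python
# A minimal sketch of the gyri/sulci division by the sign of FreeSurfer's
# mean curvature, and of the gyral/sulcal thickness ratio used in this study.
# File paths are placeholders for a subject's FreeSurfer 'surf' directory.
import nibabel as nib

curv = nib.freesurfer.read_morph_data("subject/surf/lh.curv")        # mean curvature
thick = nib.freesurfer.read_morph_data("subject/surf/lh.thickness")  # mm per vertex

sulci = curv > 0   # concave vertices (sulcal, by this study's convention)
gyri = curv < 0    # convex vertices (gyral)

gyral_thickness = thick[gyri].mean()
sulcal_thickness = thick[sulci].mean()
ratio = gyral_thickness / sulcal_thickness

print(f"gyri: {gyral_thickness:.2f} mm, sulci: {sulcal_thickness:.2f} mm, "
      f"gyri/sulci ratio: {ratio:.3f}")
```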
The vertex-wise intrinsic curvature was calculated using Caret software (v5.65, https://www.nitrc.org/projects/caret/) as the product of the principal curvatures (Ronan et al., 2011, 2012). The post-processing and filtration of the curvature were done in MATLAB (The MathWorks, Inc., Natick, MA, USA). We took the absolute values of intrinsic curvature for the pial and white matter surfaces. A low-pass filter (threshold = 2 mm⁻²) was applied to minimize error, keep the curvature values compatible with the resolution of the cortical reconstruction, and remove abnormal values from further analysis (Ronan et al., 2011, 2012). Additionally, the intrinsic curvature of the pial and white matter surfaces was extracted separately for the statistical analyses. To first validate the aging dataset, a vertex-wise analysis was applied to examine the relationship between gyrification and age. The surface was smoothed by a 15-mm Gaussian kernel. The effects of smoothing with 10 and 20 mm kernels can be seen in Supplementary Figures 2, 3. Next, we calculated the average intrinsic curvatures for gyral and sulcal regions and the gyri/sulci ratio (Gyri/Sulci ratio = gyral intrinsic curvature divided by sulcal intrinsic curvature) on the pial and white matter surfaces to investigate the differential aging effects.

Regional Analysis

We performed a regional analysis by dividing the cortical surface of each subject into five lobes: frontal, temporal, parietal, occipital, and cingulate. The insula was excluded from the analysis because it is situated deep in the lateral sulcus. The lobes of the brain were defined by the Desikan-Killiany atlas (Desikan et al., 2006). The average gyral and sulcal thickness and intrinsic curvature, and their gyri/sulci ratios, were calculated in each lobe.

Statistical Analyses and Curve Fitting

For each hemisphere, an age correlation analysis of cortical thickness and intrinsic curvature (on the pial and white matter surfaces separately) was first tested using the General Linear Model vertex-by-vertex, with gender and total intracranial volume as covariates, in order to reveal any general variation trends on the brain surface. The linear, quadratic, and cubic models were applied in the regression analyses across age to determine the age-dependencies of cortical thickness (gyral thickness, sulcal thickness, and gyri/sulci thickness ratio) and intrinsic curvature (gyral intrinsic curvature, sulcal intrinsic curvature, and gyri/sulci intrinsic curvature ratio on the pial and white matter surfaces separately). For model selection, the linear (y = p·age + p1), quadratic (y = p·age² + p1·age + p2), and cubic (y = p·age³ + p1·age² + p2·age + p3) models were all tested using averaged whole-brain measurements, and we chose the best model according to goodness-of-fit criteria, including the Akaike information criterion (AIC) (Akaike, 1974), root mean square error (RMSE), and R². To test the significance of each fitted model and minimize type-I errors, we applied permutation-based multiple testing on all age-dependencies, reassigning age randomly 10,000 times. All p-values were adjusted by the Bonferroni correction (p < 0.05).

Relationships Between Cognitive Performance and Structural Measures

We examined whether the general cognitive performance (MMSE/Digit Span tasks) can be explained by the structural measures of gyri and sulci on the pial and white matter surfaces, after controlling for covariates.
We further hypothesized that different cortical measures may contribute different amounts of effect to cognitive performance. Therefore, hierarchical multiple regression analysis was performed in SPSS to investigate the relationship between general cognitive performance (MMSE/Digit Span tasks, as the dependent variable) and the structural measures of gyri and sulci on the pial and white matter surfaces (independent variables), while age, gender, TIV, and education were used as covariates of non-interest in the regression model. To test the significance of the regression models, we used a p-value threshold of 0.05.

Stepwise Regression Predicting Age Using the Structural Measurements as Predictors

Stepwise regression analysis was performed to determine the relative contribution of the structural measurements to chronological aging. Sex and TIV were forced into the stepwise model as covariates, and the six structural measurements, including gyral/sulcal thickness and intrinsic curvature on the pial or white matter surface, were then all entered into the stepwise model at once using a selection criterion of p < 0.05.

Aging Effect in Brain Tissue Volume

To ensure that the recruited sample reflected the aging process rather than ongoing development of brain tissues, we first examined the relationship between brain tissue volume (GMV, WMV, CSFV, and TIV) and age across 21-92 years old. The GMV, WMV, CSFV, and TIV are plotted as a function of age in Supplementary Figure 1. We confirmed that GMV and WMV decrease with age, CSFV increases with age, and no correlation between TIV and age was observed in the analyzed sample.

Vertex-Wise Linear Correlations Between Age and Cortical Measurements

Across the age range of 21-92 years, vertex-wise cortical thickness showed a global negative linear correlation with age in both hemispheres (Figure 1A). The vertex-wise age correlation results showed a regional decline (uncorrected p < 0.01) of the intrinsic curvature on the pial surface (Figure 1B). However, the intrinsic curvature had an overall increasing pattern (uncorrected p < 0.01) on the white matter surface (Figure 1C).

Goodness-of-Fit Tests of the Polynomial Regression Models

We examined the goodness-of-fit of the linear, quadratic, and cubic models, based on two parameters: root mean square error (RMSE) and the Akaike information criterion (AIC). For cortical thickness, quadratic models stood out as the best fit (Supplementary Table 1). On the pial surface, bigger differences were found between the linear and quadratic models in both the RMSE and AIC results (Supplementary Table 2). The parameters of the quadratic and cubic models were nearly the same, although the cubic values were smaller in most cases. The goodness-of-fit profile on the white matter surface was identical to that of the pial surface (Supplementary Table 3). The RMSE and AIC values were steady and smaller in the quadratic and cubic models compared with those of the linear model. Although some parameters were smaller for the cubic model compared with the quadratic model, those of the cubic and quadratic models were very similar. Hence, we integrated the findings and only examined the quadratic effects in the following analyses, which illustrated the aging process better but were not over-fit (for the linear results, see Supplementary Figure 4 and Supplementary Table 4).
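A minimal sketch of this model comparison, with invented age/thickness data rather than the study's, fits the linear, quadratic, and cubic polynomials and compares RMSE, AIC, and R². The AIC is computed here from the Gaussian log-likelihood, which is one common formulation and an assumption about how it was calculated in the study.

```python
# A minimal sketch (hypothetical data) of comparing linear, quadratic, and
# cubic fits of a cortical measure against age using RMSE, AIC, and R^2.
import numpy as np

rng = np.random.default_rng(3)
age = rng.uniform(21, 92, size=400)
thickness = 2.8 - 0.004 * age - 0.00005 * age**2 + rng.normal(0, 0.08, size=400)

n = len(age)
for degree in (1, 2, 3):
    coeffs = np.polyfit(age, thickness, degree)
    resid = thickness - np.polyval(coeffs, age)
    rss = np.sum(resid**2)
    rmse = np.sqrt(rss / n)
    k = degree + 1  # number of fitted parameters
    # AIC from the Gaussian log-likelihood: n*ln(RSS/n) + 2k (common formulation)
    aic = n * np.log(rss / n) + 2 * k
    r2 = 1 - rss / np.sum((thickness - thickness.mean())**2)
    print(f"degree {degree}: RMSE = {rmse:.4f}, AIC = {aic:.1f}, R^2 = {r2:.3f}")
```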
Quadratic Relationship Between Age and Whole-Brain Cortical Measurements

All fitted curves were tested using permutation tests, and the results were all significant (p < 0.05). According to the quadratic regression results (Figure 2, Table 2), both the cortical thicknesses of the gyri and sulci decreased with age [gyral R² (lh/rh): 0.400/0.370; sulcal R² (lh/rh): 0.562/0.526]. The gyri/sulci thickness ratio increased with age [R² (lh/rh): 0.287/0.231], which implied that the degree of decrease in the sulci was larger than that of the gyri. For gyrification, the aging process affected the gyri and sulci in opposite ways on the pial surface: a negative quadratic correlation with age was found in the sulcal region [R² (lh/rh): 0.563/0.539], while a positive quadratic correlation with age was found in the gyral region [R² (lh/rh): 0.244/0.212]. Both the gyral and sulcal intrinsic curvatures of the white matter surface increased quadratically with age [gyral R² (lh/rh): 0.465/0.447; sulcal R² (lh/rh): 0.317/0.326]. The correlations between age and the gyri/sulci intrinsic curvature ratio on the white matter surface were non-significant. The results were all similar and consistent between the right and left hemispheres. The average thickness and intrinsic curvature values are shown in Supplementary Table 5. A correlation matrix between all the structural measurements for gyri, sulci, and the gyri/sulci ratio can be seen in Supplementary Figure 5.

Quadratic Relationship Between Age and Cortical Measurements of the Five Lobes

The relationship between cortical measurements and age for the five lobes (frontal, parietal, temporal, occipital, and cingulate) was examined. Most of the age-cortical measure relationships in the frontal, parietal, and temporal lobes were consistent with the whole-brain results. The quadratic relationships between age and the sulcal intrinsic curvature on the white matter surface of the occipital lobe, the gyral intrinsic curvature on the pial surface of the occipital lobe and cingulate, and the gyri/sulci thickness ratio in the cingulate were not significant (p > 0.05/108) (Table 3, Supplementary Figures 6-10).

Relationships Between Cognitive Performance and Structural Measures

The hierarchical regression models were found significant between MMSE and sulcal cortical thickness and gyral intrinsic curvature on the white matter surface. In the first step, the regression model including age, gender, education, and TIV was significant (R² = 0.236, p < 0.001). In the second step, adding the whole-brain sulcal cortical thickness into the model explained an additional 0.8% of the variance, and the model stayed significant (R² = 0.244, F-change p = 0.041). In the third step, adding the whole-brain gyral intrinsic curvature into the model also explained an additional 0.8% of the variance, and the model was significant (R² = 0.252, F-change p = 0.039). The models at the second and third steps revealed that MMSE was positively correlated with the cortical measurements (Figure 3; sulcal cortical thickness: β = 0.141, p = 0.029; gyral intrinsic curvature: β = 0.109, p = 0.039). All p-values reported here were uncorrected, and after FDR correction, all models were non-significant.

(Figure 2 caption: Results of the regressions of age with gyri, sulci, and the gyri/sulci ratio of cortical thickness and intrinsic curvature on the pial and white matter surfaces. The lines refer to the fitted curves for age and the measurements, and the dots indicate the distribution of the subjects' data.)
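The permutation scheme described in the methods (randomly reassigning age 10,000 times) can be sketched as below with invented data. Using the R² of each refit as the null statistic is one plausible implementation, and a reduced number of permutations is used to keep the example fast.

```python
# A minimal sketch (hypothetical data) of the permutation test for the
# significance of a quadratic age fit: age labels are shuffled and the
# R^2 of each refit forms the null distribution.
import numpy as np

rng = np.random.default_rng(4)
age = rng.uniform(21, 92, size=400)
measure = 2.8 - 0.004 * age - 0.00005 * age**2 + rng.normal(0, 0.08, size=400)

def quad_r2(x, y):
    coeffs = np.polyfit(x, y, 2)
    resid = y - np.polyval(coeffs, x)
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

observed = quad_r2(age, measure)
n_perm = 1000  # the study used 10,000; reduced here for speed
null_r2 = np.array([quad_r2(rng.permutation(age), measure) for _ in range(n_perm)])
p = (np.sum(null_r2 >= observed) + 1) / (n_perm + 1)
print(f"observed R^2 = {observed:.3f}, permutation p = {p:.4f}")
```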
(Table 2 note: only the R² values for the significant curves are shown, p < 0.05/18, adjusted by the Bonferroni correction for multiple comparisons; the values are adjusted R²; + indicates a positive and − a negative correlation.)

(Table 3 note: the significance of the curve fitting was evaluated using the permutation test; only the R² values for the significant curves are shown, p < 0.05/108, adjusted by the Bonferroni correction for multiple comparisons; the values are adjusted R²; + indicates a positive and − a negative correlation.)

Different Contribution of the Structural Measurements in Chronological Age-Dependencies

Using stepwise regression adjusted for the sex and TIV effects, four of the six measurements were selected into the final model, including mean sulcal thickness (standardized β = −0.214, p < 0.001), mean intrinsic curvature of the sulcal pial surface (standardized β = −0.524, p < 0.001), mean intrinsic curvature of the sulcal white matter surface (standardized β = 0.544, p < 0.001), and mean intrinsic curvature of the gyral pial surface (standardized β = −0.284, p < 0.001). All p-values were significant after FDR correction.

The Aging Model

To summarize the various measurements and structural findings in the current and previous studies, we suggest a putative model of the general pattern of cortical aging (Figure 4). In this model, the gyral intrinsic curvature of the pial surface increases slightly with age, while the tips of the gyri stay close to the skull [Figure 4(1)]. The gyral intrinsic curvature on the white matter surface increases with age, so we hypothesized that the surface may move outward, resulting in a decrease in cortical thickness [Figure 4(2)]. The sulcal intrinsic curvature of the pial surface decreases with age [Figure 4(3)], which indicates that the sulci are getting wider and flatter on the surface with increasing age. The pial border might also move outward, resulting in a decline in sulcal depth and an increase in sulcal width during aging, as reported in previous literature (Kochunov et al., 2005; Liu et al., 2013). On the white matter surface, the sulcal intrinsic curvature increases with age, and the gray-white matter boundary may move outward and become cusped [Figure 4(4)]. Finally, the decreasing extent of the sulcal thickness appears larger than that of the gyral thickness. Additionally, we found that the trends of all correlations were consistent for both genders (see Supplementary Figure 11), indicating that the overall pattern of the current putative model reflects common mechanisms of change during aging.

DISCUSSION

This study aims to reveal the pattern of the gyral and sulcal changes on the pial and white matter surfaces during the normal aging process using a relatively large cohort dataset. We investigated the whole-brain vertex-wise pattern of the structural changes during the aging process, in which the white matter and pial surfaces showed different associations of gyrification with age. The gyri/sulci ratio was used to highlight the difference in cortical thinning and curvature changes between gyri and sulci across aging. The current findings suggest that changes in sulcal thickness and curvature, rather than gyral changes, contribute more during the normal aging process.
Finally, we proposed a putative model of aging based on previous evidence on the fundamental shape of the cortex together with the current results, which provides a better understanding of cortical structure degeneration across adulthood to old age.

With advancing age, we observed that the curvature decreased on the sulcal pial surface and increased on the sulcal white matter surface (Figure 2). Previous studies have found that sulcal width increases while sulcal depth decreases with age (Kochunov et al., 2005; Liu et al., 2013). This finding partially supports the view that the sulcal pial surface might flatten and move outward instead of shrinking toward the white matter surface, which is mainly caused by the steady production of CSF from the choroid plexus that inflates the cerebrum during the loss of brain parenchyma (Miller et al., 1987; Matsumae et al., 1996; Scahill et al., 2003). Moreover, the increased curvature of the WM surface is associated with imbalanced WM-GM shrinkage (Deppe et al., 2014). The specific changes of sulcal morphology at the WM-GM surface can be linked with the loss of short association fibers (U-fibers) underneath the sulcus that contribute to the impaired local clustering of brain connections (Toro and Burnod, 2005; Gao et al., 2014; Van Essen et al., 2018). Third, by analyzing the gyri/sulci ratio, we found that the degree of sulcal thinning was larger than that of gyral thinning during the normal aging process. These findings implied that the changes of sulcal morphology are more prominent than those of gyral regions during aging, which resulted in variation in the following ways: (1) for sulcal morphology, while the sulcal pial surface moved outward and flattened, the GM-WM surface became cusped and the thickness of sulci decreased; (2) for gyral regions, the curvature on both surfaces became cusped, with mildly decreased gyral thickness. All inferences and current evidence were integrated and graphed in the putative model of aging (Figure 4).

(Figure 4 caption, in part: (2) gyral intrinsic curvature on the white matter surface increases with age; (3) sulcal intrinsic curvature on the pial surface decreases with age, sulcal width increases, and sulcal depth decreases with age; (4) sulcal intrinsic curvature on the white matter surface increases with age; (1-4) sulcal thickness declines more than gyral thickness, see the length of the red and blue arrows.)

One of the noticeable findings in the current study is that gyri and sulci were altered differently during the normal aging process. Gyral crowns were reported to have specialized and enhanced connections and organization between cortices (Brodmann, 1909; Welker, 1990). Although common functional regions were mostly defined in the gyral region and its adjacent sulci, our findings suggested that the sulci themselves are greatly altered and may be responsible for the decrease of functional segregation during aging. Previous studies using diffusion-weighted imaging have indicated that gyral regions show denser white matter fibers than sulcal regions (Nie et al., 2012; Chen et al., 2013). Deng et al. (2014) further supported these findings, showing that gyri are functional connection centers, while sulci are likely to be local functional units connecting neighboring gyri through inter-column cortico-cortical fibers. Moreover, Gao et al.
(2014) found an association between reduced integrity of short-range fibers and lower cognitive efficiency in prospective memory, and the loss of the clustering coefficient has also been found to correlate with inferior intelligence quotient (IQ) (Li et al., 2009). Taken together, we suggest that sulcal regions could be more vulnerable to the effects of aging in terms of morphological degeneration. The loss of local interconnectivity in the brain might be more pronounced in the normal aging process. In the current study, we found mild trends for MMSE being positively correlated with sulcal cortical thickness and gyral intrinsic curvature on the white matter surface. Therefore, we assume that declines in sulcal thickness reflect, in part, changes in short-range connectivity and may influence intra-cortical brain function (Schuz and Palm, 1989; Elston, 2003; Cullen et al., 2010; Wagstyl et al., 2015). Moreover, the cognitive correlates of sulcal degeneration and its impact on brain topology should be investigated in future studies.

We also found different trends of curvature change at the WM-GM boundary and the pial surface during aging. Based on the retrogenesis mechanism, the differential proliferation hypothesis (Richman et al., 1975) may support a differential degeneration process of gyrification of the pial and white matter surfaces during aging. Changes in myelin and synapses in the cortex could be the reason for morphological development and degeneration, thereby resulting in cognitive decline (Bartzokis, 2004; Masliah et al., 2006; Fjell and Walhovd, 2010; Whitaker et al., 2016). Nonetheless, a direct link between cortical myelination and gyrification still needs to be found. Although several hypotheses of the mechanisms underlying gyrification are currently being debated by neuroscientists, some have suggested that gyrification is shaped or influenced by multiple mechanisms. The alterations in aging-related gyrification could reflect the underlying cortical connections and the functions of our brain through either pruning or degenerative processes (Ronan et al., 2011; Jockwitz et al., 2017). The vulnerability to sulcal cortical thinning discovered in the current study could shed light on the structure-function relationship in the human brain.

The aging model of the cerebral cortex we proposed in this study describes a global effect spanning most of the brain regions. However, while most regions showed an increasing gyri/sulci thickness ratio, the frontal lobe and cingulate showed a similar degree of decrease in gyral and sulcal thickness, and their ratio did not correlate significantly with age (Table 3). Our findings implied that the gyral and sulcal thickness in the frontal and cingulate regions decreased to the same extent during the normal aging process. Several studies have found accelerated regional gray matter volume decline in the frontal lobe compared with other lobes (Tisserand et al., 2002; Resnick et al., 2003) and have demonstrated decreases in cortical volume specifically in the frontal and cingulate regions. These findings consistently suggest that the frontal and cingulate cortices may be key regions in the brain aging process. Another study reported a higher increase of sulcal width in the superior frontal sulci and a lower correlation between age and decreased sulcal depth in the inferior and orbitofrontal sulci (Kochunov et al., 2005).
This report might support our finding that gyral and sulcal thickness decreased to the same degree, such that sulcal width increased significantly and sulcal depth decreased because of the high atrophy in gyri. Moreover, the trend for sulcal gyrification changes on the white matter surface in the occipital lobe did not show the increase found in the whole-brain examination. Small variations exist in different brain regions because the development of the lobes and their functions are diverse.

In this study, we characterized the effects of age on different structural progressions in a large sample of healthy adults. However, this study had several limitations. First, the causality of cortical thickness and intrinsic curvature affecting brain function was not investigated. Due to the nature of the cross-sectional design, we were unable to avoid cohort effects or indicate which structure degenerated earlier or had a higher impact on brain function. The structure-function relationships among gyri, sulci, and the pial and white matter surfaces, and how they impact each other, still need to be examined. In this case, the envisioned model of the degeneration of the cerebral cortex may need to be investigated with longitudinal data. Second, to generalize the degenerative process, the current model focused on the trends for comprehensive changes in the cortex and brain lobes. However, cortical variations during aging, including changes in volume and thickness, have been found to decline regionally (Thambisetty et al., 2010; Westlye et al., 2010). Therefore, although we posited general aging-related alterations in the gyri and sulci of the cortex in the model, regional variations still need to be specified. Third, we used a simple concave-convex concept as the division of gyri and sulci, which may seem to be an arbitrary classification. Adopting the convexity map generated by FreeSurfer to classify the cortex into gyral, sulcal, and undefined regions (Yang et al., 2019; Zhang et al., 2019) is recommended for future analysis, especially when looking at regional changes. Next, our aging model is partly based on previous literature; sulcal width and depth were not directly quantified in the current study. The completeness of the aging model could be tested and reproduced with all the measurements together. Lastly, participants with mild or severe cognitive impairment were excluded from this study, and only general cognition assessments were conducted for the current participants. Thus, the generalizability of the current findings is limited to the healthy population; moreover, the cognitive implications should be further examined with detailed cognitive domains, such as verbal memory and visuo-executive function, in future studies.

CONCLUSION

This study illustrates a cortical degeneration model from the perspective of brain morphology, which provides an overview of the brain aging process using multiple structural measurements. We found systematic and nonuniform cortical thinning during normal aging, with the overall degree of sulcal degeneration greater than that of gyri in terms of thickness and gyrification. These degeneration mechanisms might relate to pruning, life-long reshaping, and neurodegenerative processes, associated with differential brain functional degeneration and the underlying neuronal tension.
We suggest that the cortical features of gyri, sulci, and the pial and white matter surfaces should be considered independently in future studies, as they could be associated with alterations of segregation and integration in the brain connectome during aging.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Institutional Review Board of Taipei Veterans General Hospital. All participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

H-YL, C-CH, and C-PL conceived and planned the experiments. AY and S-JT contributed to data collection. H-YL and C-CH conducted the experiments and data analysis. H-YL, C-CH, K-HC, AY, C-YL, S-JT, and C-PL contributed to the interpretation of the results. H-YL and C-CH took the lead in writing the manuscript. All authors provided feedback and helped with the analysis and manuscript.
Horizontal collaboration in the freight transport sector: barrier and decision-making frameworks

In the freight transport sector, competing companies horizontally collaborate through establishing Collaborative Transport Networks (CTNs). Fruitful implementation of CTNs will leverage the environmental and socio-economic goals of sustainable development in the freight transport sector. The benefits of CTNs in horizontal collaborative settings have been widely demonstrated through several modelling approaches. However, in practice, the real applications of CTNs have been challenging, and most did not achieve satisfactory performances. Some studies have addressed this issue by identifying different barriers to CTN implementation. However, a conceptual framework for the barriers is not well established. In addition, the literature lacks a decision-making framework for CTN implementation which considers the different barriers. To address this gap, this paper conducted a literature review of the barriers to CTN implementation. In total, 31 different barriers were identified. A conceptual barrier framework is developed by grouping the 31 barriers into five categories: the business model, information sharing, the human factors, the Collaborative Decision Support Systems (CDSSs), and the market. The paper additionally proposes a stage-gate model integrating the conceptual barrier framework into the CTN implementation decision-making process. The current work contributes to the existing literature by developing both theoretical and practical understandings of the barriers to implementing CTNs and will support decision makers in CTN implementation to maximize the CTN benefits and minimize the risk of CTN failure.

Introduction

In the freight transport sector, high market fragmentation has caused a serious problem of inefficient transport planning and empty running, leading to negative consequences such as increased delivery costs and environmental impacts, e.g., carbon emissions, road accidents, and noise. In 2018, 12.3% of the total distance covered by freight trucks between EU countries was empty running, while this percentage reached between 15 and 30% inside some European countries [1]. Many governments have recently set a long-term goal to make the freight transport sector more sustainable. Several studies showed that collaboration among companies on their transport activities can help achieve this goal and also enable lower delivery costs [2], fewer carbon emissions [3], and increased service levels [4]. Some research projects and start-ups have recently received public funding aiming to encourage collaborative practices in the freight transport sector among shippers, carriers, and receivers (e.g., retailers) [5]. For instance, the CTN 'Nistevo' reported that two big shippers could, through collaboration, achieve savings of 19% in transport costs compared with non-collaborative practice [6]. The present paper focuses on horizontal transport collaboration, in which a group of competing companies, i.e., shippers, carriers, or receivers, agrees to collaborate on their freight transport activities [7]. This is termed a Collaborative Transport Network (CTN). In most collaborative scenarios, collaborating companies (partners) are required to share the information on their transport orders and delivery trucks with a central coordinator, e.g., a logistics service provider.
The central coordinator then uses the shared information to identify opportunities for collaboration, develop a joint delivery plan, or suggest freight exchanges among partners. Due to the huge amount of shared information and the need for efficient decisions, the coordinator often uses CDSSs to plan the collaboration processes [8]. The difference between transport costs in collaborative and non-collaborative settings is known as the collaboration profit, which the CDSSs might allocate among partners using various profit-sharing mechanisms [9]. Despite the extensively reported benefits of collaborative transport and the government funds to encourage the uptake of CTNs, their real applications have been rare, and some CTNs were formed but did not last for a long time or did not achieve satisfactory performances [10-14]. Recent studies suggest that the decision to implement CTNs should not be made without analyzing the many barriers to implementing CTNs. For example, Pan et al. [7] discussed the implementation barriers of CTNs with respect to the design and management of the CTN, CDSSs, and communications technology. More recently, Basso et al. [12] identified 16 barriers and further classified them into design, organization, information sharing, profit allocations, and human factors. Despite their merit, these studies have not provided a wider perspective of the barriers by considering specific CTN solutions and/or have not mentioned some important barriers, e.g., the barriers related to the CTN business model. Knowledge of these barriers is of great value since the failure or limited success of CTNs can be strongly attributed to the fundamentals of the business model [15-17]. In addition, these studies, as well as the existing literature, rarely identified a framework that systematically guides the decisions to implement the CTN with consideration of the barriers.

The present paper makes two main contributions to the CTN implementation discourse. The first contribution is a conceptual framework for the barriers to implementing CTNs, while the second is a decision-making model to guide the CTN implementation process with consideration of the identified barriers. The findings have valuable implications on theoretical and practical levels. Theoretically, the paper extends the existing classification of barriers into five groups: business model, information sharing, human factors, CDSSs, and market. This constitutes a better framework than analyzing the barriers on their own, as most of the existing studies did. The present paper performed a comprehensive literature review and identified 31 barriers, while existing review studies identified at most 16 barriers. Practically, the conceptual framework, as well as the decision-making model, will be useful to logistics service providers, freight carriers, logistics IT developers, researchers, decision makers in the logistics industry, funding organizations, and entrepreneurs in making decisions to implement collaborative freight transport. Furthermore, the identified barriers represent a valuable checklist for any CTN implementation and future research.

Overview of horizontal collaboration in freight transport

Collaboration is an old practice in the supply chain and can be mainly classified into vertical and horizontal collaboration.
Vertical collaboration involves companies working at different levels of the supply chain, e.g., shipper-carrier collaboration [18], while horizontal collaboration involves companies working at the same level of the supply chain. The present paper focuses on horizontal collaboration that can be applied through CTNs involving competing companies. To update the transport research field with a rich set of CTN barriers, the current work considers different CTN solutions found in the literature and categorizes them into three CTN types, similar to [19,20], as follows:

• An open electronic marketplace platform enables companies to build a temporary CTN in the spot market and is often executed without formal documentation. E-marketplace platforms are applied as web-based information systems and connect different companies, i.e., shippers, carriers, and LSPs, that might not have collaborated before. In an E-marketplace, competing companies can build horizontal transport collaboration; for example, the Nistevo platform facilitates shipper collaboration by consolidating loads into full truckloads [21]. Another well-known example is TIMOCOM [22].

• A strategic alliance is a CTN based on a long-term partnership and contractual agreements among companies [7]. For example, transport alliances might be formed by a group of small carriers to achieve economies of scale through serving transport demands from many small shippers or a few large shippers [23]. The formation of transport alliances requires making strategic decisions (e.g., partner selection), tactical decisions (e.g., cost and profit sharing), and operational decisions (e.g., collaborative vehicle routing and order sharing decisions). In horizontal collaboration, transport and logistics alliances might include two or more competing companies, e.g., shippers, logistics service providers (LSPs), carriers, or receivers. Thus, there exist carrier alliances, shipper alliances, retailer alliances, and LSP alliances [7,19]. The shippers are the companies at the origin of a delivery. In shipper alliances, two or more shippers, e.g., raw material suppliers or manufacturers, agree to use one logistics service provider or carrier to reduce their transport and logistics costs; see, for example, [24]. The retailers are mostly considered the final destination of the products [25]. In retailer alliances, two or more retailers use the same logistics service provider or one carrier [26]. Carriers are companies that perform deliveries between shippers and receivers. In carrier alliances, two or more carriers consolidate their freight in a few trucks, aiming to use fewer trucks and, accordingly, reduce delivery costs and negative externalities [27]. Finally, LSPs are companies that manage the whole supply chain of different companies. Since LSPs act as interfaces between carriers, shippers, and receivers, they have an essential role in many collaborative logistics initiatives [19]. Moreover, partners of an alliance might collaborate on transport activities within long-distance transport or urban transport. However, both long-distance and urban transport alliances face similar organizational and technological challenges [20].

• Urban Consolidation Centers (UCCs) have been most frequently considered as a CTN to solve city logistics problems [20]. The idea of a UCC is to replace multiple last-mile delivery movements with a common receiving facility, i.e., the consolidation center, where deliveries are sorted and consolidated into a small freight vehicle.
For example, a group of retailers can perform last-mile deliveries from a UCC to their end customers in urban regions [26]. All CTN solutions require the use of CDSSs for sharing information and efficiently developing collaborative plans. Figure 1 shows the basic structure of a CDSS, including three major modules: a database, computational algorithms, and an interactive dialog. The database module receives and stores all logistics information to be fed into the algorithmic module, either by manual entry or by automatic feeding from partners' transport planning systems. The algorithmic module is responsible for processing the shared information and planning the collaborative decisions, e.g., exchange proposals and profit allocation among partners. The literature has mostly addressed two main algorithmic approaches for transport alliances: order sharing and capacity sharing. Order sharing is also known as a centralized collaborative planning approach and is mostly used for collaboration among transport service providers. It requires that all partners share the information on their transport requests (orders) with a central coordinator, who reallocates the orders amongst them to achieve a match between needed and available trucking capacities [28]. Compared with order sharing, capacity sharing is most frequently applied to collaboration among all types of companies. It requires that partners share only their available trucking capacities instead of their transport requests [29]. Capacity sharing spares partners from sharing their most sensitive information (customer requests); given the high competition in the industry, companies might therefore prefer this approach. E-marketplaces mostly use auction-based decentralized planning algorithms, while solving two-echelon vehicle routing problems is mostly employed to manage the operations of UCCs. For various planning techniques, see [8,20,30]. Finally, the dialog module provides interactive communication interfaces so that partners can receive messages from the CDSS, communicate with each other, and search and filter their shared information.

Search strategy and results

Three steps were used to identify the works reviewed in the present paper. Firstly, two databases, Web of Science (WoS) and Scopus, were searched using the following search terms:
• "Collaboration" and "Freight" with "logistics", "supply chain", or "transportation".
• "Cooperation" and "Freight" with "logistics", "supply chain", or "transportation".
The search resulted in 343 and 557 works written in English in WoS and Scopus, respectively. Secondly, we carefully screened the works and included only those that satisfied the following criteria:
• Articles published in journals.
• Articles that discuss factors affecting collaborative freight transport in different supply chain contexts using qualitative and quantitative approaches.
• Articles that present case studies or results based on real applications of CTNs.
Additionally, we excluded articles focusing purely on mathematical models and algorithms. The authors also performed a Google search to identify relevant reports that discuss the barriers to CTNs. We initially identified 63 works. Finally, we applied backward snowballing, i.e., examining the references of the papers identified in the second step. The backward snowballing identified 21 additional works, resulting in the inclusion of 84 works covering the period 1996 to 2020.
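For reproducibility, the two term sets can be combined into explicit boolean query strings. The sketch below is a minimal illustration in Python; the Scopus-style TITLE-ABS-KEY field code is an assumption for illustration, and WoS uses a different field syntax, so the exact strings submitted to each database may differ.

```python
# Minimal sketch: assembling the boolean search strings described above.
# The TITLE-ABS-KEY field code is an assumed, Scopus-style placeholder.
PRIMARY_TERMS = ["collaboration", "cooperation"]
CONTEXT_TERMS = ["logistics", "supply chain", "transportation"]

def build_query(primary: str) -> str:
    """Combine one primary term with "freight" and the context terms."""
    context = " OR ".join(f'"{t}"' for t in CONTEXT_TERMS)
    return f'TITLE-ABS-KEY("{primary}" AND "freight" AND ({context}))'

for term in PRIMARY_TERMS:
    print(build_query(term))
```

Running the sketch prints one query string per primary term, which can then be adapted to each database's own field syntax.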
The analyzed works included 61 journal papers, 9 review papers, 3 conference papers, 7 industry reports, 3 books, and 1 master's thesis. The journal papers were published in 41 journals, e.g., Transportation Research Part E: Logistics and Transportation Review (6). Figure 2 shows that the period 2007-2020 witnessed increasing research attention to collaboration. This might be because several collaborative logistics projects were funded by EU countries starting from 2007. To investigate the evolution of research topics over this period, the author keywords were analyzed using VOSviewer (www.vosviewer.com). Figure 3 visualizes the keywords mentioned at least two times in the identified works. Each rectangle refers to a keyword, and its size is proportional to the number of publications in which the keyword was mentioned. The color gradient represents temporal trends in keyword occurrence during the period from 2007 (blue) to 2020 (yellow). From 2007 to 2010, attention was given to the role of information technologies in supporting collaboration in supply chain management, e.g., cryptographic technology [32], e-marketplace platforms [33], and web-based information systems [34]. In addition, studies addressed operational governance and strategic alliance contracts [15] and proposed mathematical models for collaborative planning among logistics service providers; see for example [2]. From 2010 to 2015, scholars considered trust building among partners through developing new coordination mechanisms (e.g., [35,36]) and profit- and cost-sharing methods (e.g., [37,38]) using vehicle routing and game theory approaches. The same period witnessed many CTN applications with CDSSs for group decision-making and negotiation among partners (e.g., [28,39,40]). From 2016 to 2020, attention was given to survey papers, e.g., on collaborative solutions and benefits [12,27,41], collaborative vehicle routing approaches [5], cost allocation methods [9], and implementation issues [7]. Given the great advancements in Information and Communication Technologies (ICTs), more complex technologies and concepts for information sharing were studied, e.g., blockchain technology [42] and the physical internet [43]. Recently, the success factors of collaborative business models have become an important topic, especially for urban logistics [16,17,44,45]. This demonstrates the merit of the current work, because it considers barriers related to CTN business models that have not been sufficiently considered by previous works.

The conceptual barrier framework

To determine and categorize the barriers from the 84 works, this paper used a qualitative research approach known as meta-synthesis, in combination with our expert knowledge. Meta-synthesis is a systematic approach for interpreting data across qualitative studies to identify qualitative evidence answering a specific research question [46]. Meta-synthesis has been successfully utilized in recent review studies to identify barriers affecting the adoption of sustainable practices; see for example [47]. The meta-synthesis comprises three main steps. The first step was to perform free line-by-line coding of the different sections of each work. In the coding process, the authors summarized text that describes potential barriers. In particular, a barrier was considered an obstacle that can hinder the diffusion, implementation, and continuity of the CTN. This identified descriptive barriers capturing the meaning of the data in each work.
Subsequent works were then coded into previously defined barriers; when this was not possible, a new barrier was created. The first step identified an initial set of 60 barriers from the 84 works. The second step was to group similar barriers into one broad barrier. In doing so, the authors identified barriers that describe similar issues but were written in different wording. These were then grouped into one or more broad barriers. After several iterations of the second step, a final set of 31 different barriers was identified and listed with their corresponding references in "Appendix 1". The 31 identified barriers were further reviewed by an external researcher to ensure their consistency and clarity. The third step was to organize the identified barriers into categories that better illustrate their nature. Analysis of the existing literature revealed that most studies discussed the barriers in relation to the following categories: fundamentals of collaborative business models, quality of shared information, human factors, CDSSs, or market. To align our findings with the existing literature, the authors clustered the identified barriers into five groups: CTN business model, information sharing, human factors, CDSSs, and market. This classification also better illustrates the nature of the barriers and facilitates informed decision-making, since these five groups also represent the most essential elements of a CTN application. Figure 4 shows the conceptual framework for the barriers to successful implementation. As shown in Fig. 4, the identified barriers form the black box of collaboration and act as deterring factors for achieving the collaboration benefits. "Appendix 2" shows different collaboration benefits with their corresponding references. The next sections provide a deeper look into this black box by providing evidence examples for each barrier and the possible best practices to overcome it. Of the 31 barriers, nine are related to the business model; five are associated with information sharing; six are related to human factors; six are related to the CDSSs; and five are related to the market. In the following, a detailed description of the identified barriers is provided.

Barriers related to the CTN business model

At the early planning stage, the CTN's business model is formulated to describe how the CTN creates, markets, and delivers value to its customers. This business model should be seen in conjunction with the existing business models of the collaborating companies. A good business model can be described by answering key questions such as: what are the required resources and how are they financed, and what are the value propositions, revenue streams, cost structures, and management frameworks of the CTN [48]? In the following, important characteristics of the business model and their inherent barriers are discussed.

The organizational setup and operational governance model

The organizational setup and the operational governance model are two interrelated barriers. An organizational setup specifies how key decisions are made and the different roles and responsibilities of partners and coordinators [7]. In addition, an operational governance model is required to ensure partners' commitment to their duties and support for CTN development [15,49,50]. In general, organizational setups and governance models differ between CTNs, depending on who owns the CTN.
Table 1 compares the organizational setups of strategic alliances and electronic marketplaces. For example, alliance partners are often the CTN owners and therefore have a strong motive and commitment for scaling up the alliance and achieving the highest efficiency. Some scholars reported that when "who owns the CTN" is not clearly defined, this might result in a lack of commitment and support from CTN partners [44]. Electronic marketplaces have essentially one purpose, namely connecting different freight companies, without having any direct control over how they collaborate. Therefore, the organizational setup and operational governance model are less important in electronic marketplaces.

Table 1 Comparison of organizational setups between alliances and e-marketplace platforms

Membership
- Alliances: Contract-based membership to ensure responsibilities. Partners might be required to pay membership fees or be shareholders to enter the alliance.
- E-marketplace platforms: Non-contract-based membership, but partners have to sign the electronic license agreement to use the platform and pay a fee.

Management and ownership
- Alliances: CTN management is performed by a limited liability company (LLC) owned by the alliance partners. The board of the LLC includes representatives from the largest partners. Strategic decisions might flow top-down or might be made in a horizontally centralized way. Partners who do not accept the decisions can leave the alliance.
- E-marketplace platforms: The electronic marketplace is developed, managed, and owned by an IT specialist company or a 3PL. Partners (users) do not necessarily have shares in the ownership of the platform.

Main duties
- Alliances: The LLC is responsible for operating the CTN, providing the CDSSs, and expanding the infrastructure and transport resources. The CTN is formed from the partners' assets, e.g., freight terminals. The partners and the LLC are responsible for sales and marketing, feeding the alliance network, and executing transportation services for the alliance.
- E-marketplace platforms: The developer company develops the marketplace to connect the users; partners develop collaborative solutions themselves. The developer company operates, improves, and markets the marketplace, and verifies all necessary documents of a new user before they join the marketplace.

(Fig. 4 Main elements of collaboration, barriers to collaboration success (the black box), and potential benefits, ordered from inner to outer circles; see "Appendix 1" for references for the identified barriers and "Appendix 2" for references for the potential benefits.)

Value propositions

The value propositions of a CTN describe what problems are to be solved, the strategies to solve these problems, and the benefits that companies can obtain. Although CTNs provide environmental, societal, and economic benefits (see Fig. 4), companies are more motivated to join CTNs when there are clear economic benefits, e.g., a direct measurable effect on costs [19,51]. For instance, the CTN 'CO2 CITY' started by focusing on the environmental benefits, but eventually the focus was placed on the economic benefits to attract additional companies [16]. We note that the societal and environmental benefits are less considered by the majority of CTNs, although they are sometimes defined as one of the goals. Additionally, CTNs that offer only one service might be less attractive to many companies and might not achieve financial sustainability [44]. To improve the value proposition, CTNs should strengthen the economic benefits and offer a variety of valuable services to their partners.
Examples of valuable services are transport management systems, landed-cost calculation, supply chain event management, routing and tendering, a spot-market exchange, auctions for long-term contracts, etc. [33].

Key resources

Most studies interpret key resources as CDSSs, infrastructures, operators, and IT developers. However, logistics, supply chain, and innovation competencies are also key resources for developing and marketing the CTN. Such competencies are not accounted for by most CTNs, as indicated by [16]. Resource financing is also an important aspect. The main financing source of a CTN might be governmental subsidies, the partners, or IT specialist companies. Multiple studies showed that whenever resources are financed by the partners, this ensures their commitment and support to make the CTN successful [10,52]. In large alliances, transport resources and assets are owned by the partners, while CDSSs, marketing, and central management costs are financed by profits or transaction fees from the daily business in the CTN [23]. For example, the leading CTN 'Transplace' was founded in 2000 by six large freight carriers, each of which contributed $5 million to the funding of the CTN [33].

Revenue streams and cost structures

Before CTN implementation, future revenues and expected costs should be estimated to identify break-even conditions. Revenues are generated from payments by the partners through the following: membership fees, per-transaction fees, a profit-sharing ratio, subscription-based fees, software license fees, and ad-based fees. These methods can be employed individually or in combination [16,23,45]. A membership fee is paid by a partner to enter the CTN alliance. A per-transaction fee, a profit-sharing ratio, or a subscription-based fee is used to derive revenues from the daily business in the CTN. The ad-based fee is paid for allowing commercial advertisements on the transport vehicles of the CTN (see for example [16]) or for online advertising [33]. Service prices must be set competitively so that partners can make profits. Some studies argued that better revenue can be generated from logistics management software with subscription-based fees rather than depending solely on transaction fees or a profit-sharing ratio [33]. Two main types of costs are associated with CTNs: fixed costs and operational costs. Fixed costs include all strategic investments and depend on the adopted collaborative approach. For example, consolidation centers require a relatively higher fixed cost than electronic freight-matching platforms [44]. Operational costs comprise salaries for administrative, marketing, IT development, and operating staff, as well as maintenance costs for any equipment needed to operate the CTN. As stated before, the CTN needs to increase the scope of its services to attract more companies and secure sufficient revenues to break even.

Stakeholders

Stakeholders might include eight entities: customers (partners), coordinators, owners of the CTN, initiators, funding agencies, public entities, consultants, and research institutions. An entity can have multiple roles; for example, the initiator can also be the coordinator and owner [16]. Selecting the right stakeholders is imperative to secure better collaboration synergies, resource financing, market positioning, and conformity with the law. Additionally, key actors should be involved in making key decisions, problem solving, and resource financing, and not only be informed of development issues.
Key actors are those stakeholders who formulate policies, finance the required resources, have the required knowledge, and perform significant freight activities [53].

Initiators

Initiators must have the required skills, expertise, and market knowledge to select the best collaborative approach and the right stakeholders for this approach [53]. There are cases where companies join the CTN but never participate in the collaboration process [45].

Coordinators

In addition to managing the CTN, coordinators play an important role in scaling up and marketing the CTN [23]. For instance, the Lucca municipality managed a CTN for a while, but eventually outsourced the CTN management to a logistics service provider that could offer new services and new marketing channels [16]. A pilot study reported that more than 70% of transport service providers are incentivized to join CTNs if a neutral third party, e.g., an IT development company or research institution, coordinates the collaboration processes [54]. This is because they believe that a neutral third party will treat them fairly and keep their shared information confidential [55,56].

Customers

The CTN customers might be carriers, shippers, and/or receivers. The literature has mostly addressed CTN customers through the issue of finding the right partners for collaboration [11,41]. Identifying the right partner depends on many factors, such as the geographical locations of partners and their delivery areas, order sizes, freight flow balance, and shipment compatibility [2]. Compared with other collaboration types, carrier-carrier collaboration is more problematic since it is conducted on the core business activity, i.e., transport, which is visible to the carriers' direct customers. Therefore, carrier-carrier collaboration consistently reports a lower success rate than other collaboration types [57]. Carrier-carrier collaboration might also have less flexibility and synergy due to transport constraints imposed by the freight owners, shippers, and receivers [7,37]. Therefore, many studies recommend that shippers/receivers join the CTNs and ask their contracted transport service providers to collaborate [37].

Barriers related to information sharing

A common remark from real applications is that information sharing was a significant cause of the limited success of collaboration [14]. Therefore, several studies considered it a key requirement for the foundation of CTNs. This section discusses barriers to information sharing caused by poor IT infrastructures at the collaborating companies. This represents a significant limitation, especially for small- and medium-sized companies [35,45]. It should be noted that information sharing might also be constrained by the attitudes of individuals; this is discussed under the 'human factors' category of barriers. Five barriers related to information sharing are identified.

Incomplete logistics information

Optimal collaborative decision-making requires detailed logistics information such as volumes, pickup and delivery times, locations, and specifications of the needed trucks. Most collaborative planning approaches are based on the assumption that the required information is always available. However, in practice, such information is rarely documented and is often estimated based on the planners' experience [58]. This issue has also been reported by many studies on supply chain and logistics collaboration [59,60].
The issue can be alleviated by minimizing the input data required for the collaborative planning approach. For example, the matching algorithm of the CTN 'Tri-visor' utilizes only the geographical data, i.e., origins and destinations, of transport requests to match the routes most frequently used by companies [61].

Inefficient information flow and updates

Partners might not share their logistics information simultaneously and/or might not update their shared information in a timely manner [12,62]. D'Amours and Rönnqvist [63] stated that inefficient information flow is a big challenge to achieving connectivity among partners. Real-time information flows from partners were an essential requirement for the CTN implementation in [28]. This barrier might frequently occur if companies use simple database software, which makes it difficult for them to update the CDSS promptly with any changes in their shared information. This might result in invalid and/or low-quality collaborative planning decisions.

Inaccurate information

Inaccurate information can result in completely different collaborative decisions and in over- or underestimation of the collaboration profits [39]. Several studies showed that a low level of digitalization at some partners leads to highly inaccurate information [12,63,64].

Heterogeneous information formats

Some partners may measure the size of an order in different units, e.g., loading metres or cubic metres. In this case, the CDSS requires additional processing effort to unify the information formats, which in turn can lower the quality of the decision-making process [12]. In addition, the formats of the data files shared by partners may also vary when partners use different transport systems [34,65]. According to Liu et al. [66], the similarities among partners in information technologies have to be considered when selecting partners for CTNs.

Lack of ICT systems

Efficient collaboration requires that partners manage their operations using advanced ICT systems such as transport and warehouse management systems, barcode systems, and fleet telematics systems [67]. In the CDSS developed by [28], partners were required to use advanced ICT systems to share reliable, complete, and real-time information and to quickly evaluate collaboration opportunities. Furthermore, such systems can provide better visibility as well as service levels by allowing partners to track shipments [68]. For example, a partner without advanced ICT systems cannot inform other partners of the real-time location of the delivery trucks or estimate the expected delivery time to customers. D'Amours and Rönnqvist [63] noted that the lack of software agents, standardized information flows, and electronic data exchange technology represents a challenging problem for collaborative planning and might lead to errors in shared information and collaborative plans.

Barriers related to human factors

Cruijssen [69] noted that the attitudes of transport planners might hinder the collaboration's success. Basso et al. [12] stated that the problems arising from human factors are distrust and opportunism. Such behavioral problems make it difficult for companies to collaborate even if positive business cases exist [68,69]. In the following, we illustrate six barriers related to human factors.

Experiences

Some transport planners might have negative experiences from past collaboration attempts. For instance, a partner may share many orders but gain small profits, while other companies share relatively few orders and get more profits.
Some partners might intentionally provide false information on capacities and prices to get more matches and to gain sensitive information on competitors, to the disadvantage of partners whose delivery costs are relatively higher. This may lead to instability and dissolution of the collaboration [70]. Some CDSSs detect intentional falsification through feedback ratings, and partners who misbehave are then blacklisted; see for example TIMOCOM. A real application in Germany showed that the fees offered in the CTN were always higher than the fees outside the CTN [28]. In addition, some companies perceive that collaborative practice may lead to a reduction in their autonomy in the future [35].

Lack of commitment

Kwon et al. [52] defined commitment as the partners' belief that their partnership is so important that they make maximum efforts to maintain it. Lydeka and Adomavičius [70] noted that 'a common theme among responders [carriers] was dissatisfaction with some members of cooperation which failed to follow through with commitments'. The success of collaboration requires that partners commit to providing the expected service level and to paying the promised savings. When partners do not fulfil their commitments, this certainly lowers the CTN's competitive advantage [71]. When this occurs frequently, companies prefer to contact partners from their own networks by phone to ensure quick responses and reliable services. With increasing frequency of late responses or unfulfilled commitments, CTNs lose their competitive advantage, i.e., short lead times and the ability to serve urgent requests at low cost.

Distrust among partners

Distrust among partners is cited as the most challenging barrier to successful collaboration, and it leads to further issues such as fear of sharing information [50,70,72,73]. The lack of trust becomes worse in carrier-carrier collaboration, since partners collaborate on their core business activities [74]. The literature provides some suggestions for improving trust. Badraoui et al. [10] suggest that partners invest in the CTNs, since they will then do their best to achieve a good return on their investments. Lydeka and Adomavičius [70] suggest that partners start with collaboration on non-core business activities, e.g., purchasing fuel and tires in large quantities, before collaborating on their core business activities. Los et al. [75] suggest that trust can be improved by adopting collaborative approaches requiring less information sharing. The use of cryptographic and blockchain technologies has been investigated as a solution for securing information sharing among partners and thereby improving trust [32,42]. The CDSS can also allow its users to advertise their freight either privately or publicly [76].

Distrust of the CTN

This barrier means that partners distrust both the coordinators and the methodologies of the CDSS [12,55,56]. Many CTNs disintegrated because of such issues [31]. Trust between the coordinator and the partners necessitates the existence of positive past collaboration experiences [77]. Vargas et al. [45] argued that this issue might arise because small carriers often lack IT-based skills, making it difficult for them to understand how, and by whom, their shared information will be used in the CDSS. Another reason for this barrier is that, compared with a traditional freight broker, some CTNs do not take any responsibility for the actual service provided to the partners [76].
Besides, transport operations managers often believe that their way of finding partners is the best, and that any technology or system developed to replace their way of working is impractical [70].

Fears of changing their business model

Companies may worry about the future consequences of changing their business models towards collaborative practice and joining transport alliances [78]. According to Lydeka and Adomavičius [70], the managers of small companies are most often the founders, and they consider their companies as "their baby", so it is not easy for them to lose direct control over their customers. Furthermore, collaboration may reduce the dissimilarity between the partners' transport services while simultaneously increasing the distinctiveness between their services and non-collaborators' services [79]. Consequently, small companies may be afraid that collaboration can cause them to lose customers or be pushed out of the market completely.

Unawareness of collaboration benefits

Pilot studies confirmed that increasing the partners' awareness of the collaboration benefits is imperative before real applications [69,78]. Many CTNs were launched in Europe, but they were marketed to the freight industry with too much focus on sustainability goals, i.e., reduced congestion and emissions. This ignores the important fact that companies prioritize what can improve their profits over societal or environmental benefits [80]. Transport managers in small companies might have less formal education, and thus may not care about sustainability issues and may not believe in the benefits of collaboration [70]. Other reasons may be misconceptions about collaboration aims and mechanisms, lack of experience, and poor understanding of how sustainable performance can be used as a competitive advantage.

Barriers related to CDSSs

CDSSs basically aim to connect the different freight firms and enable them to share their information. Additionally, CDSSs might include algorithmic approaches for making decisions on joint route planning, freight auctions, tendering, and profit allocations [5]. Some CDSSs might integrate real-time logistics information into the decision-making process if partners have the required technology to supply their real-time information [28]. Many scholars have developed complex algorithms utilizing several variants of the vehicle routing problem [8], while the industrial communities focus on enabling technologies for real-time information, effective user interfaces, and predictive analytics. In the following, six barriers related to the CDSSs are discussed.

Cost/profit-sharing mechanism

Sharing profits and costs in a way that is accepted by all partners is a significant barrier to collaboration [9]. For instance, eight forest transport companies discontinued their collaboration since the outcomes were insufficient [81]. Although review studies [5,9] identified more than 40 profit-sharing mechanisms, many studies agree that real applications require a transparent, straightforward mechanism rather than a sophisticated, theoretical one [82,83]. The mechanism should consider the partners' characteristics, e.g., the sizes of partners and their contributions to the collaboration synergies. For instance, companies that have large market coverage can significantly improve collaboration synergies. Such companies should be privileged when sharing the collaboration profits [82,84].
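To make this concrete, the sketch below contrasts a simple contribution-proportional split with a Shapley-value allocation for a hypothetical three-partner coalition; all cost figures are invented for illustration, and the Shapley value is only one of the many mechanisms surveyed in [5,9].

```python
from itertools import permutations

# Hypothetical standalone transport costs and joint (coalition) costs.
STANDALONE = {"A": 100.0, "B": 80.0, "C": 60.0}
COALITION_COST = {
    frozenset("A"): 100.0, frozenset("B"): 80.0, frozenset("C"): 60.0,
    frozenset("AB"): 150.0, frozenset("AC"): 140.0, frozenset("BC"): 120.0,
    frozenset("ABC"): 190.0,
}

def savings(coalition: frozenset) -> float:
    """Collaboration profit v(S): sum of standalone costs minus joint cost."""
    if not coalition:
        return 0.0
    return sum(STANDALONE[p] for p in coalition) - COALITION_COST[coalition]

def shapley() -> dict:
    """Average each partner's marginal savings over all join orders."""
    players = list(STANDALONE)
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        joined = frozenset()
        for p in order:
            totals[p] += savings(joined | {p}) - savings(joined)
            joined = joined | {p}
    return {p: t / len(orders) for p, t in totals.items()}

grand_savings = savings(frozenset("ABC"))  # 50.0 with these figures
proportional = {p: grand_savings * c / sum(STANDALONE.values())
                for p, c in STANDALONE.items()}
print("Proportional split:", proportional)  # approx. A: 20.83, B: 16.67, C: 12.50
print("Shapley split:", shapley())          # approx. A: 18.33, B: 18.33, C: 13.33
```

Note how the two rules reward partners differently: the proportional split follows standalone cost shares, whereas the Shapley value rewards partners for the synergies they actually enable, which is exactly the kind of difference over which partners may fail to agree.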
Mathematical approaches might not achieve the consensus of all partners, and they might perform inconsistently from one case to another [82]. To overcome this issue, some CTNs adopt a negotiation-based policy in which profit sharing is negotiated among partners instead of using a specific method imposed by the CTN [85]. In this way, partners are not required to share their most sensitive information, i.e., service costs, with the coordinators. It also allows them to estimate costs with their own accounting systems [86]. One issue with negotiation is that large partners might use their market position to take advantage of small partners. One way to overcome this is to specify a set of sharing policies instead of only one policy, so that partners can negotiate which rule to use [82].

Plenty of collaborative proposals

An important barrier is the large number of 'collaborative proposal' emails that transport planners receive daily from the CDSS [28]. Reviewing these emails consumes the planners' time, and as the day goes on, more emails pile up [87]. Some studies suggest imposing filtering constraints on the delivery time window, required handling equipment, truck class, or a specialized driver [88]. Such filter constraints might not be possible if the shared logistics information lacks the required details. Some studies handled this issue through predictive analytical tools that process previously accepted proposals to extract the human preferences regarding proposals of interest [39]. Trucker Tools, a smart logistics solution provider, predicts the proposals of interest by an email read-and-analysis method based on machine learning and natural language processing techniques [87]. Other studies address these issues using knowledge-based DSSs integrated with multi-criteria decision-making approaches [58].

Collaborative planning algorithms

Typically, CTNs have huge information flows from many trucks serving hundreds of transport requests across several postal codes, and each transport request may have different transport requirements. Thus, the efficiency of collaborative planning algorithms is a key requirement for efficient real-time planning decisions [8,40]. This becomes even more crucial for transport requests characterized by short lead times, such as express couriers [28,62]. In addition, the design of the planning algorithm might be constrained by information availability. Most real applications have developed algorithms based on the rolling-horizon concept [28,40]. The idea of the rolling horizon is to run the algorithm and keep the identified plans in an adaptive memory. Every time an event occurs, e.g., a new request becomes known or a truck finishes its current job, the algorithm is rerun considering the latest updates in the shared information.

Evaluation of collaborating proposals

A recent survey [5] showed that most studies evaluate and rank proposals based on their economic impacts (e.g., total travel distance), while environmental and societal impacts (e.g., emissions, safety, and service level) receive relatively less attention [4]. This certainly underestimates the collaboration benefits. In addition, the quality of past services provided by the partners is not considered when evaluating new proposals. Some CTNs measure service quality by the partners' ratings for on-time performance, damage ratios, and claims [89]. Accordingly, partners who have low rating scores will rarely acquire collaborating proposals.
This approach operationalizes the negative implications for partners of not fulfilling their commitments.

Lack of interactive communication tools

Email clients and websites are the most frequently used communication tools in the existing literature [28,34,90]. With increasing dynamics, competition, and uncertainty in the logistics industry, these traditional tools are unsatisfactory and can impair the efficiency of collaborators. Some CDSSs use advanced web-based mobile applications and custom APIs to facilitate tracking and check-ins. For example, TIMOCOM [22] embedded a messenger service into its freight exchange platform for chatting among collaborating partners. Compared with emails, messengers save time and are much easier to use, especially when negotiating with many partners at the same time.

Lack of system integration

System integration allows for automatic information flows from the partners' planning systems to the CDSS and back [57]. Many studies cited the lack of system integration as a significant barrier to collaboration success. Lieb and Miller [91] stated that coordination by 3PLs failed due to the inability to integrate the information systems of buyer and provider. Piplani et al. [92] noted that for 3PLs "it would become imperative that they integrate [their information systems] with the IT-systems of their partners and customers to increase the effectiveness of the systems and to get the real value out of them". To achieve system integration, companies have to upgrade their information systems to agent-based information systems that enable connectivity with the information systems of their partners, customers, and suppliers [63]. However, this requires companies to invest in their IT infrastructure, which is not an easy decision for them to make.

Barriers related to the market

The market represents the physical place, i.e., a country or set of countries, where the collaborative activities are to be performed. The market characteristics are a key determinant of the probability, potential, strategies, and legality of collaborative practices [70]. Market characteristics include important factors such as regulations on collaborative practices, the intensity of vertical integration among different freight players, and freight flow balances among regions. In the following, we illustrate five barriers associated with the market characteristics.

Regulations

Most markets impose regulations (e.g., competition laws) on collaboration among companies, and such regulations might act as legal barriers to horizontal collaboration. For example, the European Union Antitrust Act [93] states that agreements and business practices that restrict competition are generally not allowed. This restricts information sharing among competing companies, because it might lead to collusion if partners agree on a specific service price, or to market protection if partners refuse to let other companies join the CTN. Such regulations are more concerned with collaboration among big companies. Therefore, many CTNs are allowed for small and medium-sized companies if they do not coordinate prices or capacity. In other words, forming a CTN is allowed when a trustee party leads the collaboration and makes sure that the collaboration satisfies the competition laws and that the shared information remains strictly confidential [38,69].

Vertical integration

For better customer privacy and service quality, shippers or freight receivers outsource their freight transport via long-term contracts to freight carriers.
Such a long-term contract is known as vertical integration [36]. Vertical integration also provides carriers with stable demand. However, the contractual agreements might prevent carriers from collaborating if shippers impose specific delivery times or prefer deliveries with their contracted carriers' private fleets [36]. With high vertical integration, few transport requests are shared in the market, which in turn reduces the revenues of the CTNs. Therefore, multiple scholars suggest that shippers/receivers be rewarded with relatively lower service prices for making their delivery characteristics flexible, e.g., allowing the pickup or delivery times to be delayed or advanced [7,37].

Imbalanced freight flows

At the national level, there might be imbalances in the freight flows among different regions, and this imbalance varies according to the freight type. It causes most trucks to drive fully loaded from some regions and return partially or fully empty in the reverse direction [45,94]. In this case, national carriers might have lower collaboration synergies, and finding a partner for collaboration is a significant barrier. To overcome this issue, Lydeka and Adomavičius [70] suggest inviting international carriers to the CTN. In the case of imbalanced freight flows, the CDSSs must detect back-hauling opportunities among partners, not only "collect-and/or-drop" opportunities [61].

Low market share of the CTN

In recent years, IT development companies like Uber Freight and TIMOCOM launched mobile- or web-based marketplaces that offer many logistics solutions, including freight-matching services. Many companies have discovered the value of such marketplaces and started to use them extensively in their daily operations [95]. Therefore, unless a new CTN offers newer and more useful value than the available marketplaces, it might not attract freight firms and gain a sufficient market share to generate economies of scope [96]. The recent advances in electronic marketplaces might be one of the main reasons why some research projects and startups on collaborative freight transport are no longer operational after their pilot development [20,21]. Therefore, CTN initiators have to analyze the transport market before starting a new CTN.

No public incentives for collaborative practices

Public authorities can fund collaboration initiatives and start-ups that require relatively high investments to develop CDSSs [45]. For example, Project U-TURN by the European Commission was proposed to encourage collaboration in city logistics to reduce carbon emissions and transport costs. Project NexTrust aimed to develop a collaborative decision-making system for identifying collaboration potentials by matching excess capacities with available loads. See [97] for more initiatives funded by public subsidies. In addition to funding, policies like granting priority access to highly utilized vehicles or vehicles serving consolidation centers can encourage collaborative practice [78,94].

Insights into the identified barriers

To provide some insights into the findings, Table 2 shows a two-dimensional matrix based on the types of barriers and the CTN solutions. Each cell indicates the number of works that reported a specific barrier for a particular CTN solution. To visualize the results, we use a color map where high values are marked in green, low values in yellow-green, and zero values in red.
It should be noted that some works reported more than one barrier; therefore, the numbers in Table 2 do not sum to 84. Thirty-three works reported business model-related barriers (strategic alliances 48%, UCCs 24%, electronic marketplace platforms 6%, and general works 21%). Few contributions addressed electronic marketplace platforms, because such platforms do not have any direct control over how users collaborate, so most aspects of business models are less relevant to them, while business model-related barriers are considered important to both strategic alliances and UCCs. In particular, value propositions, revenue streams, and cost structure are important barriers to UCCs, while almost all barriers related to the business model were considered important to strategic alliances. Twenty works reported information-sharing barriers, classified into strategic alliances (70%), UCCs (0%), electronic marketplace platforms (5%), and general works (25%). Most information-related barriers are important to strategic alliances. The literature rarely addressed information-related barriers when implementing UCCs and electronic marketplaces. However, many general works confirmed that information-related barriers are significant issues in implementing supply chain and logistics collaboration [12,59,63]. Thirty works reported barriers due to human factors, distributed into strategic alliances (63%), UCCs (10%), electronic marketplace platforms (11%), and general works (14%). Most of these barriers were reported by studies on strategic alliances. Trust-related issues were mostly reported by studies on alliances and electronic marketplace platforms. Overall, distrust among partners, distrust of the CTN, fear of changing the business model, and unawareness of collaboration benefits are common and important barriers in almost all collaborative solutions. Thirty-one works reported barriers related to CDSSs, classified into strategic alliances (61%), UCCs (6%), electronic marketplace platforms (6%), and general works (16%). Generally, the cost/profit-sharing mechanisms and the lack of interactive communication tools were most frequently cited as important barriers to strategic alliances. The plenty of collaborative proposals is an important barrier to electronic marketplace platforms, since they often host many companies, resulting in too many collaborative proposals. Twenty works reported market-related barriers, distributed into strategic alliances (50%), UCCs (15%), electronic marketplace platforms (10%), and general works (25%). All market barriers were reported by studies on strategic alliances, with imbalanced freight flows being the most cited barrier, while studies on electronic marketplace platforms reported only one market barrier, i.e., the low market share of the CTN.

Solution strategies

The following strategies can be considered to overcome the business model-related barriers. All stakeholders should agree on the aims, operating rules, and the most suitable collaborative solutions that achieve clear and sensible benefits for them [19,51]. The CTNs have to be managed by a strong LSP with extensive practical and technological experience [23]. To attract more companies, the CTN should offer a variety of valuable services, such as transport management systems [33]. Additionally, shippers and retailers might be involved in the CTN to bring their contracted carriers or LSPs into the CTN [7,37].
Also, the CTNs should have a clear cost structure ("who pays what") for all stakeholders, and the benefits have to be allocated according to the contribution of each stakeholder to the cost structure [78]. Furthermore, operational governance models have to be in place to secure the commitment of all stakeholders to their responsibilities [7,15,49,50]. Regarding the information sharing-related barriers, a primary and necessary step is to carefully check the digitalization readiness of the partners. This helps to select the most suitable information exchange system, which might be based on real-time logistics information as in [28] or on historical logistics information as in [61]. Information quality has to be defined clearly and considered when selecting the CTN partners [59]. Partners are recommended to standardize their information flows, e.g., using electronic data exchange technology, to enable rapid information flows with high accuracy [63]. Regarding human factor-related barriers, trust can be improved by adopting group decision-making processes involving all stakeholders when identifying the CTN characteristics [19,98]. In addition, trust uncertainties about the benefit/cost-sharing allocation and the privacy of shared information can be eliminated by open and frequent communication among all stakeholders [59]. There is a need for advanced information technologies (e.g., advanced web-based mobile applications and blockchain technologies) to ensure high information security and thereby improve trust [32]. To eliminate the negative perception of collaborative practices, awareness of the CTN benefits should be spread at the industry level [69,70]. Regarding CDSS-related barriers, negotiation-based policies might be adopted for allocating costs and profits, so that partners can negotiate and select among different sharing policies instead of using a specific method imposed by the CTN [82,85]. Partners should transition towards agent-based information systems that enable connecting the entire CTN and allow for reactive, transparent, integrated, and reliable collaborative planning [63]. Machine learning techniques are valuable for predicting the preferences of partners and imputing any missing information [87]. Regarding market-related barriers, legal barriers (e.g., competition law) can be overcome by involving a trustee party [38,69,99]. Authorities might relax regulations that restrict collaborative practice, and might introduce carbon-emission taxes on freight vehicles, requirements on load factors, and road pricing schemes [100,101]. To resolve imbalanced freight flows among a country's regions, international partners might join the CTNs [70] and possible back-hauling opportunities among partners should be detected [61].

A stage-gate model of the decision-making process in CTN implementation

The conceptual barrier framework indicates that several barriers should be analyzed before the implementation of the CTN. Thus, there is a need for a decision-making model to guide the CTN implementation process with consideration of the identified barriers. As mentioned by Martin et al. [102], most existing decision-making models have a broad scope towards strategic alliance formation in general, while very few contributions have addressed models for implementing and managing horizontal collaboration. For example, Fawcett et al. [103] presented a three-stage model for developing supply chain collaboration.
Their model starts with creating commitment and understanding, then removing forces resisting collaborative practice, and finally continuously improving collaboration capabilities. Also, Bhattacharjee and Mohanty [104] developed a conceptual model that includes nine stages, among them: environmental scan, internal alignment process, partner selection, alliance alignment, project alignment, work process alignment, review and feedback, and reward and recognition. Regarding horizontal logistics collaboration, Verstrepen et al. [55] presented a four-stage model: strategic positioning, design, implementation, and moderation. However, their model did not adequately consider important aspects such as partner selection and the interactions among different implementation stages. More recently, Martin et al. [102] presented a decision model for developing horizontal logistics alliances consisting of five stages: orientation, partner selection, negotiation, implementation, and management. Despite the merits of the existing models, most of them hardly discuss the potential barriers that need to be considered in each stage. Thus, we propose a stage-gate model that incorporates the different barriers into a guideline for the CTN implementation decision-making process. The proposed model builds on existing models and addresses their limitations, e.g., the fact that existing models did not include potential barriers in each stage. The stage-gate model is a project management methodology used to guide a project systematically and efficiently from idea to launch. The model decomposes the overall decision-making process into a number of sequential stages and gates [105,106]. Stages represent the different phases of the project, in which analytical studies and/or CDSSs are developed. Each stage has a gate at which the decision makers decide whether to proceed to the next planning stage. Moreover, the stage-gate model has an easy-to-understand structure, and thus the proposed model provides a valuable tool to guide the CTN implementation process with consideration of the identified barriers. Figure 5 shows the proposed model, which divides the CTN implementation decision-making process into four stages: scoping, building the business case, developing the CDSS, and operating and maintaining the CTN. These stages are relevant to CTN implementation as suggested by previous studies [51,55,102]. However, it is worth noting that the stage-gate model conceptualizes that not all four stages might be needed in the CTN implementation decision-making process. Based on the needs of the stakeholders, one or more stages might be enough. For example, initiators and partners might only be interested in identifying a suitable CTN solution based on their objectives. In this case, the scoping stage alone can be enough to provide this information by evaluating the characteristics of the proposed CTN solution against the market barriers. At each stage, knowledge of the relevant barriers can provide the right basis for making the proper decisions at its gate. For example, suppose that an initiator, e.g., a shipper or a 3PL, wants to initiate a CTN and aims to identify the suitable specifics of the CTN and an initial list of partners. The scoping stage can be used to achieve this aim. Within the scoping stage, the initiator has to evaluate the specifics of possible collaborative approaches against the market barriers, using approaches like the SWOT method.
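As a minimal, purely illustrative sketch of such a gate decision, the snippet below scores a candidate collaborative approach against the market barriers identified in this review and returns a go/no-go recommendation; the weights, severity scores, and threshold are hypothetical placeholders, not values proposed by the reviewed studies.

```python
# Hypothetical scoping-gate check: weigh the market barriers from this
# review against an initiator's assessment of a candidate CTN solution.
MARKET_BARRIER_WEIGHTS = {
    "restrictive_regulations": 0.25,
    "high_vertical_integration": 0.20,
    "imbalanced_freight_flows": 0.20,
    "low_expected_market_share": 0.25,
    "no_public_incentives": 0.10,
}

def scoping_gate(severity: dict, threshold: float = 0.5) -> bool:
    """Each barrier is scored 0 (absent) to 1 (prohibitive); proceed to
    the business-case stage only if the weighted risk stays below the
    threshold."""
    risk = sum(weight * severity.get(barrier, 0.0)
               for barrier, weight in MARKET_BARRIER_WEIGHTS.items())
    return risk < threshold

# Example: an e-marketplace facing strong incumbent platforms.
assessment = {"low_expected_market_share": 0.8, "no_public_incentives": 0.5}
print("Proceed past the scoping gate:", scoping_gate(assessment))  # True
```

In practice, the judgments behind such scores would come from the SWOT-style market analysis described above; the point of the sketch is only that a gate is a simple, explicit decision rule applied to barrier knowledge.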
The scoping stage constitutes a preliminary assessment of whether the CTN is better than business-as-usual practice and which collaborative approach is most suitable for the potential partners. If forming a CTN is found to be better than the current practice of the partners, the initiator proceeds to the next stage, where the right partners are selected, followed by inviting other key stakeholders to develop a viable business case for the CTN. At the 'building the business case' stage, the relevant stakeholders require knowledge of the barriers associated with both the CTN business model and human factors in order to develop reliable fundamentals of the collaborative business model leading to a sound business case. This might require information on operational costs, fixed costs, and expected demand, and the use of simulation to investigate the feasibility of the suggested business case. Besides, it is also critical that the selected partners have a clear understanding of the CTN objectives, and that they agree on the basic rules, e.g., organizational setup and resource financing, that support the viability of the developed business case. If a positive business case can be developed, the next stage is to develop the CDSS according to the specifics of the approved business case. At the 'developing the CDSS' stage, the first step is to analyze the digitalization level of the partners and investigate the quality of their logistics information. It should be noted that developing a CDSS that demands detailed information might face severe data-quality issues and limit the success as well as the scaling-up of the CTN [61,122]. As in the previous stages, knowledge of the barriers relevant to this stage will help to develop the right CDSS that satisfies the partners' needs and their data-quality conditions. The final stage concerns the operation, maintenance, and evaluation of the CTN. At this stage, great attention should be paid to marketing and scaling up the CTN. The CTN performance must be evaluated primarily on economic performance, e.g., the cost savings of the partners. It is also essential to evaluate aspects like the commitment and trust of partners, shared information flows, and partner satisfaction. Based on the CTN evaluation results, decision makers must decide on actions to avoid CTN failure, for example changing or adding rules to the business model and improving subsequent elements in other stages. All or some of the four stages might be repeated periodically during the CTN development process or when a new partner is willing to join the CTN.

Conclusions, limitations, and future research directions

Despite the extensively reported benefits and the government funds to encourage the uptake of CTNs in the horizontal setting, their real applications have been rare and have shown varying degrees of success. Thus, this paper contributes to the existing literature through an extensive review of the barriers, the development of a conceptual barrier framework, and the proposal of a decision-making model for the CTN implementation process. In total, 84 studies on horizontal collaborative logistics were analysed and 31 barriers were identified. To illustrate the nature of the barriers and facilitate informed decision-making, these barriers were conceptualized into a framework of five categories: the business model, information sharing, human factors, the CDSSs, and the market.
Of the 31 barriers, nine are related to the business model; five are associated with information sharing; six are related to human factors; six are related to the CDSSs; and five are related to the market. The main finding is that successful CTN implementation does not only depend on developing a CDSS for sharing information and making collaborative decisions; it is strongly driven by the ability to identify and overcome the barriers associated with the collaborative business models, human factors, the market, and shared information. These barriers form the black box of the CTN implementation process and negatively affect the collaboration benefits. Thus, the current work provided a deep look into this black box and identified a wide range of barriers, illustrating evidence examples and the best practices for each barrier in the literature. Moreover, the results showed that implementing strategic alliances faces many more barriers than UCCs and electronic marketplace platforms. This is because the more strategic the collaboration becomes, the more resources and sensitive information sharing are needed. Business model-related barriers were often reported by studies on UCCs and strategic alliances, while the literature rarely discussed the business models of electronic marketplace platforms. Information barriers were almost exclusively reported by studies on strategic alliances, where the need for sharing detailed and high-quality information is higher than in other CTN solutions. Trust-related barriers were reported by studies across the different CTN solutions. CDSS-related barriers were mostly considered by studies on strategic alliances and electronic marketplace platforms, while studies on UCCs addressed the CDSS only through the need for solving complex two-echelon distribution problems and system integration. Market barriers were most frequently reported by studies on strategic alliances, followed by studies on UCCs, while studies on electronic marketplace platforms reported only one market barrier, i.e., the low market share of the CTN. Additionally, a stage-gate decision-making model was developed to guide the CTN implementation by considering the conceptual barrier framework. The proposed model divides the CTN implementation decision-making process into four stages: scoping, building the business case, developing the CDSS, and operating and maintaining the CTN. The conceptual framework provides knowledge of the barriers relevant to each stage, which can give decision makers the right basis for making the proper decisions at each gate. The findings of this paper make practical and theoretical contributions. On the theoretical level, the paper contributes to the conceptual understanding of the barriers to CTN implementation. On the practical level, the results provide valuable discussions of the CTN implementation decision-making process. This will be useful to logistics service providers, freight carriers, logistics IT developers, researchers, decision makers in the logistics industry, funding organizations, and entrepreneurs in identifying the best practices to maximize the CTN benefits and minimize the failure risk. This research has some limitations. First, the identified barriers were not ranked to indicate their relative importance. Although a ranking of the barriers could be made based on their citation frequency, such a ranking might be misleading due to the change in focus on specific barriers in the literature over time.
Second, the interlinkages among the identified barriers were not considered. Future work should address these limitations. First, the relative importance of the identified barriers can be determined through ranking and pairwise comparison techniques such as Delphi and the Analytical Hierarchy Process (AHP); a sketch of an AHP computation follows below. Second, the interactions among the barriers can be investigated using causal analysis approaches, e.g., the Decision Making Trial and Evaluation Laboratory (DEMATEL), to develop better mitigation strategies. Additionally, quantifying the impact of the different barriers on collaborative freight transport through agent-based simulation and modelling approaches would be of great value. Another limitation is that the proposed stage-gate model has not been validated; an important direction for future research is therefore to validate it by means of several case studies.
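As an illustration of how such a pairwise comparison could be set up, the sketch below computes AHP priority weights for the five barrier categories. The comparison matrix entries are hypothetical values chosen for illustration only, not judgments derived from the reviewed studies.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over the five barrier categories
# (Saaty's 1-9 scale; A[i, j] = how much more important category i is than j).
categories = ["business model", "information sharing", "human factors", "CDSS", "market"]
A = np.array([
    [1.0, 3.0, 2.0, 4.0, 5.0],
    [1/3, 1.0, 1/2, 2.0, 2.0],
    [1/2, 2.0, 1.0, 3.0, 3.0],
    [1/4, 1/2, 1/3, 1.0, 2.0],
    [1/5, 1/2, 1/3, 1/2, 1.0],
])

# AHP weights: principal eigenvector of A, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio; CR < 0.1 is the conventional acceptance threshold.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)   # consistency index
cr = ci / 1.12                         # random index for n = 5
for name, w in zip(categories, weights):
    print(f"{name:20s} {w:.3f}")
print(f"consistency ratio: {cr:.3f}")
```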
Region‐specific differences and areal interactions underlying transitions in epileptiform activity

Key points
Local neocortical and hippocampal territories show different and stereotypical patterns of acutely evolving epileptiform activity. Neocortical and entorhinal networks show tonic–clonic‐like events, but the main hippocampal territories do not, unless such activity is relayed from the other areas. Transitions in the pattern of locally recorded epileptiform activity can be indicative of a shift in the source of pathological activity, and may spread through both synaptic and non‐synaptic means. Hippocampal epileptiform activity is promoted by 4‐aminopyridine and inhibited by GABAB receptor agonists, and appears far more sensitive to these drugs than neocortical activity. These signature features of local epileptiform activity can provide useful insight into the primary source of ictal activity, aiding both experimental and clinical investigation.

Abstract
Understanding the nature of epileptic state transitions remains a major goal for epilepsy research. Simple in vitro models offer unique experimental opportunities that we exploit to show that such transitions can arise from shifts in the ictal source of the activity. These transitions reflect the fact that cortical territories differ both in the type of epileptiform activity they can sustain and in their susceptibility to drug manipulation. In the zero‐Mg2+ model, the earliest epileptiform activity is restricted to neocortical and entorhinal networks. Hippocampal bursting only starts much later, and triggers a marked transition in neo‐/entorhinal cortical activity. Thereafter, the hippocampal activity acts as a pacemaker, entraining the other territories to its discharge pattern. This entrainment persists following transection of the major axonal pathways between hippocampus and cortex, indicating that it can be mediated through a non‐synaptic route. Neuronal discharges are associated with large rises in extracellular [K+], but we show that these are very localized, and therefore are not the means of entraining distant cortical areas. We conclude instead that the entrainment occurs through weak field effects distant from the pacemaker, but which are highly effective at recruiting other brain territories that are already hyperexcitable. The hippocampal epileptiform activity appears unusually susceptible to drugs that impact on K+ conductances. These findings demonstrate that the local circuitry gives rise to stereotypical epileptic activity patterns, but these are also influenced by both synaptic and non‐synaptic long‐range effects. Our results have important implications for our understanding of epileptic propagation and anti‐epileptic drug action.

Introduction
Epilepsy is a condition that is defined by sudden transitions from a functional brain state into pathological states. These transitions are also associated with dramatic changes in the electrophysiological signals; indeed, EEG recordings provide a very sensitive assay of brain states. The interpretation of these signals, though, is often difficult, and in most cases we still do not understand what biological processes underlie the key shifts in the electrophysiological signal. They do, though, offer great potential for providing advance warning about imminent seizures, and so warrant further study. Epileptic transitions can arise from local network interactions (Bernard et al. 2000; Ziburkus et al. 2006; Huberfeld et al. 2011; Avoli et al.
2016) or cellular changes, such as intracellular chloride concentration (Dzhala et al. 2010; Pavlov et al. 2013; Ellender et al. 2014; Pallud et al. 2014). A role in these transitions has also been hypothesized for larger scale network interactions (Kramer et al. 2012; Martinet et al. 2017; Liou et al. 2018). To address this hypothesis, it is extremely helpful to identify where the source of the pathological discharges is, whether there can be more than one source, and if so, which is primary, and how they might switch; this detail can then further provide insight into how epileptic activity spreads. Most epileptic seizures are thought to arise from pathology located in hippocampal, parahippocampal or neocortical circuits, but it remains unclear to what extent the pathological activity is set by the intrinsic excitability of the local networks (Traub & Wong, 1982; Miles & Wong, 1983; Prince & Connors, 1984; Dichter & Ayala, 1987; Ziburkus et al. 2006) or by interactions between the areas (Miles et al. 1984; McCormick & Contreras, 2001). Brain slice preparations offer unique experimental opportunities for recording, manipulating and isolating network activity. These preparations have yielded many insights into a wide range of topics, from cellular excitability and synaptic interactions up to network dynamics, for instance by providing a framework to understand human recordings (Schevon et al. 2012; Smith et al. 2016), where the potential for invasive investigation is greatly limited. We set out to investigate the role of interactions between brain areas in epileptic transitions. An important series of studies using the 0 Mg 2+ model (Swartzwelder et al. 1986b; Mody et al. 1987; Anderson et al. 1990; Dreier & Heinemann, 1990, 1991; Bragdon et al. 1992; Morrisett et al. 1993; Zhang et al. 1995; Dreier et al. 1998) characterized a notable transition from an early tonic-clonic pattern of epileptiform discharges into a different, recurrent pattern of discharge. The nature of this critical transition, however, has remained a mystery. Of added interest is that this transition is associated with a marked change in the pharmaco-sensitivity of the pathological discharges (Heinemann et al. 1994b). Since the various brain areas may differ in how epileptic discharges are manifest, we hypothesized that a key component of this transition might reflect changes in the level of involvement of different brain territories. These prior studies all used brain slices prepared from adult rats, but they have not been repeated using tissue from other species. We now show that the same evolution of activity is also seen in mouse brain slices, thereby opening up this phenomenon for further study in transgenic animals carrying mutations relevant for human epilepsy. We then identify an important correlate of the transition, which is the surprisingly late involvement of hippocampal activation in this model; the hippocampus subsequently acts as a pacemaker, entraining activity in other cortical networks. Interestingly, the entrainment of overlying neocortex does not require intact synaptic pathways, but instead can arise from field effects secondary to focal discharges (Jefferys & Haas, 1982; Jefferys, 1995; Frohlich & McCormick, 2010; Anastassiou et al. 2011). We further show that the entrainment does not happen through the diffusion of extruded K +, because the rise in extracellular [K + ] associated with epileptiform discharges is very focal.
Finally, the site of the dominant epileptiform activity in these preparations is highly sensitive to drugs that affect K + conductance. The GABA B agonist baclofen is a very powerful suppressor of the hippocampal focus, and shifts the slice back towards the neo-/entorhinal cortical pattern of tonic-clonic-like discharges, whereas the K + channel blocker 4-aminopyridine strongly promotes hippocampal activity, far more rapidly than in the other areas, the exact opposite of the evolving pattern induced by 0 Mg 2+ . These results show that locally recorded transitions in the pattern of epileptiform discharge may arise from the new involvement of distally located epileptic circuits. These changes thus reflect which cortical territories are involved and how the activity spreads to other networks. These models illuminate a variety of epileptic phenomena, including the evolution of epileptic foci, sudden shifts from one focus to another, and how different cortical areas show distinctive patterns of epileptic discharge and propagation. As such, they can provide a wealth of metrics for comparing anti-epileptic drugs, and for understanding phenotypes in genetic models of epilepsy.

Ethical approval
All animal handling and experimentation were done according to the guidelines laid down by the UK Home Office and the Animals (Scientific Procedures) Act 1986, and were approved by the Newcastle University Animal Welfare and Ethical Review Body (AWERB reference no.: 545). All mice used in this study were housed in individually ventilated cages in a 12 h light, 12 h dark lighting regime. All mice were provided with food and water ad libitum.

Electrophysiology
Slices were placed in an interface recording chamber and perfused with warmed ACSF (2-3 ml min −1 , driven by a peristaltic pump; Watson-Marlow Pumps Ltd, Falmouth, UK, model 501U). The temperature of the chamber and perfusate was maintained at 33-36°C using a Grant FH16D closed circulating heater (Grant Instruments, Cambridge, UK). Extracellular field recordings were made using normal ACSF-filled 1-3 MΩ borosilicate glass microelectrodes (GC120TF-10; Harvard Apparatus, Camborne, UK) pulled using a Narishige electrode puller (PP-83, Narishige Scientific Instruments, Tokyo, Japan), and mounted on headstages (10× DC pre-amp gain) held in Narishige YOU-1 micromanipulators. In experiments involving dissections, scalpel blades were used to make cuts in the slices after placing them in the recording chamber. Waveform signals were acquired using a BMA-931 biopotential amplifier (Dataq Instruments, Akron, OH, USA), a Micro 1401-3 ADC board (Cambridge Electronic Design, Cambridge, UK) and Spike2 version 7.10 software (Cambridge Electronic Design). Signals were sampled at 10 kHz, amplified (gain: 200) and bandpass filtered (1-3000 Hz). A CED4001-16 Mains Pulser (Cambridge Electronic Design) was connected to the events input of the CED Micro 1401-3 ADC board and was used to remove 50 Hz hum offline. Recordings were initiated while slices were still being perfused with normal ACSF; only then was the perfusate switched to an epileptogenic ACSF, either lacking Mg 2+ ions (0 Mg 2+ ACSF) or containing 100 μM 4-aminopyridine (4-AP). Hippocampal recordings were generally made from CA1, except for the baclofen experiments, where we made them from CA3, to provide a direct comparison with a previous study performed in rat (Swartzwelder et al. 1987). Extracellular potassium (K + ) was measured using single-barrelled K + -selective microelectrodes.
The pipettes were pulled from non-filamented borosilicate glass (Harvard Apparatus), and the glass was exposed to dimethyl-trimethyl-silylamine vapour (Sigma-Aldrich, Gillingham, UK) while baking at 200°C for 40 min; the pipettes were then backfilled with ACSF. A short column of the K + sensor (Potassium Ionophore I, cocktail B; Sigma-Aldrich, cat. no. 99373) was taken into the tip of the silanized pipette by using slight suction. The recordings through the K + -sensor electrode were referenced to a second electrode filled with ACSF, and from the differential signal we calculated the [K + ] o from calibration recordings made in an open bath, using sudden increments in [K + ] o . We checked the stability of the electrodes at the start and end of each recording. Data from unstable electrode recordings were discarded. This provided a scaling factor, S, of 55-59 mV, where the K + concentration at a given moment in time, t, was calculated from the differential voltage, V(t), as follows: [K + ] o (t) = [K + ] baseline × 10^(V(t)/S). The [K + ] o baseline for our experiments was 3.5 mM.

Data analysis and statistics
Data were analysed offline using Clampfit (Molecular Devices, Sunnyvale, CA, USA), Igor (WaveMetrics, Lake Oswego, OR, USA) and Matlab R2015b (The MathWorks, Natick, MA, USA). The analysis of entrainment of epileptiform events was performed by deconvolution of an averaged 'template' (Fig. 5D) of electrophysiological discharges against the continuous trace from that same recording. In this way, the higher frequency components of these discharges are effectively removed. This is very helpful, because the cross-correlation analysis between hippocampal and neocortical discharges is optimal if the signal can be simplified essentially to the timing of the events, thereby minimizing any aliasing issues that might arise from these higher frequency components. The deconvolution was done by first creating a template of an average discharge (6-10 events), aligned by the time point at which they exceeded a threshold set at between 25 and 40% of the peak deflection. The templates were then used as a normalizing filter on their respective raw traces, by deriving peak cross-correlation coefficients for the time-shifted template relative to the trace. This 'template-filtered' trace (Fig. 5D) removed most of the brain region-specific fine structure of the individual discharges, but preserved their timing. Since the individual events in the late-stage activity are extremely reproducible, the peaks in this filtered trace tend towards 1. We used the cross-correlation between these template-filtered recordings as a measure of the entrainment of the two recording locations. Matlab code for these analyses is available from the authors upon request. Percentage changes were measured by normalizing treatment to pre-treatment measures in each slice. A one-way analysis of variance (ANOVA), with a post hoc Tukey test, was used for data with three or more groups. Groups of two were analysed using Student's t test. Data that were not normally distributed were analysed by the Wilcoxon rank sum test. Significance was set at P ≤ 0.05 for all analyses. Multiunit activity was extracted from raw data by high-pass filtering (>300 Hz). Data are presented as means ± SEM, and n is the number of brain slices, unless otherwise stated.
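The original analysis code is in Matlab and available from the authors; the following is our own minimal Python sketch of the same template-filtering and lag-estimation idea. Array names, the window size and the normalization are illustrative assumptions, and event indices are assumed to lie at least half_width samples from the trace edges.

```python
import numpy as np

def template_filter(trace, event_idx, half_width):
    """Average aligned discharge snippets into a template, then slide the
    template along the trace, keeping the normalized correlation at each lag.
    Peaks approach 1 wherever the trace matches the average discharge."""
    snippets = np.stack([trace[i - half_width:i + half_width] for i in event_idx])
    template = snippets.mean(axis=0)
    template = (template - template.mean()) / template.std()
    width = 2 * half_width
    out = np.zeros(len(trace) - width)
    for t in range(len(out)):
        win = trace[t:t + width]
        win = (win - win.mean()) / (win.std() + 1e-12)
        out[t] = np.dot(win, template) / width
    return out

def entrainment_lag(filt_a, filt_b, fs):
    """Cross-correlate two template-filtered traces; the lag of the peak
    estimates which site leads, and by how many seconds."""
    a = filt_a - filt_a.mean()
    b = filt_b - filt_b.mean()
    xc = np.correlate(a, b, mode="full")
    lags = np.arange(-len(b) + 1, len(a))
    return lags[np.argmax(xc)] / fs
```

For example, with two 10 kHz recordings one might call entrainment_lag(template_filter(lfp_ca1, ca1_events, 200), template_filter(lfp_nc, nc_events, 200), fs=10000); a positive lag of tens of milliseconds would correspond to the hippocampal lead described below.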
Terminology
The terminology of epileptic discharges is problematic, reflecting the fact that there is a large range of activity patterns, and the equivalence of, or distinction between, these is often hard to discern. This is particularly so for the term 'interictal', which in the clinical setting refers to electrophysiological activity which is clinically covert (if not completely so; see Binnie et al. 1987; Kleen et al. 2010). These are typically rather short discharges, and consequently, animal researchers have taken their ephemeral nature to be the defining feature. Unfortunately, when one examines the activity patterns closely, this term conflates two very different types of activity; an appreciation of the difference is critical for the understanding of the present study. The key distinction is the presence or absence of local intense activity, as defined by the presence of a significant high frequency component. The importance of this is that it helps distinguish sites where there is local pathological activity from those where the deviation in the recording reflects pathological activity that is elsewhere. Consequently, in this paper, we refer to the discharges during late stage activity as 'spike and wave discharges', a term that has been used previously to describe what appear to be comparable events. Readers should note, however, that previous studies have described this activity pattern as 'interictal', but for the reasons outlined above, we prefer not to use this term.

Region-specific patterns of evolving epileptiform activity
We investigated the evolution of epileptiform activity in horizontal brain slices, prepared from young adult (2-3 months old), wild-type C57B6J mice, following the removal of Mg 2+ ions from the bathing medium (ACSF). Extracellular recordings were made at two or three locations, always including a hippocampal (CA1 or CA3) and a neocortical (temporal association areas) recording site, and in most slices, also recording from medial entorhinal cortex (Fig. 1A). Following the washout of Mg 2+ ions, there was a gradual build-up of epileptiform discharges, evolving in a highly characteristic way (Fig. 1A). The earliest large field deflections in the raw traces were seen at all recording sites, although the events appeared far larger in the neocortex and entorhinal cortex. This early activity involved episodes of sustained rhythmic bursts suggestive of the temporal dynamics of clinical tonic-clonic discharges (Fig. 1B and C). The mean number of tonic-clonic-like events in the neocortex was 9.35 ± 0.73 per slice (range 4-17 events; n = 17 slices), before a second transition to regular epileptiform bursts ('late-stage activity pattern'), with individual bursts lasting a few hundred milliseconds, and occurring every 3.32 ± 0.38 s (n = 10; Fig. 1D and E). This pattern of evolution has been described previously in rat brain slices (Swartzwelder et al. 1986a; Mody et al. 1987; Anderson et al. 1990; Dreier & Heinemann, 1990, 1991; Bragdon et al. 1992), but has been less studied in mice. Recent studies of extracellular recordings of epileptic discharges in humans have highlighted the importance of examining the high frequency component of epileptiform discharges to determine whether an event involves locally active neurons (Schevon et al. 2012; Weiss et al. 2013). In this regard, there was a striking difference between activity recorded in the hippocampus and the neocortical signals: the early events, including the tonic-clonic ictal events, were associated with only small field events in the hippocampus, and notably, with no measurable high frequency component (Fig. 1A-C), indicating that there is little local neuronal firing.
We therefore considered these early events not to have invaded the local hippocampal networks. Using this high frequency component as the critical marker of ictal involvement, the first hippocampal ictal discharges occurred significantly later than the first neocortical discharges (Fig. 1A arrows; Fig. 1F; neocortex latency, 671 ± 41 s (n = 13); entorhinal cortex, 699 ± 69 s (n = 7); hippocampus, 2238 ± 284 s (n = 11); ANOVA F [2,28] = 25.76, P = 4.5 × 10 −7 ; neocortex vs. hippocampal, post hoc Tukey test, P = 0.001).

Figure 1. Typical pattern of evolving epileptiform activity following wash-out of Mg 2+ ions from the bathing media (0 Mg 2+ model), showing delayed recruitment of hippocampal circuits relative to neocortex. Aa, extracellular recordings (broad band), showing the typical pattern of evolving epileptiform activity following washing out of Mg 2+ . The arrows indicate the first full ictal events, as indicated by intense multiunit (high frequency) activity, in the three recordings. b, schematic representation and photomicrograph of a horizontal brain slice, showing the locations of three extracellular recording electrodes in hippocampus (CA1, red), the entorhinal cortex (EC, blue) and deep layers of the neocortex (NC, black). B, broad band signals show small deflections in the hippocampal field at the time of large neocortical discharges during early ictal-like events (green vertical bar in A), but high pass filtering (C) shows that these hippocampal signals are not associated with any significant unit activity. D and E, similar broad band (D) and high pass filtered (E) expanded views of a representative period of late-stage activity (orange bar in panel A). Note that the prominent high frequency component, indicative of local network firing, is now seen at all three electrode sites. F, boxplot illustrating the pooled data, showing a highly significant delay of the earliest hippocampal epileptiform discharges relative to the first neocortical or entorhinal discharges (ANOVA F [2,28] = 25.76, P = 4.5 × 10 −7 ). The results of individual comparisons (post hoc Tukey test) and the sample sizes (different brain slices; these are not paired recordings) are shown above the data distributions.

Epileptiform discharges in entorhinal cortex evolved in tandem with the neocortical discharges (neocortex vs. entorhinal, not significant; entorhinal vs. hippocampal, post hoc Tukey test, P = 0.001). When, finally, the hippocampal epileptiform discharges began, they showed a fundamentally different pattern, generally being a single large spike and wave discharge lasting 1.26 ± 0.11 s (n = 10), or a short burst of discharges. In a further contrast to the prior neocortical activity, the inter-event intervals were short (2.98 ± 0.78 s, n = 10), compared with the intervals between neocortical tonic-clonic ictal events (1st-2nd event interval = 126.2 ± 17.2 s; 2nd-3rd interval = 117.1 ± 15.1 s; 3rd-4th interval = 68.9 ± 10.6 s). Interestingly, the pattern of neocortical discharges also changed once the hippocampal discharges started, to the same pattern of transient, but regular, spike and wave discharges. Discharges in the two structures, from this time forward, were tightly coordinated (Fig. 1D and E), but with the hippocampal discharges occurring before the neocortical unit activity (delay of onset of neocortical activity, relative to hippocampal activity = 87.1 ± 25.5 ms, n = 8).
A noteworthy feature of these recordings was that tonic-clonic discharges appeared to be a hallmark only of neocortical recordings, with repeated events in every recording (n = 13). In contrast, we recorded such events in hippocampal electrodes in just 7.7% of the slices (1 in 13 recordings). There are, however, published records from rat brain slices of hippocampal tonic-clonic ictal events (Swartzwelder et al. 1987; Lewis et al. 1989), but using much thicker brain slices (625 μm). A key question then was whether this represented a species difference, or if instead the thicker brain slices showed hippocampal tonic-clonic events because of better preserved neuronal connectivity. Our recordings were made typically from the middle sections in the dorsal-ventral axis, but we reasoned that, on account of the curvature of the hippocampus, other levels may show different preservation of the axonal pathways. We therefore examined the more extreme slices, and discovered that the most ventral mouse brain slices (400 μm, n = 4 slices) also showed tonic-clonic activity in CA1 (Fig. 2A), as seen in thick (>600 μm) rat sections; in this regard, therefore, there is no species difference. Notably, the tonic-clonic activity occurred in the entorhinal cortex before the CA1 region (Fig. 2A), suggesting that the CA1 activity was conditional on the entorhinal cortex activity. This was confirmed by separating the two structures, after which the tonic-clonic pattern was maintained in entorhinal cortex, but abolished in CA1 (Fig. 2B), which instead reverted to the spike and wave events already described. We conclude therefore that the early pattern of tonic-clonic activity is a hallmark of neo- and entorhinal cortex, and that instances of such activation in the hippocampus are downstream of activity at these other sites.

Non-canonical propagation pattern of late epileptiform discharges
In contrast, during late stage activity, the hippocampal activity appears to be the pacemaker, entraining the other areas. We hypothesized that the entrainment is mediated through a polysynaptic pathway involving the entorhinal cortex. To test this, we cut away the caudal pole of the brain slice (we refer to these, henceforth, as 'disconnected slices'), thereby entirely removing any potential synaptic pathway. Surprisingly, following the removal of the entorhinal pole, the hippocampal entrainment of neocortical discharges persisted unchanged (neocortex, pre-cut rate = 0.47 ± 0.08 Hz, post-cut = 0.47 ± 0.09 Hz, n = 5, paired t test, P = 0.96; hippocampus, pre-cut rate = 0.48 ± 0.09 Hz, post-cut = 0.49 ± 0.11 Hz, n = 5, paired t test, P = 0.83), and the latency of onset of neocortical activity after hippocampal activity also remained unaltered (pre-cut = 71.1 ± 7.7 ms, post-cut = 62.4 ± 3.6 ms, n = 5, paired t test, P = 0.13). In slices with the entorhinal pole removed from the start of the experiment (simultaneous with washing out Mg 2+ ions), the evolving epileptiform activity showed the same general pattern as in 'intact slices' (slices including the entorhinal pole), albeit at a slightly slower rate (Fig. 3A; neocortex latency, 1035.71 ± 77.8 s, n = 14; intact vs. disconnected, unpaired t test, P = 0.0006). The first hippocampal ictal discharge occurred significantly later than the first neocortical discharge (hippocampus latency, 2356.10 ± 189.10 s, n = 14; latency: hippocampus vs. neocortex, paired t test, P = 0.0004; hippocampus latency, intact vs. disconnected, unpaired t test, P = 0.79).
As with the intact slices, the start of the hippocampal discharges entrained the neocortical activity to the same pattern, despite the absence of any conventional polysynaptic connectivity between the two regions (Fig. 3B-E; neocortical latency, 57.8 ± 9.1 ms, one-sample t test, P = 0.0002; intact vs. disconnected, unpaired t test, P = 0.27). This entrainment was only lost when a second cut was made along the axis of the white matter bundle deep to neocortical layer 6, thereby physically separating the neocortical and hippocampal networks (Fig. 4). In these separated networks, the hippocampal discharge rate increased significantly (pre-cut, 0.35 ± 0.07 Hz; post-cut, 0.49 ± 0.13 Hz; n = 9, paired t test, P = 0.045; Fig. 4C-E), whereas the neocortical discharge rate dropped significantly (pre-cut, 0.31 ± 0.05 Hz; post-cut, 0.12 ± 0.02 Hz; n = 9, paired t test, P = 0.001; Fig. 4C-E). Consistent with these respective changes, the rate of discharge in the isolated neocortex differed significantly from that in the isolated CA territories (post-cut; paired t test, P = 0.011). In tandem with the reduced rate of discharges in the neocortical networks, the duration of events became longer (pre-cut = 1.82 ± 0.21 s, post-cut = 6.70 ± 2.19 s, n = 9, paired t test, P = 0.048; Fig. 4E), and again there was a significant difference between the isolated neocortex and hippocampus (post-cut, n = 9; paired t test, P = 0.041). This result suggests that, in this late stage activity pattern, the interactions between hippocampal and neocortical networks are bidirectional: the hippocampal-to-neocortical influence is reflected in the pacing of neocortex by hippocampus, whereas the opposite influence is manifest as a mild brake on the hippocampal pacing, presumably by the tendency of the neocortical events to be extended, thereby also extending the refractoriness of the hippocampal pacemaker. Consistent with these opposite changes in rates, there was a highly significant drop in the correlation of events in the two networks (n = 9 brain slices; pre-cut R 2 range = 0.83-0.98; post-cut R 2 range = 0.33-0.58; P = 4.1 × 10 −5 ; Wilcoxon rank sum test). We concluded from these experiments that the late stage epileptiform discharges arise in hippocampus, and these act as a pacemaker, driving discharges also in juxtaposed neocortical territories, and that this entrainment can occur independent of synaptic interactions (importantly, this does not exclude the possible involvement of conventional synaptic pathways in the development and propagation of epileptiform activity). The interactions between the areas only really affected the late stage activity, because in slices that were dissected at the start of the experiment, to isolate the neocortex, entorhinal cortex and hippocampal subfields, the time to first ictal events in all three territories was unaltered, relative to recordings from intact slices (isolated neocortex, 607.3 ± 107.3 s, n = 6, unpaired t test vs. intact, P = 0.504; entorhinal cortex, 1057.2 ± 200.1 s, n = 5, unpaired t test vs. intact, P = 0.082; hippocampus, 1595.4 ± 186.3 s, n = 6, unpaired t test vs. intact, P = 0.140). One possible mechanism by which late-stage entrainment may happen is through diffusion of extracellular K + from a local source of intense neuronal activation (Moody et al. 1974; Heinemann & Lux, 1977; Somjen & Giacchino, 1985; Hablitz & Heinemann, 1987), thereby reducing the threshold for recruitment of other neighbouring territories.
To assess the effects of [K + ] o rises, we made simultaneous recordings from four electrodes at two sites, two located in neocortex and two in the CA1 region of the hippocampus, to record the local [K + ] o , using an ionophore tip-filled electrode, and the local field potential (LFP). We found that the largest rises in [K + ] o associated with epileptiform events all occurred during the early tonic-clonic events in neocortex (Fig. 5; note that much larger rises were, on occasion, recorded during spreading depression events, but these did not show high frequency activity denoting local neuronal firing). We analysed 24 events, in six brain slices (intact, not disconnected), for which the multiunit activity showed that the event only occurred at one electrode site (Fig. 5; 23 neocortical events, and 1 hippocampal discharge). Critically, in all cases, the site of unit activity was associated with a large rise in [K + ] o (neocortical examples (n = 23), [K + ] o = 9.55 ± 2.70 mM), but this did not spread to the other recording site (hippocampal [K + ] o = 3.57 ± 0.18 mM; not significantly different from baseline [K + ] o = 3.50 mM; Fig. 5A and B). This was also the case for the single example of a prominent hippocampal discharge without neocortical involvement (hippocampal [K + ] o = 7.26 mM; neocortical [K + ] o = 3.13 mM). This showed that the [K + ] o changes are highly focal, indicating that this entrainment at a distance is not mediated by diffusion of K + .

Figure 3. The late-stage epileptiform discharges are coordinated in hippocampal and neocortical networks through a non-synaptic pathway. A, extended recording of extracellular field potentials in CA1 and neocortex (NC), following wash-out of Mg 2+ , in a disconnected slice, i.e. with entorhinal cortex removed, thereby disconnecting the two regions via any conventional multisynaptic path. As in intact slices, the early discharges showed pronounced unit activity in neocortex, but not in the CA1 pyramidal layer (inset, green box). B and C, expanded view of wide-band (B) and high-pass (>300 Hz) filtered (C) late stage activity in the same slice, showing prominent levels of unit activity in both territories. D, the same traces filtered by a moving template of an average discharge. E, graphical representation of data from 9 brain slices, showing that in synaptically disconnected, hippocampal-neocortical slices, late-stage discharges in neocortex (NC) follow hippocampal (CA) discharges (average lag = 57.8 ± 9.1 ms; one-sample t test: P = 0.0002, n = 9).

Figure 4. Entrainment of discharges is lost following physical separation of hippocampal and neocortical networks. A, photomicrograph and schematic representation showing the electrode placements in physically separated CA1 and neocortical (NC) areas, derived from a single horizontal brain slice, together with a period of late stage epileptiform discharges. Note the desynchronized discharges in the two territories, with a far slower rate of discharges in the neocortical tissue. Ba, further expansions show the broadband signal of the de-synchronized hippocampal and neocortical discharges. b, prominent unit activity is seen in both territories. C, the relative rates of epileptiform discharges in the two territories before and after physical separation. In disconnected slices ('pre-separation'), the rates were equivalent (n.s., n = 9), but following physical separation of the tissues, the rates are significantly different (paired t test, P = 0.011, n = 9). D, comparisons of discharge rates before and after physical separation of the hippocampal and neocortical tissues. Note how the neocortical data all fall below the line of unity, indicating a consistent slowing of the rate of discharges there (black stars; paired t test, P = 0.001, n = 9). In contrast, the hippocampal data tend to lie above the line, indicative of an increase in hippocampal rate after the separation (red stars; paired t test, P = 0.045, n = 9). E, the duration and inter-event intervals in the physically separated hippocampal and neocortical tissues, normalized to the values in the pre-cut brain slice. (Error bars depict the standard deviation.)

We next examined the late stage activity in which there was multiunit activation at both neocortical and hippocampal locations (41 events from 6 slices; Fig. 5C and D). As expected, both sites also showed significant rises in [K + ] o associated with these bursts of local neuronal firing, and in all cases the main rise came after the local peak in the high frequency filtered LFP signal. There are, however, inherent problems with comparing timing between high and low bandpass-filtered signals, so we performed a further analysis comparing the latency of the rise in [K + ] o between those events which led in the hippocampal electrode (n = 30) and those that led in the neocortical electrode (n = 11). We reasoned that if activity in the follower territory was being triggered by a rise in [K + ] o diffused from the other site, then for those events, the rise would appear to occur significantly earlier relative to the local firing. In fact, there was no significant difference in latency between 'leader' and 'follower' events for either the hippocampal or the neocortical recording sites (Fig. 5D), leading us to conclude that in both groups, the local [K + ] o rise reflected, rather than caused, the local firing. We did observe that the latency for the [K + ] o rise in hippocampal circuits was significantly shorter than for neocortical circuits (unpaired t test; P = 0.00015), perhaps reflective of the more densely packed neurons in the hippocampus. Collectively, these various analyses of [K + ] o rises associated with local neuronal firing indicate that the entrainment of neocortical events by the hippocampal activity in this preparation does not happen by diffusion of K + ions. Instead, it is likely to occur by the distant effects of a field potential onto circuits that are already highly excitable. This type of entrainment, we suggest, is also possible in vivo, during clinical epileptic events.

Region-specific differences in drug sensitivity influence epileptic activity patterns
Previous work suggests that the different phases of evolving activity in this model may show differential sensitivity to drug manipulation. We investigated whether this may relate to regional sensitivity. One promising candidate is the GABA B agonist, baclofen, which was reported to reverse the evolving pattern of activity, inducing a switch from what we term late stage activity (Swartzwelder et al. 1987; Lewis et al. 1989) (in the original description this was termed 'interictal' activity; see Methods, Terminology) into tonic-clonic events.

Figure 6. Hippocampal, but not neocortical, epileptiform activity is suppressed by GABA B activation. A, the GABA B agonist baclofen, when applied simultaneously with the wash-out of Mg 2+ ions, blocks any developing hippocampal activity, but does not suppress the development of tonic-clonic-like events in neocortex. B, baclofen also blocks the hippocampal activity after it has started, thereby reversing the late stage pattern, and initiating the tonic-clonic-like events that characterize the neocortical pattern. C, enlarged views of the discharges at the times indicated in B.

These experiments were performed on thick (600 μm) rat brain slices, but since the activity generalized throughout the slice, the authors of the previous studies did not relate this to the source of activity. We therefore repeated these experiments on 400 μm mouse brain slices to investigate whether the effect was location specific (Fig. 6; in keeping with these prior studies, we recorded from CA3 in these experiments). We found that bath application of the GABA B agonist baclofen (10 μM) did indeed reverse the late-stage pattern (4 out of 4 slices), suppressing entirely the hippocampal bursting, and with the reappearance of tonic-clonic events in neocortex (Fig. 6B). Furthermore, if baclofen was applied from the start of the recording (when washing out Mg 2+ ions), the tonic-clonic epileptiform events took longer to establish (0 Mg 2+ latency = 609 ± 31 s (n = 5 slices); baclofen with 0 Mg 2+ latency = 1573 ± 176 s (n = 4); unpaired t test, P = 0.0005), but once established they were maintained for the entire duration of the recordings, and hippocampal discharges never initiated (Fig. 6A). These results further support our conclusion that the different patterns of epileptiform bursting appear pathognomonic of the territories from which they originate. This region-specific difference also has a parallel in another under-appreciated feature of brain slice models, which is that application of the K + channel blocker 4-aminopyridine has the exact opposite regional specificity to the 0 Mg 2+ model, inducing epileptiform discharges in hippocampal territories significantly in advance of those in neocortex (Fig. 7; neocortical latency = 605 ± 22 s (n = 9 slices); CA3 latency = 487 ± 17 s (n = 9); paired t test, P = 0.04). Notably, the early hippocampal activity in this instance does not typically entrain the neocortical territories, indicating that this entrainment requires changes in the local excitability. Later, though, hippocampal entrainment does occur (Fig. 8). Entrainment between hippocampal territories and neocortex was not altered by removal of the entorhinal pole (neocortical latency: intact = 45.1 ± 8.2 ms, disconnected = 39.3 ± 2.8 ms, n = 8, paired t test, P = 0.44), but as with the 0 Mg 2+ model, entrainment was broken by physically separating the hippocampal territories from neocortex (Fig. 8B and D), leading once again to a slowing of the neocortical rhythm (pre-cut, 0.57 ± 0.10 Hz; post-cut, 0.05 ± 0.01 Hz; n = 9; paired t test, P = 0.0011; Fig. 8C-E) and an increase in the hippocampal rhythm (pre-cut, 0.56 ± 0.10 Hz; post-cut, 0.74 ± 0.13 Hz; n = 9, paired t test, P = 0.0301). Consistent with these changes, there was also a significant difference between the isolated neocortex and the isolated hippocampal CA territories (post-cut, n = 9; paired t test, P = 0.0011, Fig. 8C-E).
There was also a trend towards an increase in the duration of events in neocortex (pre-cut, 1.23 ± 0.11 s; post-cut, 6.23 ± 1.61 s; n = 9; paired t test, P = 0.0161; Fig. 8E), but not in the hippocampus (pre-cut, 1.17 ± 0.11 s; post-cut, 0.97 ± 0.06 s; n = 9; paired t test, P = 0.106; Fig. 8E). The isolated neocortex showed significantly longer events than the isolated hippocampus (post-cut, n = 9; paired t test, P = 0.0147). In all these features, the 4-aminopyridine and the 0 Mg 2+ models were comparable. Finally, we examined the effect of baclofen on the activity patterns in 4-aminopyridine (Fig. 9). When baclofen was applied from the start of the recording, simultaneous with the wash-in of 4-aminopyridine, full ictal activity evolved in the neocortical structures, and in 2 of 7 slices (28.6% of slices) it persisted throughout the recording, with no transition to late-stage activity patterns (Fig. 9A). Baclofen delayed the onset of hippocampal discharges (Fig. 9C; latency in 4-AP = 487 ± 51 s (n = 9), in 4-AP with baclofen = 807 ± 73 s (n = 7)).

Figure 9. Baclofen has a smaller effect on 4-aminopyridine-induced activity than it does on 0 Mg 2+ -induced activity. A, prolonged (>1 h) recording of a brain slice bathed in both baclofen and 4-aminopyridine (4-AP) from the start of the recording. B, recording of a brain slice in which baclofen was applied only after the late-stage activity pattern was reached. The activity was partially reversed, although unlike the 0 Mg 2+ case, there were continued hippocampal discharges. C, baclofen, when applied from the start of the experiment,

In summary, we conclude that certain electrophysiological transitions arise from switches in the source of the pathological discharges, and reflect brain region-specific differences in the propensity to support epileptiform discharges, and in the electrophysiological signatures of these discharges. These switches between the focal sources can occur spontaneously, presumably reflecting local or cellular changes in the network excitability, but can also be influenced pharmacologically, indicating brain region-specific differences, too, in their drug sensitivity.

Discussion
We have provided demonstrations of several key principles of epileptic pathophysiology. The first is that transitions in the pattern of activity can reflect shifts in the source of the discharges. A second key finding is that brain regions differ in how epileptic discharges manifest in the local circuits. A third principle is that when cortical networks have raised levels of excitability, they can be entrained through non-synaptic paths. These conclusions arose because there appear to be very considerable differences between the patterns of epileptic discharges in neocortex and the CA territories. Entorhinal cortex appears to follow the neocortical pattern, while the different CA territories appear broadly similar, although we do not discount the possibility that there may yet be subtle differences between these. It is important to realize that these are not the only changes underlying the development of epileptic activity (Whittington et al. 1995; Fujiwara-Tsukamoto et al. 2007; Ellender et al. 2014). However, these various experiments do provide evidence of the interesting interplay between areas that are driving the pathology (the source, or ictal focus) and the susceptibility of secondary territories to be recruited.
Of course, these acute brain slice preparations clearly do not incorporate all facets of the epileptic condition, but the substrates for all three of these principles do exist in vivo, and so, we would argue, all are likely to be relevant in spontaneously occurring seizures in humans. The main focus of our studies was the marked transition from early tonic-clonic activity to a late-stage pattern of repeated spike and wave discharges, typically occurring every 2-10 s. This builds upon previous work done mainly using rat brain slices (Swartzwelder et al. 1986b; Mody et al. 1987; Anderson et al. 1990; Dreier & Heinemann, 1990, 1991; Bragdon et al. 1992; Morrisett et al. 1993; Zhang et al. 1995; Dreier et al. 1998), but we extend this in two important ways. The first is that, with the development of various mouse models carrying genetic mutations associated with human epileptic conditions (Yu et al. 2006; Asinof et al. 2015), our studies provide important confirmation that the evolving activity patterns in these models follow the same pattern in mice as they do in rats. This will facilitate a productive line of investigations regarding exactly how specific genetic mutations impact on network stability, using the 0 Mg 2+ and 4-AP models. Since these two models induce distinct, and yet highly characteristic, activity patterns in brain slices from normal cortex, we expect that when these same models are applied to slices from epileptic, transgenic animals, they would provide indications of exactly where in the network the transgene has its effect. We described this approach as a kind of 'stress-test', to understand how genetic mutations alter network performance (Parrish & Trevelyan, 2018). The second advance has been to clarify that different brain territories sustain characteristic epileptic discharge patterns. The extension of cortical area involvement with different pharmacological sensitivities, as well as the potential for non-canonical seizure propagation, may both contribute to pharmaco-resistance (Heinemann et al. 1994a). The tonic-clonic pattern of discharges appears to be a feature only of discharges arising in neo- and entorhinal cortex, but not hippocampus; when such activity is seen in hippocampus, it appears to be relayed there from the entorhinal cortex. A similar result has been reported previously (Shi et al. 2014); that study noted that brain slices containing only the dentate and CA territories did not sustain ictal-like events. In contrast, slices that also included entorhinal cortex and neocortical territories showed ictal-like events relayed into the hippocampus from the entorhinal cortex. Also, a study of resected human sclerotic hippocampal tissue found that sustained ictal-like events were almost never recorded in the CA territories (Reyes-Garcia et al. 2018). We have yet to explore the subicular and parasubicular territories. The hippocampal activity starts very late in the 0 Mg 2+ model, but very early in the 4-AP model. To the best of our knowledge, this key difference between these two very widely used models, which illustrates the principle about differing network susceptibility to seizures, has not been reported previously. The hippocampal discharges entrain the other territories in the late 0 Mg 2+ model, but not in the early 4-AP activity, illustrating that the entrainment requires an increase in susceptibility (excitability) in the follower territories.
Thus, while we emphasize that the explicit explanation of the transition is a shift in the source of the discharges, this must be underpinned by changes at the local network/cellular level, which alter the excitability of the networks. Another important point, with clinical relevance, is that sudden changes in the local pattern of activity can be indicative of a shift in the source of the pathological driver. Thus, unexplained sudden transitions in discharge patterns may be indicative of multiple foci, an issue of great importance when considering surgical approaches to management. Previously, we showed that a change in the direction of propagation of individual discharges is a marker of the passage of the ictal wavefront (Trevelyan et al. 2007; Smith et al. 2016). Our current study now shows that sudden changes in the pattern of activity recorded in neocortex reflect the appearance, or cessation (suppressed by GABA B ), of a different pacemaker source, in this case within the hippocampal territories. The late stage activity, which our studies indicate is a primarily hippocampal pattern, has been termed 'interictal' activity by many observers, relating this to the clinical distinction between clinically manifest seizures, which presumably involve some motor territories in the brain, and epileptic electrophysiological discharges that are virtually clinically silent. The implication is that interictal discharges are restricted to areas that are less 'eloquent', but that disregards what may be more subtle effects on brain function. Indeed, increasing evidence now exists about the potential impact of interictal discharges on memory (Binnie et al. 1987; Kleen et al. 2010); such effects, occurring without an explicit motor component, are entirely consistent with a hippocampal discharge. An important on-going debate has centred on the clinical significance of these events, specifically with regard to treatments predicated largely on the EEG findings. However, the clear demonstration that these are susceptible to GABA B agonists provides a means to examine this. GABA B agonists have been considered for treating epilepsy previously, but gave mixed results as assessed by seizure control (Terrence et al. 1983). We suggest, however, that they might be considered as adjunctive therapy to more conventional anti-epileptics, with the aim of reducing interictal activity with a presumptive hippocampal origin, and thereby ameliorating memory dysfunction comorbidity. This is also consistent with recent work indicating that focal targeting of dentate function can impact on both memory issues and seizure severity (Liou et al. 2018; Scharfman, 2018). Baclofen therapy is not entirely straightforward, because at different doses it appears to induce divergent effects on the hippocampus (Dugladze et al. 2013), but it might be possible to calibrate the dose using EEG monitoring in individual patients. Finally, we provide a proof of principle demonstration of entrainment of epileptiform discharges at a distance, through a non-synaptic mechanism. This is not mediated through diffusion of [K + ] o , since any rises of [K + ] o appear to remain very local to the site of neuronal activity. Rather, the entrainment is likely to arise through volume conduction of the field potential.
Given the size of field fluctuations recorded even outside the skull during seizures, it is reasonable to presume that such entrainment across brain territories might also occur in spontaneous seizures, giving rise to complex patterns of spread. Of course, we stress that this demonstration of a non-canonical mode of spread does not downgrade the clear importance of conventional, synaptically mediated spread. A notable feature of this pattern of spread is that we only see it in a very particular situation: spreading into tissue that is already hyperexcitable, with a history of repeated epileptiform discharges. Thus, the specific instances of non-synaptic spread occur only in what we have termed 'late-stage' epileptiform activity, in the 0 Mg 2+ model; it does not occur with the early hippocampal discharges in 4-AP, nor in the early neocortical discharges in 0 Mg 2+ . However, the fact that separating the neocortex and hippocampus influences this late stage activity in both directions (the neocortex shows a significant slowing of the rate of discharges, whereas the hippocampal rate increases significantly) indicates that the interactions are indeed bidirectional in hyperexcitable networks. This suggests, first, that the neocortical discharges, which tend to last longer than the hippocampal ones, may impose additional refractoriness, and second, that the critical determinant of spread is that the follower network is 'primed' for activation. We follow Jefferys's nomenclature (Jefferys, 1995) in avoiding the use of the term 'ephaptic spread', since he reserves this term for activation of juxtaposing cells (it derives from the Greek word 'to touch'), whereas the effect we describe clearly occurs at a distance. We suggest that this occurs through a distant field effect. Even though this might be considered a relatively weak effect, there is an important precedent for this result, whereby epileptiform discharges can be entrained by minimal activation in an already hyperexcitable network: this is the demonstration that bursts of action potentials of a single pyramidal cell can entrain these discharges in disinhibited hippocampal networks (Miles & Wong, 1983). In conclusion, we have demonstrated several key principles of network interactions in epileptic pathophysiology. Although the precise manner in which they are manifest may be slightly different in a chronically epileptic subject, these phenomena are highly likely to be relevant also in vivo, and may inform our interpretation of clinical electrophysiology.
Polynomial Root Isolation by Means of Root Radii Approximation

Univariate polynomial root-finding is a classical subject, still important for modern computing. Frequently one seeks just the real roots of a real coefficient polynomial. They can be approximated at a low computational cost if the polynomial has no nonreal roots, but for high degree polynomials, nonreal roots are typically much more numerous than the real ones. The challenge has been known for a long time, and the subject has been intensively studied. The Boolean cost bounds for the refinement of the simple and isolated real roots have been decreased to nearly optimal, but the success has been more limited at the stage of the isolation of real roots. We obtain substantial progress by applying the 1982 algorithm of Schoenhage for the approximation of the root radii, that is, the distances of the roots to the origin. Namely, we isolate the simple and well-conditioned real roots of a polynomial at a Boolean cost dominated by the nearly optimal bounds for the refinement of such roots. We also extend our algorithm to the isolation of complex, possibly multiple, roots and root clusters, staying within the same (nearly optimal) asymptotic Boolean cost bound. Our numerical tests with benchmark polynomials, performed with the IEEE standard double precision, show that our nearly optimal real root-finder is practically promising. Our techniques are simple, and their power and application range may increase in combination with the known efficient methods.

Introduction
Assume a univariate polynomial of degree n with real coefficients,

p(x) = \sum_{i=0}^{n} p_i x^i = p_n \prod_{j=1}^{n} (x - x_j), \quad p_n \neq 0, \qquad (1.1)

which has r real roots x_1, . . . , x_r and s = (n − r)/2 pairs of nonreal complex conjugate roots. In some applications, e.g., to algebraic and geometric optimization, one seeks only the r real roots, which make up just a small fraction of all roots. This is a well studied subject (see [EPT14, Section 10.3.5], [PT13], [SMa], and the bibliography therein), but the most popular numerical packages of root-finding subroutines, such as MPSolve 2.0 [BF00], Eigensolve [F02], and MPSolve 3.0 [BR14], approximate the r real roots about as fast, and as slow, as all the n complex roots. It may be surprising, but by combining some well known but widely ignored algorithms for the approximation of the root radii, that is, the distances of the roots to the origin (notably Dandelin's classical root-squaring iteration [H59]), with root proximity tests and Newton's iteration, and by properly exploiting the geometry of the complex plane, we accelerate the solution by a factor of n/r. This acceleration is dramatic in the cited important applications. We assume that near every real root there are no other roots of the polynomial p(x), but we can weaken this assumption (see Remark 2.7), and unlike [PT13], we do not need to assume that any initial approximations to the real roots are available. We confirm the efficiency of our techniques with estimates for their Boolean complexity and with the results of our numerical tests, in which the number of iterations required for convergence of our algorithms grew very slowly as we increased the degree of the polynomials from 64 to 1024. Our techniques are very simple, and we point out further modifications that may produce efficient complex polynomial root-finders under some mild lower bounds on the distances between the roots. We organize our paper as follows.
In the next section we cover the auxiliary results and our techniques as well as our main algorithm, together with the arithmetic cost estimates. This can be viewed as a high level description, preparing the more detailed analysis of Section 3, where we estimate the Boolean complexity of our main algorithm. Section 4, the contribution of the third author, presents the results of our numerical tests. In Section 5 we very briefly comment on an extension to the approximation of all complex roots.

Real Polynomial Root-finding by Means of the Root-radii Approximation
Hereafter 'flop' stands for 'arithmetic operation'. O_B(·) and Õ_B(·) denote the Boolean complexity up to some constant and polylogarithmic factors, respectively.

Some Maps of the Variables and the Roots
Some basic maps of polynomial roots can be computed at a linear or nearly linear arithmetic cost.

Theorem 2.1. (Root Inversion, Shift and Scaling, cf. [P01].) (i) Given a polynomial p(x) of (1.1) and two scalars a and b, one can compute the coefficients of the polynomial q(x) = p(ax + b) by using O(n log(n)) flops. This bound decreases to 2n − 1 multiplications if b = 0. (ii) Reversing a polynomial inverts all its roots and involves no flops, that is, p_rev(x) = x^n p(1/x) = \sum_{i=0}^{n} p_i x^{n-i} = p_n \prod_{j=1}^{n} (1 - x x_j).

Note that by shifting and scaling the variable, we can move all roots of p(x) into a fixed disc, e.g., D(0, 1) = {x : |x| ≤ 1}.

Theorem 2.2. (Dandelin's Root Squaring, cf. [H59].) (i) Let a polynomial p(x) of (1.1) be monic. Then q(x) = (−1)^n p(\sqrt{x}) p(−\sqrt{x}) = \prod_{j=1}^{n} (x - x_j^2) is a monic polynomial whose roots are the squares of the roots of p(x). (ii) One can evaluate p(x) at the k-th roots of unity for k > 2n and then interpolate to q(x) by using O(k log(k)) flops overall.

Remark 2.1. Recursive root-squaring is prone to numerical stability problems because the coefficients of the iterated polynomials very quickly span many orders of magnitude. Somewhat surprisingly, the Boolean complexity of the recursive root-squaring process is still reasonable if high output precision is required [P95], [P02], and we confirm and strengthen this observation with our new study. Note also that one can avoid the numerical stability problems and perform all iterations with the standard IEEE double precision by applying a special tangential representation and renormalization of the coefficients and the intermediate results, proposed in [MZ01]. In this case the computations involve more general operations than flops, and in terms of the CPU time the computational cost per iteration has the same order as n^2 flops.

Isolated Discs, Root Radii, Distances to the Roots, the Proximity Tests, and Counting the Roots in a Disc
In this subsection we estimate the distances to the roots of p(x) from the origin and a fixed complex point, as well as the number of roots in an isolated disc. We can use the following result if we agree to perform computations with extended precision (see Remark 2.4).

Theorem 2.3. Given a polynomial p(x) of (1.1) and two positive constants c and d, one can approximate all n root radii r_1, . . . , r_n within the relative error bound c/n^d by using O(n log^2(n)) flops.

Proof. (Cf. [S82], [P00, Section 4].) At first fix a sufficiently large integer k and apply k times the root-squaring of Theorem 2.2, which involves O(kn log(n)) flops. Then apply the algorithm of [S82] to approximate all root radii r_j^(k) = r_j^{2^k}, j = 1, . . . , n, of the output polynomial p_k(x) within a factor of 2n by using O(n) flops. By taking the 2^k-th roots, approximate the root radii r_1, . . . , r_n within a factor of (2n)^{1/2^k}, which is at most 1 + c/n^d for k of the order of log(n).
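The proof's squaring-then-estimate recipe can be illustrated with a small, self-contained Python sketch for the largest root radius r_1. This is our own illustration, not the O(n) algorithm of [S82]: it uses the elementary coefficient estimate t = max_j |p_{n-j}/p_n|^{1/j}, which standard bounds (e.g., Fujiwara's) place within a factor of roughly 2n of r_1, so that taking 2^k-th roots after k squarings shrinks the factor to about (2n)^{1/2^k}. The per-step rescaling leaves the coefficient ratios, and hence the radius estimate, unchanged while guarding against floating-point overflow.

```python
import numpy as np

def graeffe_step(c):
    """One Dandelin-Graeffe root-squaring step.

    c: coefficients c[0..n], with c[i] multiplying x**i.
    Returns coefficients of q with q(x**2) = (-1)**n * p(x) * p(-x),
    so the roots of q are the squares of the roots of p.
    """
    n = len(c) - 1
    e = c[0::2]                    # even part E: p(x) = E(x^2) + x*O(x^2)
    o = c[1::2]                    # odd part O
    e2 = np.convolve(e, e)         # E(y)^2
    o2 = np.convolve(o, o)         # O(y)^2
    q = np.zeros(n + 1)
    q[:len(e2)] += e2
    q[1:len(o2) + 1] -= o2         # E(y)^2 - y*O(y)^2
    if n % 2:
        q = -q                     # the (-1)**n sign
    return q / np.max(np.abs(q))   # rescale; ratios (hence radii) unchanged

def largest_root_radius(c, k=6):
    """Approximate r_1 = max_j |x_j| within a factor of about (2n)**(1/2**k)."""
    for _ in range(k):
        c = graeffe_step(c)
    n = len(c) - 1
    t = max(abs(c[n - j] / c[n]) ** (1.0 / j) for j in range(1, n + 1))
    return t ** (1.0 / 2 ** k)     # undo the k squarings
```

For example, for p(x) = (x − 3)(x − 1)(x + 1/2), with coefficients c = [1.5, 1.0, -3.5, 1.0] and k = 6, the sketch returns a value close to r_1 = 3, consistent with the factor (2n)^{1/2^k} ≈ 1.03 here.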
Remark 2.2. Alternatively we can approximate the root radii by applying the Gerschgörin theorem to the companion or generalized companion matrices of the polynomial p(x) [C91], by applying the heuristic method of [B96], used in the packages MPSolve 2000 and 2012 [BF00], [BR14], or by recursively applying Theorem 2.6, to be stated later, although none of these techniques supports competitive complexity estimates. The following two theorems bound the largest root radius $r_1$ of the polynomial p(x). Both theorems can be immediately extended to the approximation of the smallest root radius $r_n$ because it is the reciprocal of the largest root radius of the reverse polynomial $p_{\mathrm{rev}}(x) = x^n p(1/x)$ (cf. Theorem 2.1). Moreover, by shifting a complex point c into the origin (cf. Theorem 2.1), we can turn our estimates for the root radii into estimates for the distances of the roots to the point c. Approximation of the smallest distance from a complex point c to a root of p(x) is called the proximity test at this point. One can perform such a test by applying Theorem 2.4 or 2.5. Alternatively, for proximity tests by action at a point c or at n points, one can apply Newton's iterations and estimate the distance to the roots by observing convergence or divergence of the iterations. Theorem 2.5 and these iterations can be applied even where a polynomial p(x) is defined by a black box subroutine for its evaluation rather than by its coefficients. The algorithm supporting the following theorem, given by [R87, Lemma 7.1] (cf. also [S82, Theorem 14.1]), can be applied as a proximity test, but it produces more information than just the distance to the closest root: it computes the number of roots in an isolated disc. Definition 2.1. A disc D(X, r) is said to be γ-isolated for a polynomial p(x) and γ > 1 if it contains all roots of the polynomial lying in the disc D(X, γr). In this case we say that the disc has the isolation ratio at least γ. We say that a root x of a polynomial is σ-isolated relative to a complex point c for σ > 0 if |y − x| > σ|x − c| for every other root y of the polynomial. If c = 0 we call such a root just σ-isolated. Theorem 2.6. [R87, Lemma 7.1] It is sufficient to perform FFT at $n' = 16\lceil \log_2 n \rceil$ points (using $1.5 n' \log(n')$ flops) and O(n) additional flops and comparisons of real numbers with 0 in order to compute the number of roots of a polynomial p(x) of (1.1) in a 9-isolated disc D(0, r). Remark 2.3. The algorithm of [R87] supporting Theorem 2.6 only uses the signs of the real and imaginary parts of the n output values of the FFT. For some groups of the values, the pairs of signs stay invariant and can be represented by a single pair of signs. Can this observation be exploited in order to decrease the computational cost of performing the algorithm? Corollary 2.1. It is sufficient to perform O(hn log(n)) flops and O(n) comparisons of real numbers with 0 in order to compute the number of roots of a polynomial p(x) of (1.1) in an s-isolated disc D(0, r) for $s = 9^{1/2^h}$ and for any positive integer h. Proof. Every root-squaring of Theorem 2.2 squares all root radii and the isolation ratios of all discs D(0, r). Suppose h repeated squaring iterations map a polynomial p(x) into $p_h(x)$, for which the disc D(0, 1) is 9-isolated. Then we can compute the number of roots of $p_h(x)$ in this disc by applying Theorem 2.6; this number is the same as the number of roots of p(x). Remark 2.4.
In view of Remark 2.1, one must apply the slower operations of [MZ01] or high precision computations in order to support even a moderately long sequence of root-squaring iterations, but in some cases it is sufficient to apply Corollary 2.1 for small positive integers h. Note that $9^{1/2^{h+1}}$ is equal to 1.3160... for h = 2, to 1.1472... for h = 3, to 1.0710... for h = 4, and to 1.0349... for h = 5. Convergence of Newton's Iteration Newton's iteration is defined by
$$z_{h+1} = z_h - \frac{p(z_h)}{p'(z_h)}, \quad h = 0, 1, \ldots \qquad (2.1)$$
The following theorem states that Newton's iteration (2.1) converges to an isolated root of a polynomial p(x) of (1.1) with quadratic rate globally, that is, right from the start. Theorem 2.7. Suppose that a simple root of the polynomial p(x) and an initial approximation $z_0$ to it lie in the same 3n-isolated disc containing no other roots of p(x). Then Newton's iteration (2.1) initiated at $z_0$ converges to this root with quadratic rate. A Real Root-finder Based on the Root-radii Approximation Algorithm 2.1. Real root-finding by means of root radii approximation. Input: two integers n and r, 0 < r < n, two real constants c and d, and the coefficients of a polynomial p(x) of equation (1.1) all of whose real roots $x_1, \ldots, x_r$ are $c/n^d$-isolated (cf. Remark 2.7). Output: approximations to the real roots $x_1, \ldots, x_r$ of the polynomial p(x). Computations: 1. Compute approximations $\tilde r_1, \ldots, \tilde r_n$ to the root radii of the polynomial p(x) of (1.1) within the relative error bound $\frac{1}{3} c/n^{d+1}$ (see Theorem 2.3). (This defines 2n candidate points $\pm \tilde r_1, \ldots, \pm \tilde r_n$ for the approximation of the r real roots $x_1, \ldots, x_r$.) 2. At all of these 2n points, apply one of the proximity tests of Section 2.2 to select r approximations to the r real roots of the polynomial p(x). 3. Apply Newton's iteration (2.1) concurrently at these r points, expecting to refine quickly the approximations to the isolated simple real roots. One can ensure the numerical stability of the computations at Stage 1 by applying the techniques of [MZ01] for the root-squaring iteration. Can we accelerate the computations by applying the algorithm of Theorem 2.6 and the observations of Remark 2.3? Remark 2.5. (Refinement by means of Newton's iteration.) For every h, h = k, k − 1, . . . , 0, we can apply concurrently Newton's iteration (2.1) at the r approximations $x_j^{(h)}$, j = 1, . . . , r, to the r real roots of the polynomial $p_h(x)$. We can perform an iteration loop by using $O(n \log^2(r))$ flops, that is, $O(nl \log^2(r))$ flops in l loops (cf. [P01, Section 3.1]), and include these flops into the overall arithmetic cost of order kn log(n) for performing the algorithm. We can perform the proximity tests of Stage 2 of the algorithm by applying Newton's iteration at all 2n candidate approximation points. Having selected r of them, we can continue applying the iteration at these points to refine the approximations. Remark 2.6. (Handling the Nearly Real Roots.) Proximity tests at Stage 2 can produce more than r candidates for being real roots because nearly real roots and real roots are hard to distinguish in numerical computations with rounding errors. Nevertheless, at Stage 3 we can distinguish them if Newton's iterations (2.1) do not converge to nonreal roots, because by assumption the real roots are sufficiently well isolated and approximated to ensure the assumptions of Renegar-Tilli's Theorem 2.7 for global quadratic convergence. If Newton's iterations converge to a nonreal root globally with quadratic rate as well, then we should soon see that this root is nonreal, unless it is extremely close to the real axis, in which case it can be counted as a real root in most applications.
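The flow of Algorithm 2.1 can be illustrated with the following toy sketch (our illustration, not the authors' implementation): the 2n candidate points are tried, Newton's iteration (2.1) serves as the proximity test by action, and the candidates converging to real values are kept. For brevity, the radii are extracted from numpy.roots rather than computed by the root-radii algorithm of Theorem 2.3, so the sketch models only the logic of the algorithm, not its cost.

import numpy as np

def newton(p, dp, z, iters=30):
    # Newton's iteration (2.1): z <- z - p(z)/p'(z)
    for _ in range(iters):
        dz = np.polyval(dp, z)
        if dz == 0.0:
            break
        z = z - np.polyval(p, z) / dz
    return z

def real_roots_via_radii(p, tol=1e-8):
    dp = np.polyder(p)
    radii = np.abs(np.roots(p))        # stand-in for the radii of Theorem 2.3
    found = []
    for r in radii:
        for cand in (r, -r):           # the 2n candidate points of Stage 1
            z = newton(p, dp, cand)    # Stages 2-3: test by action and refine
            if abs(np.polyval(p, z)) < tol and \
               all(abs(z - w) > 1e-6 for w in found):
                found.append(z)
    return sorted(found)

# Example: x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(real_roots_via_radii(np.array([1.0, -6.0, 11.0, -6.0])))  # approximately [1.0, 2.0, 3.0]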
Remark 2.7. One can weaken the assumption about the isolation of the real roots (e.g., one can handle the approximation of multiple real roots and their clusters) by applying recursively Theorem 2.6 or Corollary 2.1 for generalized proximity tests, or by applying Newton's iteration to the derivative or a higher order derivative of the polynomial p(x). On the Boolean complexity of Algorithm 2.1 In this section we find it more convenient to denote the input polynomial by f rather than p. We need the following lemma from [PT13, PT14c] on polynomial multiplication. Lemma 3.1. Let A and B be two polynomials of degree at most d such that $\|A\|_\infty \le 2^{\tau_1}$ and $\|B\|_\infty \le 2^{\tau_2}$. Let C denote the product AB and let $K = 2^k \ge 2d + 1$ for a positive integer k. Write $\lambda = \ell + 2\tau_1 + 2\tau_2 + 5.1 \lg K + 4$. Assume that we know the coefficients of A and B up to the precision λ, that is, that the input includes two polynomials $\tilde A$ and $\tilde B$ such that $\|A - \tilde A\|_\infty \le 2^{-\lambda}$ and $\|B - \tilde B\|_\infty \le 2^{-\lambda}$. Then we can compute in $O_B(d \lg d\, \mu(\ell + \tau_1 + \tau_2 + \lg d))$ a polynomial $\tilde C$ such that $\|C - \tilde C\|_\infty \le 2^{-\ell}$. Moreover, $\|\tilde C\|_\infty \le 2^{\tau_1 + \tau_2 + 2 \lg K}$. We also need the following result from [S82, Theorem 19.1]. Theorem 3.1. Let f be a polynomial of degree n with all its roots in the unit disc. Let $\tilde f$ be a λ-approximation of f, that is, $\|f - \tilde f\|_\infty \le 2^{-\lambda}$. Then the roots $\tilde\alpha_1, \ldots, \tilde\alpha_n$ of $\tilde f$ and the roots $\alpha_1, \ldots, \alpha_n$ of f can be numbered so that, for $j \in [n]$, $|\alpha_j - \tilde\alpha_j| \le 2^{-\lambda/n + 2\lg(n)/n + 2}$. If the roots are bounded by r, then a term lg(r) should be added in the exponent. The Complexity of Root Radii Approximations Given a polynomial f(x), Dandelin's root-squaring operator is the map $f(x) \mapsto f_1(y) = (-1)^d f(\sqrt{y})\, f(-\sqrt{y})$, where $y = x^2$ (cf. Theorem 2.2). By using the representation with the square roots we obtain an output polynomial of the same degree as f. We need the following lemma, which bounds the propagation errors and the height of the polynomials computed in a sequence of Dandelin's iterations. Lemma 3.2. Let $f \in \mathbb{C}[x]$ be a polynomial of degree d such that $\|f\|_\infty \le 2^\tau$. Let the polynomial $f_k$ be the output of k Dandelin's root-squaring iterations applied to f, and let $N = 2^k$. Then $\|f_k\|_\infty \le 2^{h(k)}$ for $h(k) = 2^k \tau + (2^{k+1} - 2)\lg(d) + 4 \cdot 2^k - 4$. Moreover, if the iterations are initialized with a λ-approximation $\tilde f_0$ of f, then $\|f_k - \tilde f_k\|_\infty \le 2^{E(k)}$ for $E(k) = -\lambda + O(2^k(\tau + \lg d))$. Proof. We prove the upper bound on the height by induction. For k = 1, we perform the multiplication f(x) f(−x), and by Lemma 3.1, $\|f_1\|_\infty \le 2^{h(1)}$, where h(1) = 2τ + 2 lg d + 4, which agrees with our bound. Assume that the claimed bound holds up to k − 1. At step k we perform the multiplication $f_{k-1}(-x) f_{k-1}(x)$. By the induction hypothesis, $\|f_{k-1}\|_\infty \le 2^{h(k-1)}$, where $h(k-1) = 2^{k-1}\tau + (2^k - 2)\lg(d) + 4 \cdot 2^{k-1} - 4$. By applying Lemma 3.1 we obtain the desired bound. Next we prove the bound on the approximation errors. Let $2^{E(k)}$ bound the approximation error of the output of the k-th iteration. For example, $\|f - \tilde f\|_\infty = \|f_0 - \tilde f_0\|_\infty \le 2^{E(0)}$ and E(0) = −λ. Notice that E(k) = E(k − 1) + 4h(k − 1) + 6 lg d + 15. The solution of this recurrence relation provides us with the error bound. The approximation of all the root radii of a polynomial corresponds to solving the following task for all $s \in [n]$, where n is the degree of the polynomial (see [S82, MP13]). Task S. Given a positive ∆ and an integer $s \in [n]$, find a positive $\tilde r$ such that $\tilde r/(1 + \Delta) < r_s < (1 + \Delta)\tilde r$. Assume that we have solved Task S for a λ-approximation $\tilde f$ of a polynomial f such that $\|f - \tilde f\|_\infty \le 2^{-\lambda}$, and that we would now like to extend this solution to the solution of Task S for the polynomial f itself. The following lemma links the approximation errors of this extension with its Boolean cost and the value of λ. Lemma 3.3. Let $f \in \mathbb{C}[x]$ have degree n, with $\|f\|_\infty \le 2^\tau$ and with all its roots lying inside a disc in the complex plane centered at the origin with radius $2^\tau$.
Assume that we are given a λ-approximation $\tilde f$ of f, that is, $\|f - \tilde f\|_\infty \le 2^{-\lambda}$. Then we can solve Task S for f and all $s \in [n]$ by using $\tilde f$, with $1/\Delta \le d^{O(1)}$, in $O_B(n^3 + n^2\tau + n\ell)$. Proof. We wish to solve Task S for f. At first assume that $1 + \Delta \ge 2n$ and compute an $\tilde r$ satisfying (3.2), provided that λ is chosen appropriately. The application of the perturbation theorem (Theorem 3.1) to f and $\tilde f$ results in the bounds $|\alpha_j - \tilde\alpha_j| \le 2^{-\rho}$, where $\alpha_j$ are the roots of f and $\tilde\alpha_j$ are the roots of $\tilde f$. Now recall that $\rho = \lambda/n - \lg(n)/n - 2 - \tau$ and deduce the estimate (3.4). Next assume that the left or the right inequality of (3.2) does not hold, show a contradiction for a sufficiently small value of λ, and estimate for which larger values of λ these inequalities still hold; this yields the estimate (3.5). Both estimates (3.4) and (3.5) should hold in order to imply the desired bounds for $r_s$. Next we bound $\tilde r$. Notice that $\frac{\tilde r}{2(1+\Delta)} \le r_s \le 2^\tau$ for all s, since by assumption all the roots of f lie in the unit disc. Thus $\lg \tilde r \le 1 + \lg(1 + \Delta)$, hence $-\lg \tilde r > -1 - \lg(1 + \Delta)$. By combining this inequality and (3.4) we deduce that λ > 2n + lg n, and so if λ > 2n + lg n, then a solution to Task S for $\tilde f$ and $1 + \Delta \ge 2n$ implies a solution to Task S for f, under (3.2). Next we apply Dandelin's root-squaring iterations in order to decrease the assumed lower bound on ∆. Every iteration squares the root radii, and so achieving the bound $1 + \Delta \ge 2n$ after k iterations requires only the bound $1 + \Delta \ge (2n)^{1/2^k}$ beforehand. Assume that we have applied k iterations, where k is such that $2^k = N$. Let $f_k$ be the resulting polynomial. Since we work with approximations, it holds that $\|f_k - \tilde f_k\|_\infty \le 2^{-\lambda + O(N\tau + Nn)} = 2^{-\lambda_k}$ according to Lemma 3.2. So it suffices to consider a λ such that $\lambda > O(N\tau + Nn) + 2n + \lg n$. A term $c \cdot n$ should be added, for a small constant c, in order to compensate for the computation of the logarithms, the divisions, and the n-th roots. This does not alter the asymptotic bound. The number of flops in every Dandelin iteration loop is $O(n (\lg(n))^2) = \tilde O(n)$ (cf. Theorem 2.2). Thus we can solve Task S in $O_B(n^3 + n^2\tau)$ for a given s. Recall that the estimate of Theorem 2.3 on the number of flops applies to the solution of Task S for all s, that is, to the approximation of the radii of all the n roots of f. Hence for this task the bound is $O_B(n^3 + n^2\tau)$ as well. Remark 3.2. If the input polynomial f is known exactly, for example, if it has rational coefficients, then we can omit ℓ in the bounds of the previous lemma. Remark 3.3. The shift of the variable by $\sigma = 2^l$ implies the growth of the coefficient length τ by O(nl), and we can extend our cost and error bounds for the approximation of the distances of the roots from the point σ accordingly. The complexity of the Newton iterations Having computed sufficiently good approximations $\tilde r_j$ to the root radii $r_j$, we can apply Newton's iterations at all 2n candidate points $\pm\tilde r_j$, j = 1, . . . , n, in order to approximate the r real roots to a sufficient accuracy. If we assume that the intervals containing the real roots have a "proper" isolation ratio, then the cost of the application of the Newton operator is given by the following proposition, which is a slight modification of Lemma 10 and Remark 11 in [PT13, PT14c]; see also [PT14b]. Proposition 3.1. Let $f \in \mathbb{C}[x]$ be of degree n such that $\|f\|_\infty \le 2^\tau$. Assume that we are given initial approximations to the simple real roots, each lying in a properly isolated disc together with the root that it approximates, and let $2^{-L}$ denote the target error bound. Then the maximum number of bits needed by the Newton iterations is O(L + nτ + ℓ), and the total complexity of the Newton step is $O_B(n^2\tau + nL + n\ell)$.
The same asymptotic bound holds if we apply the Newton operator to approximate all the roots simultaneously, because each Newton step consists of an evaluation of the polynomial and its derivative. We can perform all these operations simultaneously by using multipoint evaluation [PT14a, Lemma 21] at the same asymptotic Boolean cost [PT14c, Theorem 14]. Newton's iterations converge with quadratic rate right from the start provided that the root and its initial approximation lie in the same 3n-isolated disc (see Theorem 2.7). How can we test whether this assumption holds for a given polynomial f and a real interval I? We can choose among various known proximity tests (see Section 2.2), and in our case, under the assumption that the real roots have no other roots of p(x) in certain neighborhoods, a natural strategy is to just apply the Newton operator. In this way we compute a sequence of real inclusion intervals $(a_h .. b_h)$, for h = 0, 1, . . . , where $(a_0 .. b_0) = I$ and $b_h > a_h$ for all h. We verify the inclusion property by checking whether $f(a_h) f(b_h) < 0$, and we either observe that h bisection steps decrease the width of the isolating interval by a factor of $2^h$ or otherwise conclude that the assumption on the isolation ratio is certainly violated. This test by action requires negligible extra cost. Remark 3.4. The paper [PT14a] provides a simple recipe for increasing the isolation ratio of a disc from $1 + 1/\log_2(n)$ to 3n, or even to $cn^d$ for any pair of real constants c and d, at very low arithmetic and Boolean cost. Tests for Real Root-finding with Algorithm 2.1 Our tests show that Algorithm 2.1 works quite well on polynomials without clustered roots. At the first stage of this algorithm, we combined the root-radii estimates of [BR14] (based on the algorithms of [B96] and [BF00]) with the numerically stable variant of Dandelin's root-squaring iteration from [MZ01], which exploits a tangential representation and renormalization of the coefficients. We ran numerical tests on polynomials of two types, having degree n = 64, 128, 256, 512, 1024. We ran all computations with the IEEE standard double precision and estimated the output errors by comparing our results with the outputs of the MATLAB function "roots()": I. $p(x) = p_1(x) p_2(x)$, where $p_1(x)$ is the r-th degree Chebyshev polynomial, r = 8, 12, 16, and $p_2(x) = x^{n-r} - 1$. The following tables display the numbers of iterations and the error bounds observed when we applied Algorithm 2.1 to polynomials of these two types. In many cases the number of iterations was small, and then reliable results can be expected even without renormalization. In such cases the application of FFT-based polynomial convolution would decrease the quadratic arithmetic complexity of an iteration to O(n log(n)). If our estimates showed that the norm of a root lay in the range [α, β] and if the root was real, then it had to lie in one of the two intervals [−β, −α] and [α, β]. For each of them we searched for a subinterval where the polynomial changed its sign. When we found r such subintervals, we output the number of iterations required for this, then applied five Newton's iterations initiated at the r midpoints, compared their outputs with the roots computed by the MATLAB root-finding function "roots()", and output the maximum error bound.
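For reference, the type I test polynomials can be generated as in the following sketch (ours); numpy.roots plays the role of the MATLAB function roots() that the paper uses as the accuracy baseline.

import numpy as np
from numpy.polynomial import chebyshev, polynomial as P

def type_I_poly(n, r):
    # p(x) = T_r(x) * (x^(n-r) - 1), coefficients lowest degree first
    t = chebyshev.cheb2poly(chebyshev.Chebyshev.basis(r).coef)
    q = np.zeros(n - r + 1)
    q[0], q[-1] = -1.0, 1.0            # x^(n-r) - 1
    return P.polymul(t, q)

p = type_I_poly(64, 8)
baseline = np.roots(p[::-1])           # np.roots expects highest degree first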
Conclusions One can extend our algorithm to the approximation of all complex roots of the polynomial p(x) by approximating the n distances to the n roots from two or three selected complex points. This would define inclusion domains for all roots, namely the intersections of the pairs or triples of the narrow annuli defined by the approximations of the distances from the two or three selected complex points to the n roots. The isolation assumption should be extended to exclude the proximity of any complex root to any other root (although we can weaken this assumption a little, in view of Remark 2.7), and in order to select the n roots, we should examine up to $2n^2$ or $2n^3$ intersections of the pairs or triples of the narrow annuli, computed based on Theorem 2.3.
2015-06-15T01:34:42.000Z
2015-01-22T00:00:00.000
{ "year": 2015, "sha1": "2b3631f7665c661e245395a7ce2e231400c801d9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "24781e1a34913f3188be0910709a900fc9726cf7", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
7961210
pes2o/s2orc
v3-fos-license
The KCTD family of proteins: structure, function, disease relevance The family of potassium channel tetramerization domain (KCTD) proteins consists of 26 members with mostly unknown functions. The name of the protein family is due to the sequence similarity between the conserved N-terminal region of KCTD proteins and the tetramerization domain in some voltage-gated potassium channels. Dozens of publications suggest that KCTD proteins have roles in various biological processes and diseases. In this review, we summarize the characteristics of the Bric-a-brac, Tram-track, Broad complex (BTB) domain of KCTD proteins, their roles in the ubiquitination pathway, and the roles of KCTD mutants in diseases. Furthermore, we review potential downstream signaling pathways and discuss future studies that should be performed. Introduction The human potassium (K+) channel tetramerization domain (KCTD) family of proteins consists of 26 members that share sequence similarity with the cytoplasmic domain of voltage-gated K+ channels (Kv channels) [1][2][3]. The KCTD proteins have relatively conserved N-terminal domains and variable C-termini. Comparative analyses of the conserved N-terminal sequence suggest the presence of a common Bric-a-brac, Tram-track, Broad complex (BTB) domain, which is also known as the POZ domain. The BTB domain is a versatile protein-protein interaction motif that facilitates homodimerization or heterodimerization. A variety of functions have been identified for the BTB domain-containing KCTD proteins. These functions include transcriptional repression [4,5], cytoskeleton regulation [6], tetramerization and gating of ion channels [7,8], and interaction with the cullin 3 (Cul3) E3 ubiquitin ligase complex [9,10]. In this review, we will summarize the homology between KCTD family members and some of the key features of KCTD proteins. We will also discuss the roles of mutant KCTDs in disease. BTB domain and homology between KCTD family members The human genome encodes approximately 400 BTB domain-containing proteins. The BTB domain is a highly conserved motif of about 100 amino acids and can be found at the N-terminus of C2H2-type zinc-finger transcription factors and in some actin-binding proteins [11]. BTB domain-containing proteins include transcription factors, oncogenic proteins, ion channel proteins, and KCTD proteins [2,[12][13][14]. Many BTB domain-containing proteins contain one or two additional domains, such as kelch repeats, zinc-finger domains, FYVE (Fab1, YOTB, Vac1, and EEA1) fingers, a zinc finger-like domain found in several proteins involved in membrane trafficking, or ankyrin repeats [15]. These special domains provide unique characteristics and functions to the BTB proteins. The BTB domain facilitates protein-protein interactions between KCTD proteins to allow self-assembly, or with non-BTB-domain-containing proteins to promote oligomerization [15]. The X-ray crystal structure of KCTD5 also revealed assemblies of five subunits, whereas tetramers had been anticipated [16]. A variety of functional roles of KCTD proteins have been identified in different signaling pathways, including sonic hedgehog (Shh) [17][18][19], Wnt/beta-catenin [20], FGF [1], and GABA signaling [21][22][23][24]. Alignment of the amino acids in the potassium tetramerization domains of all known KCTD proteins demonstrates that most KCTD proteins can be divided into seven groups by amino acid sequence. The A-group contains KCTD9, KCTD17, KCTD5, and KCTD2. The B-group contains KCTD10, KCTD13, and TNFAIP1.
The C-group contains KCTD7 and KCTD14. The D-group contains KCTD8, KCTD12, and KCTD16. The E-group contains KCTD11, KCTD21, and KCTD6. Members of the F-group include KCTD1 and KCTD15. The final group is the G-group, which contains KCTD3, SHKBP1, and BTB10. KCTD20, KCTD18, KCTD19, and KCTD4 do not belong to these seven groups (Figure 1). The evolutionary tree of the KCTD family proteins is similar to the grouping that Skoblov M et al. built [25]. We also suggest that homologous KCTD members may share similar functional roles in proliferation, transcription, protein degradation, regulation of G-protein coupled receptors, and other molecular or biological processes. KCTD proteins as adaptor molecules BTB-domain-containing KCTD proteins may act as adaptors for interactions between the Cul3 ubiquitin ligase and its substrates. Thus, BTB KCTD proteins may facilitate successful ubiquitination of substrate proteins [26]. Cul3 is one of seven human cullin proteins (Cul1, Cul2, Cul3, Cul4A, Cul4B, Cul5, and Cul7). Most cullins form complexes with substrate proteins by interacting with the BTB domains of adaptor proteins [3]. Thus, the BTB domain is important for the process of ubiquitination and protein degradation. Ubiquitination involves a three-step enzymatic cascade: ubiquitin is first activated by the ubiquitin-activating enzyme (E1), then transferred to a ubiquitin-conjugating enzyme (E2), and finally linked to the substrate by a ubiquitin ligase (E3) [27]. Various cellular functions, including cell proliferation, differentiation, apoptosis, and protein transport, involve protein ubiquitination and deubiquitination [28]. Bioinformatics and mutagenesis analyses have demonstrated that the best-characterized member of the KCTD family, KCTD11/REN, is expressed as two alternative variants, sKCTD11 and lKCTD11. Although both variants possess a BTB domain in the N-terminus, only the lKCTD11 form has a complete BTB domain. Intriguingly, this does not disturb the Cul3-binding activity of sKCTD11. KCTD11/REN also mediates histone deacetylase 1 (HDAC1) ubiquitination and degradation via cullin binding, resulting in reduced Hh/Gli signaling [18]. KCTD21 and KCTD6 have also been found to have the same features as KCTD11 [29]. Thus, KCTD21 and KCTD6 may also facilitate protein degradation and reduce cellular signaling through their associations with ubiquitin ligases. KCTD5 and KCTD7 have also been shown to function as substrate-specific adaptors for Cul3-based E3 ligases [3,30,31]. In addition, KCTD7 has been shown to increase potassium conductance due to increased proteasomal degradation of an unidentified substrate [30]. Thus, several members of the KCTD family function as critical adaptor molecules for ubiquitin-mediated protein degradation. This function ultimately results in the modulation of important downstream signaling pathways and biological processes. As can be seen from Figure 1, cullin interacts fairly widely with the KCTD family of proteins. In the future, the identification of novel KCTD substrates will help us to understand the function of the Cul3-BTB complex. KCTDs and disease KCTD proteins have essential roles in proliferation, differentiation, apoptosis, and metabolism. Improper regulation of KCTD genes has been associated with various diseases, including medulloblastoma [32], breast carcinoma [33], obesity [34,35], and pulmonary inflammation [36]. Many studies show associations between mutations in individual KCTD genes or allelic loss of KCTDs and specific diseases.
For example, a homozygous mutation (R99X) in exon 2 of KCTD7 has been described in progressive myoclonic epilepsy (PME) [37]. A second homozygous missense mutation (R94W) in exon 2 of KCTD7 has also been found in PME [38]. In addition, a heterozygous missense mutation (R84W) and a large heterozygous deletion of exons 3 and 4 of KCTD7 have also been reported in patients with PME [30,31]. Allelic deletion of human KCTD11 at chromosomal location 17p13.2 has been found in medulloblastoma [19,39]. In addition, gene copy number variants (CNVs) of KCTD13 mapping to chromosomal location 16p11.2 are considered to be major genetic causes of macrocephaly and microcephaly. Overexpression of KCTD13 induces microcephaly, whereas suppression of the same locus results in a macrocephalic phenotype [40]. Missense mutations in KCTD1 occur in Scalp-ear-nipple (SEN) syndrome [41]. Single nucleotide polymorphisms (SNPs) of KCTD10 (i5642 G->C and V206V T->C) are associated with altered concentrations of HDL cholesterol, particularly in subjects with high levels of carbohydrate intake [42]. KCTD mutants affect proliferation, differentiation, apoptosis, and metabolism in different tissues. For example, the CNVs of KCTD13 affect the balance of proliferation and apoptosis in neuronal progenitor cells. In addition, deletions in KCTD11 abrogate inhibition of Shh signaling at the outer-to-inner external granule layer (EGL) granule cell progenitor (GCP) transitions by affecting expression of Gli1 and Gli2 [19]. Deletions in KCASH, KCTD21, or KCTD6 block interactions with ubiquitination enzymes, preventing degradation of HDAC1. This leads to increased acetylation of Gli1 and increased Hh/Gli signaling, which drives uncontrolled proliferation and the development and progression of medulloblastoma [17,39]. Not only can mutant KCTDs cause disease; changes in KCTD expression are also involved in different diseases [22,[43][44][45][46]. All of the diseases related to KCTD proteins are listed in Table 1 for convenient reference in further studies. Table 1 (excerpt). Disease or phenotype | KCTD protein | Findings | Reference: Altered HDL cholesterol concentrations | KCTD10 | SNPs associated with HDL levels, particularly under high carbohydrate intake | Ref. [42]. Others, influence on EPO production | KCTD2 | Production of erythropoietin (EPO) was significantly inhibited when CEBPG, KCTD2, and TMEM183A were knocked down | Ref. [44]. Liver injury in HBV-ACLF | KCTD9 | Overexpressed KCTD9 activates NK cells in the peripheral blood and liver in HBV-ACLF, which contributes to liver injury | Ref. [45]; Ref. [46]. Chronic tinnitus | KCTD12 | Risk modifier | Ref. [22]. Scalp-ear-nipple (SEN) syndrome | KCTD1 | A missense mutation in KCTD1 causes SEN syndrome | Ref. [41]. Conclusion There are some features of KCTDs that have not been reviewed in this article. For example, KCTD8, -12, -12b, and -16 form functional oligomers with the GABAB receptor, resulting in the modulation of important signaling pathways [21][22][23][24][47]. In addition, the PDIP1 family members (KCTD10, KCTD13, and TNFAIP1) are tumor necrosis factor-α-inducible proteins that can stimulate the activity of DNA polymerase in DNA replication and repair pathways [48]. Furthermore, interactions between KCTD1, KCTD15, and AP-2 repress the transcriptional activity of AP-2α [13]. Finally, KCTD1 has been shown to interact with PrPC [49]. In this review, we summarize the BTB characteristics of the KCTD proteins, their roles in the ubiquitination pathway, and the relevance of KCTD mutations in various diseases. The review highlights the possibility that cullin-KCTD interactions target substrates for ubiquitin-dependent degradation.
If BTB-containing KCTD proteins can assemble into Cul3-based complexes, we expect that KCTD proteins can recruit substrates into the ubiquitin system. We specifically discuss the role of KCTD1 in the ubiquitination pathway via its interaction with Cul3. We also hypothesize that KCTD1 mediates entry of the prion protein into the ubiquitination pathway, and that deregulation of KCTD1-mediated prion protein ubiquitination might be both a cause and a result of prion disease. Furthermore, we speculate that members of the same subgroups may have similar roles in biological processes or molecular signaling pathways. We believe that further investigations into the functions of individual KCTD family members are warranted, particularly within the context of the specific diseases described here. Competing interests The authors declare that they have no competing interests. Authors' contributions ZP, YX, and GS co-wrote this review. All authors read and approved the final manuscript.
2017-06-28T01:39:27.967Z
2013-11-24T00:00:00.000
{ "year": 2013, "sha1": "3bce0cf184a858293442d878dbfa5be5be107f66", "oa_license": "CCBY", "oa_url": "https://cellandbioscience.biomedcentral.com/track/pdf/10.1186/2045-3701-3-45", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "671920a6a7ea19c92728966d6165f1fe8c1a9482", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
269882431
pes2o/s2orc
v3-fos-license
Accuracy of marine gravimetric measurements in terms of geodetic coordinates of land reference benchmark Highlights Abstract Introduction In recent years, efforts to create high-resolution, high-precision maps of gravity field functionals have been made in various research facilities around the world. The results of these efforts are dedicated to application in ultra-precise inertial navigation systems (INS) [1][2][3]. The precision and resolution requirements for gravity measurements largely depend on the purpose of these measurements: geological survey or inertial navigation. In this article, we focus on gravity data collection for use in inertial navigation systems. The compensation of gravity disturbances plays an important role in improving the positioning accuracy of inertial navigation algorithms. The requirements for gravity data quality are highest in that case, since a few mGal of error in the gravity value can translate into a horizontal position error of a few meters after a time span of minutes. It should be noted that all errors in an INS have a tendency to accumulate (growth is logarithmic). Because of that, the mentioned few-meter inaccuracies will quickly rise to a level potentially dangerous for marine navigation safety. In case satellite navigation in the Baltic Sea area is disrupted, the reliability of alternative navigation systems will play a more important role in providing safety at sea. The presented manuscript describes the influence of the precision and reliability of land reference points on three-dimensional marine gravity data. Considering the current theoretical accuracy of ship-borne gravity campaigns (an uncertainty of 1 mGal or lower), the consistency and reliability of the pier binding are crucial. The issue of linking measurements at the point of gravimeter placement on the ship to the absolute value of gravity on the port pier is presented in the literature [4,5]. However, the verification of the coordinate values of the gravimetric network points from which the gravity acceleration values are transferred to points on the harbour pier is omitted in the literature. From this perspective, the presented analysis results are an important contribution to obtaining high-quality gravimetric measurements [6]. Gravimetric measurements in sea areas are most often performed with relative gravimeters, whose measurements need to be linked to points located onshore [7]. The gravity values of these points are transferred from the points of the national gravimetric control network and constitute the basis for the analysis and calculations [8]. While conducting gravimetric measurements in marine areas, the team encountered a case of an incorrectly stated gravity reference station catalogue value. The study of this case allows us to present a real case in which an error in the land reference point's ellipsoidal height can influence the transferred gravity value. Despite the fact that for the relative measurements a pair of relative CG5 gravimeters was used (Fig. 1A), this has no influence on the reduction of the systematic error of the gravity network reference point. Experience in performing marine gravity campaigns shows that the error resulting from incorrect referencing parameters during offshore measurements is not detectable. This is because it does not create a uniform offset in the campaign data, as one may expect, but spreads nonuniformly over the campaign area.
That situation is particularly difficult when the data are planned to be used in products dedicated to ultra-precise inertial navigation, which is sensitive to regional field disturbances [9]. To the best of our knowledge, the methodological validation of the reference point of a marine gravity campaign, as performed by our team, has not yet been described in the literature. We show that it is worth checking the values assigned to the absolute gravity reference points in order to eliminate the resulting errors. Such a procedure should be a rule, especially when data collected during marine campaigns are dedicated to use in ultra-precise inertial navigation systems. The determination of the 3D coordinate values of the land reference network points, and especially the precise determination of the ellipsoidal height offset, was possible due to the application of long static GNSS measurements on the absolute gravity network point (Fig. 1B) and, simultaneously, on three other geodetic class reference points. The technique of data acquisition during the measurements and its post-processing analysis is presented. Geodetic measurement instrument improvements ensure higher accuracy when verifying horizontal and vertical reference systems [10]. The quality of the results obtained for the three-dimensional position of benchmark 5403 (POLREF-GORA DONAS) was achieved due to the utilisation of data collected by nearby ground-based augmentation system (GBAS) stations [11,12]. The application of the Global Navigation Satellite System technique has revolutionised maritime gravimetric measurements [13,14]. This technique has not only increased the accuracy of positioning measurements but also made it possible to increase the frequency of measurements. In addition, the GNSS technique makes it possible to obtain the data necessary to eliminate interference resulting from the Eötvös effect, the Harrison effect, and cross-coupling from the recorded signals [15]. It should be noted that in the 1970s, measurements made with an accuracy of 2 mGal (1 mGal = 10^-5 m/s^2) were considered truly accurate. Nowadays, gravimetric measurements at sea are feasible with an error of less than 1 mGal [16]. Achieving such accuracy requires a control check of all measurement stages, including the catalogue value of the reference station. Materials and methods During the preparation of the measurement campaign, the marine gravimetric measurement team of the Gdańsk University of Technology carried out a two-day relative gravity survey. The measurements were intended to determine the force of gravity at a point located on the quay in the port of Gdynia, which was a reference point in the campaign. Two Scintrex CG5 relative gravimeters were used to transfer the gravity (Fig. 1). A two-day relative gravity survey was carried out with the two CG5s, starting from benchmark 5403 (POLREF-GORA DONAS) to the point located on the quay in the port of Gdynia, where ORP "HEWELIUSZ" was moored, back and forth along the same route. As a mid-station, the tidemeter point of the Gdynia port was used. Thanks to this, we tied our relative measurement twice and obtained four measurement ranges. The same approach was adopted when transferring values from point 363 (GDANSK ABS) (ID 5418234342.000). In this case, two intermediate points were established due to the distance. The values listed in the catalogues are assumed to be actual values, determined with a supposed accuracy, and their validity is rarely checked [17].
Gravimetric measurements carried out in the eastern part of the Southern Baltic Sea area are linked to a point in the port of Gdynia. The difference between the values transferred from these points to the Gdynia shore point was 0.3 mGal. Due to the design of satellite navigation systems, time measurement stability has a crucial impact on accuracy [21]; thus, a measurement strategy was designed to address this issue. After analysing GNSS measurement methods such as real-time measurement [27][28][29], DGNSS measurements [30,31], and Precise Point Positioning [32][33][34], the static method [35], most often used to establish geodetic networks and for geodynamic measurements, was selected. Satellite observations were made with receivers placed over the points. The measurement sessions lasted several hours. This method allowed for high accuracy of long baselines (coordinate difference vectors). The locations of the points in global geodetic coordinates were obtained in post-processing using Leica Geo Office v8.4 and proprietary scripts. The calculation process also used data obtained from observations made at points of known location (GBAS) [36]. As part of the measurement campaign, GNSS signals were recorded at four points, located in Rozewie, on Góra Donas, and in Gdańsk and Gdynia, as presented in Figure 3. A similar post-processing procedure was performed using HxGN SmartNet reference stations. The reference stations Tczew, Reda, Nowy Dwór Gdański, Lębork, Kartuzy, Jastrzębia Góra, Hel, and Gdańsk were used. Data for the reference stations in the universal Rinex data format and data from the service providers on the base station antennas were collected. Precise ephemerides from the first stage were used. The post-processing was carried out using the same parameters as in the case of the ASG EUPOS network. As a result of the post-processing, 128 baselines were counted. The control process was conducted with the "GPS Loop Misclosure" option, resulting in the detection of 104 meshes, in which 45 vector components did not meet the assumed criteria. Strict adjustment was performed using the least squares method, which yielded the coordinates of the measured points in the global geodetic reference frame. Figure 3 shows the baselines obtained in the calculation process using the ASG EUPOS system (Figure 3b) and HxGN SmartNet (Figure 3c). Results An analysis of the results of the adjustment process was carried out to examine the accuracy achieved. A fundamental aspect of this process is the degree of redundancy in the observed network as a result of the measurement. An excess of observations has a direct impact on the quality of the adjustment, the achievable accuracy, and, thus, the results obtained. The graphs below show the redundancies in the adjustment process using the ASG EUPOS (Fig. 4a) and HxGN SmartNet (Fig. 4b) systems. The differences between the coordinates calculated using data from the ASG EUPOS and HxGN SmartNet systems were very small, ranging from 0.05 to 0.21 m in ellipsoidal height, 0.03 to 0.020 m in latitude, and 0.02 to 0.013 m in longitude. Discussion After the measurement, calculation, and control process, precise products of the IGS (International GNSS Service) were used [44]. Post-processing of the GNSS data was carried out using original scripts. Points with height accuracies ranging from 0.01 m to 0.12 m were assumed to be adequate for the processing of gravimetric measurements.
The presented case allowed us to consider the issue of using gravimetric data for gravity compensation in ultra-precise inertial navigation. Until the end of the 20th century, gravity data were mainly used in ultra-precise inertial navigation systems (INS) in defence applications. In the 21st century, due to the intensive evolution of autonomous vehicles, these systems began to be used to support this technology [45,46]. The constant improvement in the reliability of inertial sensors [47] has contributed significantly to this development. The relation between gravity on the geoid and on the earth's surface, according to [49], can be approximated by the Taylor expansion in equation 1:
$$g = g_0 + \frac{\partial g}{\partial H} H + \frac{1}{2} \frac{\partial^2 g}{\partial H^2} H^2 + \cdots \qquad (1)$$
where $g_0$ is the gravity on the geoid, $g$ is the gravity on the earth's surface at a point at the height $H$ over the geoid, and $\partial g / \partial H$ is the vertical gradient of the gravity. For small values of $H$, the linear term in equation 1 is sufficient, and the remaining terms can be neglected. If we assume that there are no masses (or that the existing mass can be neglected) between the geoid and the point of measurement, equation 1 can be rewritten as equation 2:
$$g_0 = g + F \qquad (2)$$
where $F$ is the free-air reduction defined by equation 3:
$$F = -\frac{\partial g}{\partial H} H \approx 0.3086\,H \ \mathrm{mGal}, \quad H\ \mathrm{in\ metres} \qquad (3)$$
By the assumption that $\partial g / \partial H$ is the normal free-air gradient, the free-air anomaly can be defined by equation 4:
$$\Delta g_F = g + 0.3086\,H - \gamma \qquad (4)$$
where $\gamma$ is the normal gravity. Assuming that the difference in the ellipsoidal height of the gravimetric control point used to transfer the gravity to the ship's mooring berth is 0.321 m, and that the free-air gravity gradient is 0.3086 mGal/m, the spread in extreme cases over the area covered by the measurements is 0.346 mGal. The peak-to-peak amplitude of the difference is approximately three times higher than the estimate based on a simple multiplication of the height offset by the standard gradient value (0.321 m x 0.3086 mGal/m, approximately 0.099 mGal). This difference does not create a uniform offset over the entire area, and the distribution of the free-air anomaly values is strongly differentiated.
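As a quick check of the numbers above, the following sketch (ours) evaluates the free-air anomaly of equation 4 and the naive offset produced by the 0.321 m height error; the roughly threefold larger observed spread of 0.346 mGal comes from the interpolation effects discussed below.

FREE_AIR_GRADIENT = 0.3086  # mGal per metre, the standard free-air gradient

def free_air_anomaly(g_mgal, h_m, gamma_mgal):
    # equation 4: free-air anomaly = g + 0.3086 * H - normal gravity
    return g_mgal + FREE_AIR_GRADIENT * h_m - gamma_mgal

dH = 201.974 - 201.653                 # ellipsoidal height error of the benchmark, m
print(FREE_AIR_GRADIENT * dH)          # about 0.099 mGal, vs the 0.346 mGal spread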
The explanation of this effect can be attributed to the way in which the grids of free-air gravity are made. The commonly employed procedure is to perform the interpolation of the scattered gravity anomaly data by least-squares collocation (Kriging), as discussed in more detail below. Conclusion Gravity data that are intended to be used in ultra-precise inertial navigation systems should be as accurate as possible. When the data are recorded with a relative gravimeter, each stage of the field measurements must be analysed in terms of the possibility of a measurement error. The only yet important effect of such a procedure was lowering the uncertainty of the gravity difference estimation between the point of the gravity absolute network and the reference point on the pier, in close proximity to the moored measurement vessel. By relating the pier reference point to a few gravity absolute network control points, the final gravity value for the reference point was established. Spirit levelling between the reference pier point and the marine gravimetric sensor on the vessel board was performed to ensure the control of the ellipsoidal heights estimated for the marine gravimeter by the vessel's GNSS positioning system. The gravity disturbance along the measurement vessel trajectory has been calculated based exclusively on the value of the Gdynia pier reference point. The marine gravimeter utilised for the measurements at sea was an MGS-6 Micro-g LaCoste & Romberg. In this communication, it is presented how, in the marine area covered by gravimetric measurements, an erroneous parameter of the reference point propagates to the results of the recorded signals. The influence of the bias in the reference point value has a complex effect on the values on the measurement lines because it interacts with other campaigns (which are largely tied to independent pier values) during the data processing and line levelling. Fig. 1 . Fig. 1. Gravimeters and GNSS receivers at points of the measurement span: A - POLREF-GORA DONAS, B - Gdynia mareograph. Acquisition of gravimetric data for the development of high-precision and high-resolution maps of gravity acceleration field functionals required precise positioning of the vessel during the campaign. Positioning was obtained by recording raw GNSS data from receivers installed on the vessel and data from reference stations along the Polish coast and along the southern coast of Sweden. It is extremely difficult to determine the offset error of gravimetric registration signals with such accuracy during a sea campaign. This article is organised as follows: Section 2 describes the input data, the details of the field measurement campaign conducted to determine the values of the parameters of the national control point network, and the data processing methods used. Section 3 presents the results of the accuracy calculations: a comparison of the results between two independent systems, ASG-EUPOS and HxGN SmartNet, and a description of how control point values affect the distribution of gravimetric values recorded in the study area. Section 4 presents a summary and conclusions.
The gravity value was transferred from the closest absolute base points of the higher-order gravimetric control network: point 5403 (POLREF-GORA DONAS) and point 363 (GDANSK ABS) (ID 5418234342.000). The benchmarks included in the measurements are concrete poles sunk 1 m into the ground. On the surface of the poles, there are concrete slabs of 0.6 x 0.6 x 0.12 m in which metal pins are mounted. The value at the control network points was obtained from the National Register of Basic Geodetic, Gravimetric and Magnetic Networks, managed by Poland's Head Office of Geodesy and Cartography. It was decided to carry out a land campaign to check the validity of the 5403 POLREF-GORA DONAS point, which was used as the starting point. During the measurement campaign, it was decided to compare the points 0301 EUREF = EUVN-PL04 ROZEWIE, 5403 POLREF-GORA DONAS, and the mareograph points in the ports of Gdynia and Gdańsk. The ellipsoidal height was adopted for the comparisons at individual points because this height is used in the post-processing of gravimetric data recorded during sea campaigns. Reference values need to be validated to check whether they are still up to date. One of the control measurement techniques is the GNSS technique [18], which involves the use of the American Global Positioning System (GPS), the Russian Global'naya Navigatsionnaya Sputnikovaya Sistema (GLONASS), the European Galileo system, the Chinese BeiDou system, or the Japanese Quasi-Zenith Satellite System (QZSS). These systems permit the determination of the position of a measured element in three-dimensional space by assigning coordinates in a global geodetic reference frame or, after performing a special transformation, in a local reference frame [19,20]. Taking into account the characteristics of the reference point, the techniques and technologies for the satellite measurements were selected to achieve the highest accuracy. GPS and GLONASS were selected from among the currently operating GNSS systems. Ground-based systems were also used in the measurements, specifically HxGN SmartNet and ASG-EUPOS, included in the Ground-Based Augmentation System (GBAS). ASG-EUPOS was launched in 2008 and is run by the Head Office of Geodesy and Cartography. Thus, it provides the official implementation of the European Terrestrial Reference System 1989 (ETRS89) in Poland and provides observations of the four satellite systems GPS, GLONASS, Galileo, and BeiDou [22]. The ASG-EUPOS system includes 107 Polish stations and 22 stations located in the territories of neighbouring countries to ensure full coverage of services in border areas. The equipment used to build the network comes from various manufacturers; the most common are Leica and Trimble receivers. Through ASG-EUPOS, it is possible to access the following services: NAWGEO, KODGIS/NAWGIS, and POZGEO/POZGEO D [23,24]. HxGN SmartNet is a commercial network of reference stations, and its role is similar to that of the ASG-EUPOS system (Fig. 2). The main difference is the station density and the exclusive use of in-house hardware. The network includes 172 reference stations in Poland and 21 stations in neighbouring countries. Using HxGN SmartNet, we have access to RTK, RTK-RTN, VRS, and post-processing services [25,26]. Fig. 2 . Fig. 2.
The area of the Southern Baltic Sea covered by the gravimetric measurements, tied to the reference point established in the port of Gdynia; distribution of the ASG-EUPOS reference stations (red) and HxGN SmartNet (blue). Fig. 3 . Fig. 3. Distribution of the reference stations of the two service providers in relation to the measurement points (a), and the baselines obtained in the calculation process based on the ASG-EUPOS (b) and HxGN SmartNet (c) systems. The point located in Rozewie is the EUREF-POL network point, the point on Góra Donas is the POLREF point, and the mareograph points are in the ports of Gdynia and Gdańsk. The measurements started on 27 June 2019 at 17:30 and continued until 00:00. At approximately 9 pm, the measurement was stopped and restarted, resulting in two measurement sessions lasting approximately 3 h each. The locations of the control points were determined by calculating the coordinates in the global geodetic reference frame and then comparing them with the catalogue values. To check the current catalogue data, post-processing was conducted using two independent reference station systems, ASG-EUPOS and HxGN SmartNet. Firstly, post-processing was performed using the ASG-EUPOS reference stations (Gdańsk, Elbląg, Kościerzyna, Redzikowo, Władysławowo, Starogard Gdański). Observation data (in the universal Rinex data format) for the reference stations were obtained using the POZGEO D service designed for this purpose. Data for the reference station antennas were also collected to check and, if necessary, correct the Rinex files. Errors in the type and parameters of the reference station antennas of Gdańsk, Elbląg, Kościerzyna, and Redzikowo were found, and corrections were applied. Fig. 4 . Fig. 5 . Fig. 4. Redundancy based on the following systems: (a) ASG EUPOS, (b) HxGN SmartNet. The graphs in Fig. 4 indicate that in the calculation process, a higher relative frequency of baselines with redundancy numbers of 90%-100% was observed for the HxGN SmartNet stations, compared to a relative frequency of only 33% for the same redundancy class in the case of the ASG EUPOS system. Many elements influence the redundancy factor, taking into account the analysed factors: the baselines between the reference stations and the observed points, the geometry and interdependencies between these points and the base stations, and the number of baselines obtained. The "GPS loop misclosure" control allowed some baselines to be eliminated because they did not meet certain accuracy criteria. The mentioned elimination of baselines leads to different redundancies of the measurement points. The Leica Geo Office program and proprietary scripts were used to determine the accuracy of the calculations. A detailed statistical analysis of the results and a comparison of the catalogue values with the values of the control measurements were carried out. At three points (EUREF Rozewie, the mareograph of Gdynia, and the mareograph of Gdansk), slight discrepancies between the measured values (0.02 m) were noted, within the limits of the measurement and calculation errors. On the other hand, at point 5403 (POLREF-GORA DONAS), during the first and second measurement sessions, a significant difference was noted between the ellipsoidal height from the catalogue and that of the control measurement. The catalogue value states a height of 201.653 m, whereas the height based on the ASG EUPOS reference station network was 201.977 m in session 1 and 201.970 m in session 2. The values based on the network of HxGN SmartNet reference stations were 201.972 m in session 1 and 201.976 m in session 2.
The average difference in the ellipsoidal height from the two campaigns in relation to the data from the ASG EUPOS reference station network was 0.3205 m, and the average difference in relation to the data from the HxGN SmartNet reference station network was 0.321 m. After averaging the results obtained from both systems, a difference in the ellipsoidal height of 0.321 m was used. The value of the discrepancy in the ellipsoidal height is so significant that it appeared essential to check how it is transferred to the gravimetric measurements recorded in the sea campaign. Gravimetric signals recorded during the measurement campaign carried out on the ORP "Heweliusz" ship from 07 to 10 June 2021 were used as a reference in the analysis. The measurement campaign was carried out in the eastern part of the Southern Baltic Sea. During the campaign, gravimetric measurements were made on planned measurement profiles. Analyses were carried out consisting of computing the distribution of the free-air anomalies in the area covered by the gravimetric measurement campaign for two values of the ellipsoidal height of the POLREF-GORA DONAS benchmark. For the first distribution, the benchmark values g = 981405.9225 mGal and H = 201.653 m were assumed, and for the second distribution, g = 981405.9225 mGal and H = 201.974 m were taken. Subsequently, the differences between these values were calculated, as shown in Figure 7. Fig. 7 . Fig. 7. ORP "Heweliusz" measurement campaign carried out from 07 to 10 June 2021: (A) measurement profiles, (B) free-air anomaly difference distribution. The EGG2015 quasigeoid full-resolution file (1.0' x 1.0') was used in the analyses. The values refer to GRS80 and the zero tide system. The entire data analysis was performed assuming the accuracies of the measurements in the individual phases of the campaign obtained by the Gdańsk University of Technology team. The analyses included offset measurements of the gravimeter, IMU, and GNSS antenna made on the ship using a tachymeter. INS gravity compensation is related to the accuracy and resolution of the horizontal gravity disturbance. In this context, the requirements for the resolution and precision of gravity data in marine areas are important to consider. It should be noted that the error of such data should not exceed 1 mGal [48]. Let us consider our case in the context of meeting such a requirement: how precisely does the accuracy of the gravity at a reference point located on land determine the distribution of this value over the areas covered by the marine survey campaign? This distribution is illustrated using the example of free-air anomalies. The grids were produced by interpolating the scattered gravity anomaly data by least-squares collocation (Kriging) using a 2nd-order Markov covariance model. The basic principle of Kriging is the estimation of the unknown value $\hat{z}(s_0)$ at the grid point $s_0$ based on the N values $z(s_i)$ of known points scattered in space, with equation 5:
$$\hat{z}(s_0) = \sum_{i=1}^{N} \lambda_i\, z(s_i) \qquad (5)$$
where $\lambda_i$ are the weights. The weights are estimated based on the semivariogram [49] constructed from all measurement data. Because of that, the final grid value depends on the statistical properties of all data points in the considered area. Due to the continuity of the gravitational field, a grid of free-air anomalies is never constructed from data collected during a single campaign. The data from other campaigns are used to densify and pad the current one. In such a case, an unremoved offset between campaigns will create a non-optimal semivariogram, further increasing the error beyond what could be expected based on the knowledge of the offset of the pier reference point. Measurement teams are not able to check all control points used as references, so they must assume catalogue values to be reliable. In the case discussed, when transferring the gravity values from two points of the geodetic control network, a divergence in the results of 0.3 mGal was observed. It was decided to check the catalogue parameters of one of the control points. Analyses of the measurement results showed differences between the catalogue and the actual values. Applying these data to the gravimetric naval campaign showed how the results were affected. The following conclusions are based on the results of the analyses presented in the article. • Measurement campaigns carried out with dynamic gravimeters at sea are costly projects. For this reason, special care should be taken when establishing the reference to the national geodetic networks for 3D positions and gravity. If possible, the values obtained for the reference points determined in the port should be independently verified using references to the largest possible number of points from the absolute gravity network. • The occurrence of a gravity value offset in one measurement campaign may be difficult to notice when the number of intersection points with other campaigns is small. It should be noted that the case discussed in the article concerns a campaign that begins and ends in one port and is related to one point. Assuming the diligence of the team's work, the offset values introduced by errors in the linking are small. However, if they are not removed, they lead to non-uniform anomaly deviations in the study area, mainly related to the impact of data from neighbouring measurement campaigns on the values obtained in the resulting grid. This may be one of the sources of errors in the distribution of the functionals of the gravity field. Accurate determination of these functionals is crucial for ultra-precise inertial navigation. It was noted that, according to [2], a gravity error of magnitude 0.3 mGal will, after one Schuler period, generate a horizontal error in the INS-estimated position of up to 4.5 m. Therefore, the problem of the existence of small gravity offsets in the links should not be underestimated for future applications of the data in inertial navigation. • The demonstrated effect of changing the vertical coordinate of the gravimetric reference point on the distribution of the free-air correction did not show a uniform distribution trend.
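To illustrate the kriging estimator of equation 5 discussed above, here is a minimal ordinary-kriging sketch (our illustration; the simple spherical semivariogram is a stand-in for the 2nd-order Markov covariance model named in the text, and all parameter values are arbitrary).

import numpy as np

def spherical(h, sill=1.0, rng=50.0):
    # an illustrative semivariogram model gamma(h)
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def ordinary_kriging(xy, z, x0):
    # Estimate z at the grid point x0 from N scattered samples (equation 5).
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    A[n, n] = 0.0                              # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - x0, axis=1))
    lam = np.linalg.solve(A, b)[:n]            # the weights of equation 5
    return lam @ z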
2024-05-19T16:00:46.869Z
2024-05-15T00:00:00.000
{ "year": 2024, "sha1": "01adc5a22e64c78047ede16d6b70503abe44826b", "oa_license": "CCBY", "oa_url": "https://ein.org.pl/pdf-188592-109992?filename=Accuracy%20of%20marine.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "32867ea8b2a8269bded57369d949baf468767536", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
236702780
pes2o/s2orc
v3-fos-license
Complex event processing system for IoT greenhouse : The greenhouse is an important part of facility agriculture and a typical application scenario of modern agricultural technology. The greenhouse environment is characterized by nonlinearity, strong coupling, large inertia, and multiple disturbances; with its many environmental factors, it is a typical complex system [7]. In smart greenhouses, control commands are mostly triggered by complex events carrying multi-dimensional information. In this paper, by building the aggregation structure of complex events in the greenhouse, CEP technology is applied to the greenhouse as a whole. The core innovations of this paper are as follows. First, through an analysis of the information transmission process in the greenhouse, combined with the characteristics of the scene, a CEP information structure with predictive modules is formed, which is conducive to the popularization and application of CEP technology in the agricultural field. Second, we point out the importance of extreme conditions in greenhouse environment prediction for model evaluation; by improving the loss function of the machine learning algorithms, the prediction performance of a variety of algorithms under such conditions is improved. Third, by applying CEP technology to intelligent greenhouse control scenarios, a practical complex event processing system for greenhouse control is formed.

Introduction

With the widespread application of Internet of Things technology in agricultural production, greenhouse data monitoring and signal control technology has become relatively mature [1,2]. However, due to the coupling and hysteresis of greenhouse regulation [3] and the diversity of crops' environmental requirements, developing highly reliable and easy-to-use automatic control methods remains the biggest difficulty in current greenhouse research [4,5].

Comparison of different routes of greenhouse control technology: Fixed-value control is one of the most widely used greenhouse control technologies [5,6], but its accuracy is low, its energy consumption is large, and oscillation is evident [3]. Direct decoupling control suffers from low decoupling accuracy due to factors such as the difficulty of obtaining model parameters and of observing some variables [7]. Fuzzy neural networks require a highly accurate original training set; their application scenarios are affected by the limitations of the optimization function, and the traditional algorithms easily fall into local minima [8]. Expert systems integrate expert-level knowledge in special fields for inference control [9,10]; they greatly improve the degree of intelligence in agriculture and diversify system functions, but manual adjustment across different environments is difficult, resulting in poor versatility and inaccurate control [7]. Most of the above control methods are analyzed and adjusted according to one or a few characteristics of the greenhouse; used alone their effect is not good, so they are often combined in practice to improve the robustness of the system. However, as the information dimension increases, the utilization rate of the system decreases, neural networks and direct decoupling methods gradually become bottlenecks, and the difficulty of coordinating the rules of expert systems also increases greatly.
It is difficult to form a unified and coordinated control system with some types of traditional control methods as the core, and the complexity of control also makes it difficult for workers to participate effectively in greenhouse regulation and control.

The superiority of CEP: original human control experience is transformed, according to a specific process, into EPAs (Event Processing Agents) for event reception and generation. Each EPA is relatively independent and has a flexible structure. As a whole, the system performs refined control and scheduling by identifying specific greenhouse scenarios, integrates knowledge from different fields, reduces operating errors, and has great advantages in achieving effective early warning and human-machine coordination towards the goal of unmanned management.

Current status and problems to be solved: In agriculture, Bertha et al. have proposed a complete set of methods for building the peripheral system of a complex event processing engine [20], Li et al. made a detailed analysis of the spatiotemporal event model in agriculture [14], and Deng et al. made relevant attempts at using non-CEP-structured timed automata in greenhouses [16]. However, the above research did not involve the production of complete event processing rule sets, made rules only for certain special environmental parameters, and did not form a universal and complete event aggregation structure and CEP rule set production method that can give full play to the advantages of complex event processing technology.

Related technology

CEP (complex event processing) is derived from active database technology [12] and is an important method for processing multiple sequential event streams [13]. It can make full use of information from different angles to make decisions on the current situation, and it has powerful asynchronous decoupling and situation analysis capabilities. The decision-making method mainly depends on how the EPAs (Event Processing Agents) are connected and on how the EPA aggregation logic is realized. There are very few applications of complex event processing in the field of greenhouse control.

Analysis of Complex Events in the Greenhouse

The focus of research on the use of CEP in greenhouses is to define a common EPA connection method and the types of events to be transmitted, that is, the event aggregation structure. There are four main information objects in the greenhouse: controller, worker, environment, and processing engine. Their relationship is shown in Figure 1 (the main information roles and information relationships of the greenhouse). With the CEP engine as the core, message generation can be divided into four main processes: the issuance and feedback of control commands, the generation of environmental requirements, the judgment of the control effect of the controller, and the scheduling of control methods. The issuance and feedback of control commands is analyzed in detail below.

The basic process of control command issuance in the greenhouse mainly includes the following stages: ① when the current environment state is inconsistent with the control target, the processing engine receives a new environmental change demand, recorded as End; ② according to the environmental situation, the processing engine sends a series of control commands to the controller,
denoted as Aci; ③ after the controller receives Aci, it returns the controller state to the processing engine, denoted as CSI; at the same time, the controller tries to execute the instructions from the processing engine, which has an actual impact on the environment, denoted as Actual Control; ④ meanwhile, the processing engine informs workers of the current system working status, such as environmental warning information; this process is denoted as Ewi. From the above analysis, the UML sequence diagram of the basic process of feedback control is obtained, as shown in Figure 2.

For the final system implementation, UML sequence diagrams need to be converted into UML state diagrams. Recording the control method as CS:Method, the UML state diagram of the basic process of feedback control can be obtained, as shown in Figure 3. For a control system with multiple control methods, the scheduling and operation of the other control methods need to be considered among the control methods. If the identification of the called control method is recorded as Cmt, then for the CEP engine the causal relationships generated by events, that is, the aggregation relationships, can be represented by the event state transition diagram shown in Figure 4. The color of each type of information in the figure represents its source, as shown in the legend on the left; the arrows represent the factors that need to be considered when generating each type of information. Hereinafter, this event state transition relationship is referred to as the aggregation structure of events, and the structure of the EPAs required by the CEP engine to process these conversions is called the event processing structure. To judge whether the information transmission is successful, the above model needs to be further improved by introducing new event processes. The other three processes, the generation of environmental requirements, the judgment of the control effect of the controller, and the scheduling of control methods, can be deduced following the same idea.

Building the information aggregation structure

According to the analysis and derivation of the information sequence structures and relationships of the four major information objects, covering the issuance and feedback of control commands in the greenhouse, the generation of environmental demands, the judgment of the control effect, and control method scheduling, the obtained structures can be combined to form the complete information aggregation structure, that is, the greenhouse event model shown in Figure 5 (Fig. 5. Greenhouse event model). In the figure, the starting point and endpoint of every kind of information aggregation is a kind of information received or sent by the system, so the figure is a complete information system structure. All events in the model can be divided into three types: input atomic events provided to the engine by the outside world, complex events finally output by the engine, and complex events representing the required intermediate information, as shown in Tables 1 to 3 (Table 1. Complex events output by the engine). Integrating all the above information processing units yields the event processing structure used to implement the CEP engine, as shown in Figure 6 (the complex event processing structure of the greenhouse).

4. Implementation framework of the CEP engine
Implementation plan of the event aggregation execution unit

The execution unit that realizes event aggregation is also called an EPA. EPAs are built using automata technology: jumps between states are used to identify the content of events and the relationships that exist among them, and on entering a new state the EPA executes the actions required in that state and modifies its local variables. Jumps between states fall into three categories, triggered by specific events, by timers, and by boundary conditions. When the EPA jumps to the next state, it performs the set of actions required in that state; the actions are of three types: modifying local variables using the information of the event just received, generating new events, and executing specific instructions.

Implementation scheme of the event delivery transceiver unit

The transceiver unit for event transmission between EPAs allows multiple EPAs to form a network and communicate with each other. Each EPA responds to input events and passes its output events to other EPAs. This structure constitutes an EPA communication network called an Event Processing Network (EPN), which is the key to realizing complex event aggregation. To enable accurate communication between EPAs, the transceiver unit is required to deliver events accurately. In this paper, the system uses the Kafka middleware, with its message subscription function, to implement event delivery and reception. For the CEP engine, delivery requires that all kinds of messages collected and generated by the event processing agents in the system can be packaged into events.

Planting plan selection

In this paper, lettuce is selected as the object of planting plan selection, structure construction, and simulation experiments. The typical sensor group and controller group existing in the glass solar greenhouse of the Zhuozhou Intelligent Agriculture Laboratory of China Agricultural University are used as the collection and control equipment for the simulation experiments. The lettuce planting plan was organized by collecting crop planting books, literature, and web materials [17-19]; the planting plan is shown in Table 4.

Greenhouse complex event model

Each event flow from beginning to end in the figure corresponds to a specific agricultural issue. The system needs to use existing response strategies to construct event processing units, analyze the current situation based on the sequence of events that have occurred, and transmit the analysis results to other units. Regarding the environment, it is necessary to observe the changing trends of the greenhouse (EPA2), to find, according to the crop planting plan, the greenhouse environment most favorable for crop growth (EPA5, EPA6, EPA3), and to consider the mutual influence relationships existing in the greenhouse in order to refine the regulation targets (EPA7). In terms of control, the system needs to monitor whether the connection and operation of the controllers are normal (EPA8), consider the impact of the environment on the controller effect (EPA1, EPA4), schedule the controllers and generate early warnings (EPA9, EPA10, EPA11, EPA12), and check whether instructions are executed smoothly (EPA13). The following gives a detailed explanation of the construction ideas only for the greenhouse change trend EPA.
Greenhouse change trend judgment model

EPA2 is used to obtain the changing trends of environmental factors from environmental information. Specifically, EPA2 records the current environmental event and the last environmental event processed, obtains the change in the environmental factor and the time difference between the two events, and from these derives the historical or future rate of change of the environmental factor, that is, the changing trend.

EPA2 information structure: take temperature as an example. In the actual scene, its changing trend has three types of states: rising, falling, and stable, representing the rising, falling, and stable trends of temperature over a certain period. To judge this kind of trend, it is necessary to obtain the values of the environmental factor throughout a period of time, and the state transitions are mainly determined by the difference between the predicted and true values. Therefore, three states can be defined in EPA2: steady (E), rising (R), and falling (F), together with three types of boundary conditions: the difference between the predicted value and the true value is greater than a threshold (tr); the difference between the true value and the predicted value is greater than a threshold (tf); and the absolute value of the difference between the predicted value and the true value is less than a threshold (te). For example, when the EPA state is at F and the boundary condition te holds, the EPA state jumps to E and generates an Etd(tem, EQUAL) event. The specific process is shown in Figure 8, and a minimal code sketch of this automaton is given after the summary below.

To sum up

This paper solves the key problem of constructing the greenhouse control engine in a CEP system, provides a complete greenhouse event aggregation scheme, and realizes a greenhouse complex event processing system based on the automata algorithm and expert system rules. It integrates knowledge and methods from areas common to current greenhouses, such as prediction, the formulation of planting plans, decoupling, human-machine coordinated control plans and their scheduling strategies, early warning, and Internet of Things monitoring, fully combining the advantages of the various control methods for controlling the characteristics of greenhouses and coordinating them with each other. As a whole, the system has good interpretability and an easier optimization strategy, and it realizes efficient human-machine coordinated control.
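To make the EPA2 construction concrete, the sketch below implements the trend-judgment automaton described above. It is our illustration, not the paper's code: the class and event names, the threshold value, the persistence forecast standing in for the prediction module, and the direction conventions for tr/tf (which the garbled passage leaves ambiguous) are all assumptions; delivery of the generated Etd events to other EPAs (e.g., via a Kafka topic) is reduced to returning the event.

```python
from dataclasses import dataclass

@dataclass
class Etd:
    factor: str   # e.g. "tem" for temperature
    trend: str    # "RISE", "FALL", or "EQUAL"

class TrendEPA:
    """EPA2 sketch: states E (steady), R (rising), F (falling).
    Transitions fire on the boundary conditions tr / tf / te, computed from
    the difference between the newly observed value and the predicted one."""

    def __init__(self, factor, threshold=0.5):
        self.factor = factor          # environmental factor this EPA watches
        self.threshold = threshold    # illustrative threshold value
        self.state = "E"
        self.last_value = None

    def predict(self):
        # Naive persistence forecast as a stand-in for the paper's
        # prediction module: predict the last observed value.
        return self.last_value

    def on_event(self, value):
        """Process one environmental event; return an Etd event on a state change."""
        if self.last_value is None:
            self.last_value = value
            return None
        diff = value - self.predict()
        self.last_value = value
        if diff > self.threshold and self.state != "R":        # condition tr (assumed direction)
            self.state = "R"
            return Etd(self.factor, "RISE")
        if -diff > self.threshold and self.state != "F":       # condition tf (assumed direction)
            self.state = "F"
            return Etd(self.factor, "FALL")
        if abs(diff) <= self.threshold and self.state != "E":  # condition te
            self.state = "E"
            return Etd(self.factor, "EQUAL")
        return None

epa2 = TrendEPA("tem")
for t, v in enumerate([20.0, 20.1, 21.5, 21.6, 21.6]):
    out = epa2.on_event(v)
    if out:
        print(t, epa2.state, out)   # e.g. a RISE event at the 21.5 reading
```

Each such automaton is one node of the EPN; wiring its returned events into the transceiver unit gives the EPA2 → EPA7 information flow described in the event model.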
2021-08-03T00:05:56.392Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "c0df713b8c59aeef22bfbbc816dab6489a1c4c7e", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/43/e3sconf_icsce2021_01048.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "317b6bedd84b8f8af6c32713d9eae3ef67e294e7", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Computer Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Computer Science" ] }
159037887
pes2o/s2orc
v3-fos-license
Postnatal diagnosis of de novo complex der(8) in a boy with prenatal diagnosis of recombinant chromosome 8 syndrome

Key Clinical Message: Recombinant chromosome 8 syndrome is caused by duplication of 8q and deletion of 8p. A fetus with anomalies was misdiagnosed with this syndrome based on an amniocyte karyotype. Postnatal chromosomal microarray and other studies identified a de novo derivative chromosome 8. For fetal anomalies, detailed genetic studies may be required.

The recombinant chromosome 8 syndrome (Rec8 syndrome) is caused by duplication of 8q22.1-qter and deletion of 8pter-p23.1 and is derived from meiotic recombination of a parental pericentric inversion 8 chromosome. Here, we report a newborn male who was prenatally diagnosed with Rec8 syndrome based on a 450-band G-banding karyotype of amniotic cells undertaken because of fetal anomalies. After birth, findings inconsistent with Rec8 syndrome, including a neural tube defect and atypical facial features, prompted chromosomal microarray analysis, which revealed a heretofore unreported complex rearrangement of chromosome 8 including 8q and 8p duplications. Parental karyotypes were normal, and thus the rearrangement is de novo. The purpose of this report was to alert providers to the possibility of in utero misdiagnosis of Rec8 syndrome as well as to present phenotypic details of this unique patient. Our findings support the hypotheses of others that 8q and 8p duplications are associated with cardiac defects. Prenatal chromosomal microarray analysis in addition to cytogenetic studies would have yielded the correct diagnosis and should be considered for the evaluation of fetal anomalies.

INTRODUCTION

The recombinant chromosome 8 syndrome (Rec8 syndrome) is a recognizable pattern of malformation caused by duplication of 8q22.1-qter and deletion of 8pter-p23.1. In all cases, the Rec8 chromosome is derived from meiotic recombination of a parental pericentric inversion 8 chromosome.1 The Rec8 syndrome (OMIM 179613) was first described by Fujimoto et al,2 and a detailed natural history of 42 affected individuals was reported by Sujansky et al3 in 1993. The molecular breakpoints of the inversion 8 chromosome (inv8) were defined by Graw et al4 and allow the disorder to be distinguished from other recombinant 8 disorders.5 The Rec8 phenotype includes characteristic facial features (wide face, hypertelorism and/or telecanthus, thin upper lip, infraorbital creases, thick upper gingival frenulum), cleft lip and/or palate, complex congenital heart disease (particularly conotruncal defects), urogenital anomalies (cryptorchidism, urinary tract anomalies), and universal severe psychomotor delay.6-8 We are reporting on a newborn male who was prenatally diagnosed with Rec8 syndrome based on a 450-band karyotype of amniotic cells undertaken due to recognition of fetal anomalies. After birth, multiple findings were inconsistent with Rec8 syndrome, including a neural tube defect and atypical facial features. This prompted a chromosomal microarray analysis, which revealed an unreported complex rearrangement of chromosome 8. Parental karyotypes were normal, signifying this rearrangement to be de novo. The goal of this patient report is to alert providers to the possibility of misdiagnosing Rec8 syndrome as well as to present the phenotypic details of this unique patient.
CLINICAL REPORT

The male patient was a 2.65 kg, borderline small for gestational age infant born to a 27-year-old G2P0010 Mexican mother and a nonconsanguineous 50-year-old Mexican father via vertex vaginal delivery at 39 weeks' gestation. The pregnancy was complicated by multiple fetal anomalies, including myelomeningocele, hydrocephalus, and cardiac defects, as noted during prenatal care in Mexico. There were no known teratogenic exposures, and family history was negative for congenital anomalies. Other than frequent fetal ultrasounds, no other fetal imaging studies, such as fetal MRI, were obtained. Decreased fetal movement at 37 weeks' gestation prompted evaluation at our center, and subsequent amniocentesis revealed an abnormal male karyotype interpreted as Rec8 at 450-band resolution. After delivery, Apgar scores were 8 and 9 at 1 and 5 minutes, respectively. Birth growth parameters were 5th centile for weight, 3rd centile for length, and 24th centile for head circumference. Initial physical examination revealed a large anterior fontanel measuring 12 × 7.5 cm, a prominent occiput, upsloping and shallow orbits, bilaterally broad pinnae, small fingernails and toenails, abnormal toes with the third toes crossing under other toes bilaterally, and a flat open lower thoracolumbar defect of the spine measuring 8.5 × 5.5 cm (see Figure 1 for clinical photographs taken at 34 days of age). He had no movements of the lower extremities but had appropriate movement of the upper extremities. Radiological studies revealed myelodysplasia with associated vertebral body segmentation anomalies of the lumbar spine, multiple bilateral dysplastic ribs, and a double manubrial ossification center. An echocardiogram showed a double outlet right ventricle, a large paramembranous ventricular septal defect, a mildly hypoplastic left ventricle, and moderate mitral stenosis. Ultrasound study of the head revealed hydrocephalus of the lateral and third ventricles with a decompressed fourth ventricle and findings consistent with Chiari 2 malformation, including interdigitation of the falx, Luckenschadel skull, and an enlarged massa intermedia. Renal ultrasound showed mild left pelviectasis, and voiding urethrocystography showed an atonic neurogenic bladder. He was able to urinate spontaneously. On day of life 1, a ventriculoperitoneal shunt was placed and the neural tube defect was debrided and closed. He developed pulmonary hypertension and required high-flow nasal cannula respiratory support for the first two weeks of life. Additionally, he required g-tube placement at 3 weeks of life. The infant showed significant global developmental delay and was discharged home at 34 days of life on hospice care in view of the poor cardiac prognosis. A developmental evaluation at 15 months showed growth parameters of weight 7.25 kg (−4 SD), length 75 cm (10th centile), and head circumference 49.5 cm (90th centile). He was bottle fed and taking pureed foods. Developmental skills were at the 6- to 7-month level for cognition, speech, and fine motor skills. Gross motor skills remained at the 1- to 2-month level, with minimal head control secondary to his enlarged head.

Prenatal chromosome study

Metaphase spreads were obtained from amniotic cells using standard procedures. GTW banding with a resolution of 450 bands was obtained. No other types of banding studies were performed prenatally.
Postnatal genetic studies

Chromosome microarray analysis was performed on peripheral blood DNA isolated according to established protocols, using the FDA-cleared Affymetrix CytoScan Dx microarray (ThermoFisher Scientific, USA). This microarray contains over 2.69 million probes with an interprobe distance of 1148 base pairs. Single long continuous absence of homozygosity (AOH) larger than 10 Mb or a total autosome AOH proportion larger than 3% was reported. High-resolution chromosome analysis of the patient's peripheral blood lymphocytes at 550 bands was performed, as well as parental chromosome studies. Karyotype reporting was expressed in accordance with the 2016 International System for Human Cytogenetic Nomenclature (ISCN) and the hg19 build of the human genome.

Postnatal chromosome microarray analysis of peripheral blood showed partial 8p monosomy/partial 8p trisomy/partial 8q trisomy. Specifically, there was a 6.8 Mb deletion of 8pter-p23.1 (loss of 15 OMIM genes), a 28.5 Mb duplication of 8p23.1-p11.2 (129 OMIM genes), and a 26.0 Mb duplication of the 8q24.12-8qter segment that houses 111 OMIM genes. The 8pter-p23.1 deletion observed in this patient is identical to the 8p deletion observed in Rec8 patients; however, the 8q duplicated segment starts at band 8q24.1 instead of 8q22, and the 8p duplication segment is typically not observed in Rec8 patients. A high-resolution chromosome study at 550-band resolution from the infant's peripheral blood was performed at 8 months of age and showed results consistent with the microarray analysis: that is, a complex unbalanced chromosome rearrangement with a derivative chromosome 8 in all cells examined. The unbalanced chromosome complement had a loss of the segment from 8pter to 8p23.1, a duplication of the segment from 8qter to 8q24.1, and an inverted duplication of 8p11.2 to 8p23.1. The extra copy of the 8qter to 8q24.1 segment is attached to the inverted segment of 8p11.2 to 8p23.1. Therefore, the patient's correct karyotype is 46,XY,der(8)(8qter->8q24.1::8p11.2->8p23.1::8p23.1->8qter)dn (Figure 2). Chromosomal analyses of the parents were normal, 46,XX and 46,XY, and hence the der(8) chromosome in the child is de novo and is not derived from a parental inv(8).

DISCUSSION

This patient has a novel complex der(8) chromosome which is de novo. He was prenatally misdiagnosed as having Rec8 syndrome based on a low-resolution amniocyte karyotype obtained because of multiple fetal anomalies, including a myelomeningocele, hydrocephalus, and cardiac defects. Postnatal dysmorphology evaluation identified facial features atypical for Rec8, including upsloping orbits, broad pinnae, and absence of prominent lateral nasal folds. Other atypical features included a lumbar myelomeningocele with associated hydrocephalus/Chiari 2 malformation, dysplastic ribs, and overfolded toes. Subsequent chromosomal microarray analysis and high-resolution karyotyping correctly identified a unique der(8) chromosome that resembles the Rec8 chromosome but differs by having an inverted dup(8p) in addition to the dup(8q) and del(8p) (Figure 3). The 8q breakpoints in this patient also differ from typical Rec8: 8q24.1 and 8q22.1, respectively. Regarding genotype-phenotype correlation, the presence of the inverted duplication/deletion 8p has perhaps increased the risk of cardiac defects, as the GATA4 gene (OMIM 600576), located in 8p23.1, is involved with congenital heart defects including double outlet right ventricle and ventricular septal defects.9
In addition, some genes in the deleted 8p23.2-pter region, such as ARHGEF10 (OMIM 608136), CSMD1 (OMIM 608397), and DLGAP2 (OMIM 605438), are directly related to neurological conditions such as developmental delay, language abnormalities, autism, and epilepsy. The gene CLN8 (OMIM 607837) has been associated with central nervous system development, and it may play a role in the severe neural tube defect in our patient. Concurrently, large interstitial duplications of 8p (>20 Mb) have been linked to severe brain anomalies and intellectual disabilities, as genes responsible for brain development, such as FGFR1 (OMIM 136350), are present in this region and participate in neural crest cell migration.10 Currently, there is too little information to speculate on the origin of the de novo der(8) chromosome in this patient. As the karyotypes of both parents were normal, the child's chromosome abnormality is not the result of meiotic recombination of a parental inversion chromosome (as is the case with Rec8 syndrome), and the mechanism leading to it is unknown. There are very few reported individuals with invdup8p in addition to dup8q and del8p. The most similar patient was reported by Sánchez-Casillas et al11 and had the same 8p deletion, a larger 8q duplication, and a much smaller 8p duplication. Her phenotype was significantly milder in terms of growth and developmental delays, and she had no congenital heart disease. The phenotypes of individuals with invdup-del8p only (ie, absence of the 8q dup) do share features with both Rec8 and our patient, specifically widely spaced eyes, a broad nose, intellectual disability, and congenital heart defects.12 In summary, we present a patient with a novel der(8) chromosome including invdupdel8p and dup8q whose phenotype has some overlap with Rec8 and with other patients with similar cytogenetic findings, but who has a neural tube defect previously unreported in der(8) patients. He was misdiagnosed prenatally as having Rec8 syndrome based on a low-resolution amniocyte karyotype, with failure to recognize that the presence of a neural tube defect made Rec8 unlikely. Prenatal chromosomal microarray analysis in addition to cytogenetic studies would have yielded the correct diagnosis and should be considered for the evaluation of fetal anomalies.13
2019-05-22T13:31:54.395Z
2019-03-25T00:00:00.000
{ "year": 2019, "sha1": "63f788fc1d58ab1c67467f8cc41dc14aae775c14", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.2109", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "63f788fc1d58ab1c67467f8cc41dc14aae775c14", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
141042441
pes2o/s2orc
v3-fos-license
Socioeconomic Impacts of Forest Fires upon Portugal : An Analysis for the Agricultural and Forestry Sectors : Recent forest fire activity has resulted in several consequences across different geographic locations, where both natural and socioeconomic conditions have promoted a favorable context for what has happened in recent years in a number of countries, including Portugal. As a result, it would be interesting to examine the implications of forest fire activity for the socioeconomic dynamics and performance of the agroforestry sectors in the Portuguese municipalities. For this purpose, data from Statistics Portugal were considered for output and employment from the business sector related to agricultural and forestry activities, disaggregated at the municipality level, for the period 2008-2015. Data for the burnt area were also considered in order to assess the impact of forest fires. The data were analyzed using econometric models for panel data based on the Keynesian (Kaldor laws) and convergence (conditional approaches) theories. The results from the Keynesian approaches show that there are signs of increasing returns to scale in the Portuguese agroforestry sectors, where the burnt area increased employment growth in agricultural activities and decreased employment in the forestry sector. Forest fires seem to create favorable conditions for agricultural employment in the Portuguese municipalities, while the inverse occurs for forestry employment. Additionally, some signs of convergence were identified between Portuguese municipalities for agroforestry output and employment, as well as for the burnt areas. However, signs of divergence (increasing returns to scale) from the Keynesian models seem to be stronger. On the other hand, the evidence of beta convergence for the burnt areas is stronger than that verified for the other variables, showing that the impacts of forest fires are more transversal across the whole country (however, not enough to produce sigma convergence).

Introduction

Forest fires and their consequences have arisen in several countries, such as Portugal, as a result of a set of factors related to the social, economic, environmental, and natural contexts, which create favorable conditions or environments for these occurrences. They are realities whose mitigation everyone may, in some way, contribute to. In fact, damage caused by forest fires has increased over the last few decades across the globe [1].

In these frameworks, the scientific community may and should bring contributions, namely through new approaches and insights that allow for the prevention of the negative consequences of forest fires and allow us to understand the several impacts after their occurrence.

Computational Literature Insights

In countries like Portugal and Spain, recent years have shown that the consequences of forest fires may be dramatic, including the loss of human lives and firms. The implications of forest fire activity for the social and economic context of certain geographic regions are enormous [11,12], and some areas will require several years for normality to be restored. In fact, the occurrence and severity of forest fires, as well as their relevant impact on various levels of human life, have become a concern for several stakeholders and policymakers in a variety of global locations [13].
The negative consequences of forest fires are, indeed, relevant across different areas of human life, including their effects on normal conditions for human health [14]. In some contexts, forest fires are the main agent of disturbance, with a rather random spatial spread [15], making it more difficult to predict the consequences. One of the main challenges for national authorities in some countries is to predict the implications of forest fires for human survival in more isolated regions.

To mitigate the potentially devastating consequences of forest fires, prevention is especially important, in particular through the reduction of the available fuel load [16]. However, it is also crucial to ensure that resources are immediately available for firefighting [17], because some natural elements, such as changing winds, may render other existing factors irrelevant [18]. Adjusted forest fire management, with balanced prevention and suppression approaches, is crucial to reduce the negative impacts of these agents of forest disturbance.

In some circumstances, the resilience of certain plants may make all the difference in reducing the impacts of forest fires, namely in the new context of climate change [19,20]. This question should be considered by the several agroforestry stakeholders, namely by policymakers, to design adjusted plans for forest management. In any case, climate change, global warming, and forest fires seem to be, in some cases, correlated [21].

During the post-fire period, it is important to quickly assess the implications [22,23] as a means to readily promote adjusted policies that reduce the negative impacts of fires and support a quick recovery of the affected socioeconomic contexts and ecosystems. In any case, avoiding the occurrence of forest fires altogether seems to be the most important factor, and prevention plays a fundamental role. Fuel load reduction is important, but it is also crucial to make an adjusted assessment of the risks and critical periods which contribute to forest fire occurrence [24].

Methodology Explanation

From Keynesian theory, we drew on the developments of Verdoorn [25] and Kaldor [26,27], primarily those related to the Kaldor coefficient. The Kaldor coefficient, which we obtained from running regressions with employment growth as a function of output growth, captures the dynamic effects of economies of scale; we expect a coefficient value between 0 and 1, where values closer to 0 indicate stronger increasing returns to scale. In this study, this relationship was enlarged with a variable related to forest fire severity (i.e., the burnt area) as a way of assessing the effect of forest fires on the socioeconomic dynamics of the Portuguese municipalities. The enlargement of this Kaldor equation and its application to several economic sectors, including agricultural activities, has already been implemented in other work, such as Martinho [28,29]. Indeed, the Verdoorn and Kaldor developments are increasingly relevant in the current global economic context and bring insight to regional development [30-33], labor contexts [34,35], and the relationship between manufacturing and economic development [36].
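The estimating equation is not printed at this point in the text; a form consistent with the description above, and with the burnt area logarithm that appears as the extra regressor in the results below, would be (our reconstruction, for municipality i and year t):

```latex
\Delta \ln E_{it} = \alpha + \beta \, \Delta \ln Q_{it} + \gamma \, \ln B_{it} + \varepsilon_{it}
```

where E is sectoral employment, Q is deflated sectoral output, B is the burnt area, and beta is the Kaldor coefficient; values of beta near 0 signal strong increasing returns to scale, and 1 - beta is the corresponding Verdoorn coefficient.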
In relation to the convergence theory, we considered the approaches related to absolute (unconditional) and conditional convergence [37-43]. The idea behind absolute convergence is that all countries or regions tend to converge to the same steady state of economic development (for example, gross domestic product per capita). In turn, the conditional convergence approach holds that convergence is dependent on the conditions of the countries or regions. In this case, countries with similar contexts (for example, similar human capital accumulation) can converge to the same steady state; this view defends the existence of different steady states and the concept of convergence clubs, meaning groups of countries that converge to the same level of development. In general, the convergence trends in this theory are analyzed through the sigma convergence (measured through the coefficient of variation, the standard deviation divided by the mean) and the beta convergence (the coefficient of regression, expected to be negative for convergence). In practice, the sigma convergence analyzes, for a given variable, the tendency of convergence/divergence over a period of time and across the several municipalities (in the case presented in this study): if the coefficient of variation shows a decreasing/increasing trend over the period considered, there is convergence/divergence. The beta convergence reveals the annual rate of convergence. In this way, the convergence theory argues that beta convergence (annual convergence) is necessary, but not sufficient, to guarantee sigma convergence (convergence over the period). These concepts have several applications in current times [44]. Approaches related to convergence theory have been implemented in the agroforestry sector in several studies, for example, in Martinho [45] and others [46,47].
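In our notation (the paper itself reports only the estimated coefficients), the two measures just described can be written, for municipality i and year t, as:

```latex
\mathrm{CV}_t = \frac{s_t}{\bar{y}_t}
\quad \text{(sigma convergence: } \mathrm{CV}_t \text{ falling over the period)}

\ln y_{i,t} - \ln y_{i,t-1} = \alpha + \beta \ln y_{i,t-1} + \gamma z_{i,t} + \varepsilon_{i,t}
\quad \text{(beta convergence: } \beta < 0 \text{)}
```

Here z is the conditioning variable (the burnt area for output and employment, or forestry employment growth when the burnt area itself is analyzed); omitting z (setting gamma to zero) gives the absolute convergence specification.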
Data Description

Figure 1 shows that after 2011 there was continual growth in the agricultural business output (averaged across the Portuguese municipalities), after a relatively stable trend from 2008. The average forestry business output reveals a decreasing trend until 2012 and a strong increase after 2013. It is important to stress that these output values for the Portuguese agroforestry sectors were deflated with the consumer price index disaggregated at NUTS (Nomenclature of Territorial Units for Statistics) 2 level (the finest disaggregation available). We used the consumer price index for unprocessed food to deflate the municipal agricultural output and the consumer price index without housing to deflate the municipal forestry output.

Relative to agroforestry employment, Figure 2 shows that this variable increased strongly after 2012, following a stable tendency from 2008. It seems that the Portuguese economic crisis had a positive effect on the agroforestry municipal dynamics, both in terms of output and employment. In fact, the agroforestry sector in Portugal has great potential for growth; however, due to several factors, some of them historic, this sector and its potential for more sustainable social, economic, and environmental development are sometimes forgotten. The creation of a common forest policy in the European Union, interconnected with the common agricultural policy, could potentially bring about more interesting contributions to agroforestry performance.

Figure 3 reveals that over the period 2008-2015, the years 2010 and 2013 showed the greatest average burnt area. On the other hand, the years 2008 and 2014 showed the least severe average forest fire activity over the period in the Portuguese municipalities. As referred to before in the literature review, forest fire occurrences and severities have a rather random distribution in time and space. This random behavior makes it more difficult to predict forest fires' occurrences and consequences.
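Before turning to the results, the sketch below illustrates how the two convergence measures would be computed on a municipal panel of this shape. It is our illustration, not the authors' code: the synthetic data, the column names, and the pooled OLS (standing in for the panel-data estimators used in the paper) are all assumptions.

```python
import numpy as np
import pandas as pd

# Synthetic panel of municipalities x years, a stand-in for the Statistics Portugal data.
rng = np.random.default_rng(1)
munis, years = 278, range(2008, 2016)   # municipality count is illustrative
rows = []
for m in range(munis):
    y = rng.lognormal(mean=7.0, sigma=0.8)      # initial output level
    for t in years:
        rows.append({"muni": m, "year": t, "output": y})
        y *= np.exp(rng.normal(0.01, 0.05))     # random yearly growth
panel = pd.DataFrame(rows)

# Sigma convergence: coefficient of variation per year (a falling CV => convergence).
cv = panel.groupby("year")["output"].apply(lambda s: s.std() / s.mean())
print(cv)

# Beta convergence: Delta ln y_it = alpha + beta * ln y_{i,t-1} (beta < 0 => convergence).
panel["ln_y"] = np.log(panel["output"])
panel["ln_y_lag"] = panel.groupby("muni")["ln_y"].shift(1)
d = panel.dropna()
X = np.column_stack([np.ones(len(d)), d["ln_y_lag"]])
beta = np.linalg.lstsq(X, d["ln_y"] - d["ln_y_lag"], rcond=None)[0]
print("beta convergence coefficient:", beta[1])
```

Adding a column for the burnt area logarithm to X would give the conditional specification estimated in Tables 3-7.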
Results Obtained through the Keynesian Developments

The results presented in Table 1, which were obtained using the Kaldor equation through panel data methodologies for the agricultural sector, highlight the strong increasing returns to scale observed. In fact, the Kaldor coefficient (the coefficient on output growth) is close to 0 (it could also be obtained as the difference between 1 and the Verdoorn coefficient), which, as previously explained, is a sign of robust scale economies. On the other hand, the Table shows that the amount of burnt area may have had a favorable effect on employment growth within the agricultural sector. This is a curious result that needs further analysis in future research. In fact, it will be important to understand in future studies just how the burnt areas contribute towards increases in agricultural employment growth. In any case, some of the burnt areas may indeed be used for agricultural activities, thus improving the social contribution of the farming sector.

The results from the forestry sector outlined in Table 2 reveal that, although the economies of scale are slightly weaker in the forestry sector (coefficient around 0.041) than in the agricultural sector, they are, nevertheless, strong. In turn, this Table reports that, as expected, the burnt area has a negative impact on forestry employment growth (however, with a level of significance of 10%). Indeed, the activities most directly affected by forest fires are those related to the forestry sector. This reduction in forestry employment as a consequence of forest fires should be taken into account by policymakers, namely because of the social and economic problems observed in the most affected areas (often located in rural and unfavourable Portuguese regions). The strong increasing returns found for the agroforestry sectors are unexpected considering the Kaldor
developments; however, it is namely the way the agricultural sector has performed over recent years that has promoted an important increase in its dynamics, and this partly explains the results obtained.

Results for the Absolute and Conditional Convergence

In regards to sigma convergence, Figure 4 shows that the agricultural output reveals a divergence tendency over the period considered and across the Portuguese municipalities, with signs of convergence in 2011, 2013, and 2014. This reveals that the agricultural output over the period considered agglomerated in some Portuguese municipalities. In turn, the forestry output shows a trend of convergence over 2008-2015, indicating that the output from forestry activities does not follow a trend of agglomeration in some municipalities (the spatial distribution is more homogeneous).

Relative to agroforestry employment, the tendencies of divergence/convergence are the opposite of those verified for the output. Indeed, agricultural employment shows signs of convergence and forestry employment shows evidence of divergence. These trends for the agroforestry output and employment confirm the signs of increasing returns found before for the relationships between these two variables.
Finally, the burnt area, in general, shows signs of divergence over the examined period, showing that the impacts of forest fires are concentrated in some municipalities, with the exception of some convergence in 2010, 2011, 2013, and 2015 (2008 and 2011 were the years with less burnt area, as shown in Figure 3).

In regards to beta convergence, the results for the coefficient of convergence are presented in Tables 3-7 for the agricultural (output and employment) and forestry (output and employment) sectors and for the burnt area. In each case (output, employment, and burnt area) we ran only one regression to analyze the conditional convergence. The results for absolute convergence are not presented here, because the corresponding coefficients of convergence are similar to those presented for the conditional convergence, and to avoid presenting an exaggerated number of tables. For analyzing output and employment, we considered the burnt area (in logarithms or in growth) as the conditional variable. In our analysis of the burnt area, the conditional variable was forestry employment growth (to analyze more deeply the relationships between forest fires and forestry dynamics).

Table 3 (representing conditional convergence for the agricultural output) shows that there are statistically significant and relevant signs of convergence (a negative regression coefficient, around −0.048). Comparing this result with that obtained before for the sigma convergence (Figure 4), it is noted that the beta convergence is not enough to produce sigma convergence (beta convergence is a necessary, but not sufficient, condition for sigma convergence). The conditional variable (burnt area logarithm) is not statistically significant, revealing that the convergence seems to be absolute in this analysis.

Table 4 shows that the signs of convergence are stronger for agricultural employment (a negative regression coefficient, around −0.055) than those found for agricultural output, in line with the results found for the sigma convergence (Figure 4). The Table reveals that in this case the convergence is conditional, and the burnt area logarithm has a positive impact on agricultural employment growth (confirming the results described earlier for the Keynesian analysis). In fact, these results confirm the findings obtained before for the Keynesian analysis, showing that the burnt area does not have any impact on the agricultural output, but improves agricultural employment.

The convergence analysis for the forestry output (Table 5) presents strong signs of convergence (around −0.121) and shows that the effect of the conditional variable (burnt area logarithm) is negligible and close to zero. These results reveal that, as with the agricultural output, the convergence for forestry output seems to be absolute (considering the negligible value of the burnt area coefficient). These stronger signs of convergence for the forestry output confirm the results obtained before for the sigma convergence.
The convergence indications for forestry employment (Table 6) are weaker than those found for the forestry output, in line with the results outlined in Figure 4, and the conditional variable is not statistically significant. It is worth stressing that the results obtained up to this point show that forest fires (through the burnt area) have little impact on the performance and dynamics of the Portuguese agroforestry sectors, considering the results obtained for the burnt area coefficient in the several estimations. The only evident and statistically significant impact of the burnt area is a positive effect on agricultural employment.

Table 7 shows that there are strong signs of convergence for the burnt area across the Portuguese municipalities; however, they are not enough to produce sigma convergence (Figure 4). On the other hand, the convergence in the burnt area is conditional on forestry employment growth, showing that burnt area growth is negatively influenced by employment growth in the forestry sector. This seems to indicate that bringing more people to forest areas may be an interesting solution for reducing the severity of forest fires. This is an important finding that should be considered by the several stakeholders, namely by the national authorities. It is important to bring new activities to the agroforestry sector, and here the policies related to innovation and entrepreneurship may provide an important contribution.

Discussions

The objective of this study was to analyze the socioeconomic implications of forest fires for the Portuguese municipalities' agroforestry business sectors over the period 2008-2015, considering data from Statistics Portugal for output, employment, and burnt area. The statistical information was analyzed with panel data methodologies, namely those derived from the Keynesian and convergence theories.

The data analysis shows that, in general, after 2011/2012, output and employment increased, on average, at a persistent trend in the agroforestry sectors of the Portuguese municipalities. The Portuguese economic crisis has also borne its influence here. In fact, some of the unemployment generated in other sectors, namely in industry and construction, found solutions within the agroforestry sectors [48]. On the other hand, over the period considered, the trend for the average burnt area is essentially white noise, demonstrating neither an increasing nor a decreasing trend and showing the great irregularity of the impact of forest fires in the Portuguese context. Forest fires are agents of landscape disturbance with a great degree of unpredictability [15].

The Keynesian analysis, using the Verdoorn-Kaldor developments, reveals that there are strong increasing returns in the Portuguese agroforestry sectors, which are higher in the agricultural sector. These findings are in line with other works [32]. On the other hand, considering the enlarged Kaldor equation, the burnt area logarithm positively influences employment growth in the agricultural sector and negatively influences it in the forestry sector. Forest fires seem to have a positive impact on the number of people employed in the Portuguese agricultural sector. The modernization of the agricultural sector seen in recent years seems to have had a positive impact on the dynamics and performance of the sector, creating the right circumstances to absorb the workforce released by other diminishing sectors [48].
The convergence research, considering both the absolute and conditional approaches, was performed using the sigma and beta convergence concepts, following, for example, He et al. [46] and Spirkova et al. [47]. The sigma convergence shows signs of a convergence trend for forestry output and agricultural employment. The beta convergence analysis (coefficient of regression) shows that the convergence is more absolute than conditional, with the exception of agricultural employment convergence where, again, the burnt area logarithm (the conditional variable) positively influences agricultural employment growth. On the other hand, burnt area growth is negatively influenced by forestry employment growth, showing the importance of increased forestry activities for forest fire prevention.

Additionally, it is worth noting the strong trends of agglomeration (strong increasing returns to scale from the Keynesian analysis) in the agricultural and forestry sectors over the period considered and across the Portuguese municipalities, showing that Portuguese agroforestry activities became more concentrated in some municipalities in this period [49]. On the other hand, there are signs of beta convergence (not as strong as the evidence of agglomeration) that are not enough to guarantee sigma convergence. In practice, it seems that there are strong signs of divergence in the output and employment of the Portuguese agroforestry sectors, as well as in the burnt area. This may be a consequence of land abandonment and of the desertification of the interior of Portugal [10].

The study presented here brings interesting insights for the understanding of forest fires, for policymakers, and for the perceptions of several stakeholders. It is important to bring new and multidisciplinary approaches to forest fire contexts around the world, namely to improve regional resilience. For this, it is important to increase economic dynamics in rural areas in a sustainable way and to create more employment to attract younger people.

Conclusions and Political Implications

As a final remark, one should highlight that the impact of forest fires (burnt area) on Portuguese agroforestry dynamics and performance seems to be moderate. In fact, forest fires seem to improve agricultural employment, with coefficients of around 0.069 (from the Keynesian approach) and 0.022 (from the convergence theory). On the other hand, forest fires show some evidence of reducing forestry employment, presenting a coefficient of −0.015 (from the Keynesian theory), though with statistical significance at 10%. The impact on agroforestry output seems to be residual or negligible.
In terms of policy implications, it is important to bring more activities into the forestry sector, in order to reduce the risk of forest fires and to improve the dynamics of this sector (the results show lower scale economies relative to the agricultural sector). On the other hand, it will be important to continue agricultural modernization and improve the sector's performance, so as to better absorb the labor force released by other sectors. The creation of a common forest policy interconnected with a common agricultural policy, incorporating innovation and entrepreneurship strategies, could be an interesting approach. Avoiding desertification and land abandonment should be a priority in the design of new policies. Considering the dimensions of the consequences of forest fires, the related policies must reconsider the relevance given to the socioeconomic impact on the agroforestry sector, where the negative impacts seem to be residual. In any case, the Portuguese agroforestry sector does have several problems; however, they are not a direct consequence of forest fires.

These insights open up new fields of research. For future studies, it will be important to understand the main factors which create the conditions for forest fires to inadvertently promote agricultural employment; migration of the workforce from forestry to agriculture may provide an explanation. It will also be interesting to further investigate the strong impact of the burnt area on agricultural employment, rather than on agricultural output. Considering the negative socioeconomic implications of forest fires evidenced by the literature [11,12], it will likewise be important to identify the main factors that explain why the negative impacts of forest fires on the agroforestry sectors of the Portuguese municipalities are weak.

Figure 1. Average output for the agroforestry sectors across the Portuguese municipalities over the period 2008-2015.

Figure 2. Average employment for the agroforestry sectors across the Portuguese municipalities over the period 2008-2015.

Figure 3 reveals that, over the period 2008-2015, the years 2010 and 2013 showed the greatest average burnt area, while the years 2008 and 2014 showed the least severe average forest fire activity in the Portuguese municipalities. As referred to in the literature review, forest fire occurrences and severities have a largely random distribution in time and space; this random behavior makes it more difficult to predict the occurrence and consequences of forest fires.

Figure 3. Average burnt area across the Portuguese municipalities over the period 2008-2015.
Figure 4. Sigma convergence across the Portuguese municipalities over the period 2008-2015: (a) coefficient of variation for agricultural and forestry output; (b) coefficient of variation for agricultural and forestry employment; (c) coefficient of variation for burnt area.

Table 1. Enlarged Kaldor equation regression for the agricultural sector, with employment growth as a function of output growth, across the Portuguese municipalities.

Table 2. Enlarged Kaldor equation regression for the forestry sector, with employment growth as a function of output growth, across the Portuguese municipalities.

Table 3. Conditional convergence regression for agricultural output, with the difference of logarithms as a function of the logarithm in the previous year, across the Portuguese municipalities.

Table 4. Conditional convergence regression for agricultural employment, with the difference of logarithms as a function of the logarithm in the previous year, across the Portuguese municipalities.

Table 5. Conditional convergence regression for forestry output, with the difference of logarithms as a function of the logarithm in the previous year, across the Portuguese municipalities.

Table 6. Conditional convergence regression for forestry employment, with the difference of logarithms as a function of the logarithm in the previous year, across the Portuguese municipalities.

Table 7. Conditional convergence regression for the burnt area, with the difference of logarithms as a function of the logarithm in the previous year, across the Portuguese municipalities.
2019-02-06T04:10:23.409Z
2019-01-13T00:00:00.000
{ "year": 2019, "sha1": "506e35eff0fc371a9899a1b639faa6e14fae21aa", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/11/2/374/pdf?version=1547353160", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "506e35eff0fc371a9899a1b639faa6e14fae21aa", "s2fieldsofstudy": [ "Environmental Science", "Economics", "Agricultural and Food Sciences", "Geography" ], "extfieldsofstudy": [ "Economics" ] }
248514605
pes2o/s2orc
v3-fos-license
Machine learning-based in-hospital mortality prediction of HIV/AIDS patients with Talaromyces marneffei infection in Guangxi, China

Objective: Talaromycosis is a serious regional disease endemic in Southeast Asia. In China, Talaromyces marneffei (T. marneffei) infections are mainly concentrated in the southern region, especially in Guangxi, and cause considerable in-hospital mortality in HIV-infected individuals. Currently, the factors that influence in-hospital death of HIV/AIDS patients with T. marneffei infection are not completely clear. Existing machine learning techniques can be used to develop a predictive model that identifies relevant prognostic factors and predicts death, which appears essential to reducing in-hospital mortality.

Methods: We prospectively enrolled HIV/AIDS patients with talaromycosis in the Fourth People's Hospital of Nanning, Guangxi, from January 2012 to June 2019. Clinical features were selected and used to train four different machine learning models (logistic regression, XGBoost, KNN, and SVM) to predict the treatment outcome of hospitalized patients, and 30% internal validation was used to evaluate the performance of the models. Machine learning model performance was assessed according to a range of learning metrics, including the area under the receiver operating characteristic curve (AUC). The SHapley Additive exPlanations (SHAP) tool was used to explain the model.

Results: A total of 1927 HIV/AIDS patients with T. marneffei infection were included. The average in-hospital mortality rate was 13.3% (256/1927) from 2012 to 2019. The most common complications/coinfections were pneumonia (68.9%), followed by oral candidiasis (47.5%) and tuberculosis (40.6%). Deceased patients showed higher CD4/CD8 ratios, aspartate aminotransferase (AST) levels, creatinine levels, urea levels, uric acid (UA) levels, lactate dehydrogenase (LDH) levels, total bilirubin levels, creatine kinase levels, white blood cell (WBC) counts, neutrophil counts, procalcitonin levels, and C-reactive protein (CRP) levels, and lower CD3+ T-cell counts, CD8+ T-cell counts, lymphocyte counts, platelet (PLT) counts, high-density lipoprotein cholesterol (HDL) levels, and hemoglobin (Hb) levels than surviving patients. The predictive XGBoost model exhibited 0.71 sensitivity, 0.99 specificity, and 0.97 AUC in the training dataset, and our outcome prediction model provided robust discrimination in the testing dataset, showing an AUC of 0.90 with 0.69 sensitivity and 0.96 specificity. The other three models were ruled out due to poor performance. Septic shock and respiratory failure were the most important predictive features, followed by uric acid, urea, platelets, and the AST/ALT ratio.

Conclusion: The XGBoost machine learning model is a good predictor of the hospitalization outcome of HIV/AIDS patients with T. marneffei infection. The model may have potential application in mortality prediction and high-risk factor identification in the talaromycosis population.

Introduction

Talaromyces marneffei (formerly known as Penicillium marneffei) is a thermally dimorphic fungus. Invading a variety of tissues and organs, it can cause a fatal, deeply disseminated fungal infection, talaromycosis, that primarily occurs in tropical or subtropical regions of Asia. Since the global outbreak of HIV/AIDS in the 1980s [1], talaromycosis has gradually increased in prevalence, accounting for 6.4-11% of HIV-related admissions in Vietnam [2,3], 3.3% in Thailand [4], 16.1% in Guangxi, China [5], and 17.3% in Guangdong, China [6].
Currently, due to immunosuppressive therapy for autoimmune diseases and malignancies, as well as increased international travel and migration, an increasing number of cases are being reported among HIV-negative patients. Furthermore, cases outside of traditional endemic regions have been reported, such as in Wuhan [7], Beijing [8], Shanghai [9], and Hong Kong [10], China. Due to the inability to make an early diagnosis, the in-hospital mortality of talaromycosis patients can be as high as 16.7-30%, despite antifungal therapy [11][12][13]. By the end of 2018, the cumulative number of talaromycosis cases was estimated at 288,000 (95% CI: 146,000-613,800), with 87,900 (95% CI: 37,200-204,300) cumulative deaths [14]. Thus, talaromycosis is a tropical infectious disease with high morbidity and mortality and a serious threat to regional health. Thuy Le, Linghua Li, and other experts have called for talaromycosis to be recognized as a neglected tropical disease that urgently needs to be taken seriously, as the condition is perpetuated by a cycle of poverty, stigma, and global neglect [15].

In China, 40-56.6% of the cases of talaromycosis are reported in Guangxi [11,16]. Guangxi is a province with a high burden of AIDS patients, whose number of cumulative reports ranks second in China. By the end of October 2020, Guangxi had more than 97,000 HIV-infected people, accounting for 9% of the total infected people in China, and more than 30,000 patients had died of AIDS-related opportunistic infections in Guangxi [17]. Our previous study found that the proportion of HIV/AIDS-related deaths due to talaromycosis increased from 11.5% in 2012 to 16.1% in 2015, and that talaromycosis was the leading cause of in-hospital HIV/AIDS-related death in Guangxi (AHR = 1.8-4.51), representing a major public health problem [5,18]. Although T. marneffei infection has a high prevalence and in-hospital mortality rate, the risk factors influencing in-hospital death of patients with talaromycosis are still unclear in Guangxi, and relevant studies to guide clinical work are lacking. Several studies have reported factors influencing the death of hospitalized patients, including occupation, antiviral treatment, and clinical complications, but these could not be used directly as clinical prognosis predictors. In addition, the existing research still has limitations, such as insufficient sample sizes and confounding factors.

In recent years, the application of artificial intelligence in the medical field has become a hot spot, and various machine learning algorithms have shown their potential when applied to large-scale biomedical and patient datasets. Moreover, machine learning methods might overcome some of the limitations of current analytical approaches to risk prediction by applying computer algorithms to large datasets with numerous, multidimensional variables, capturing high-dimensional, nonlinear relationships among clinical features to make data-driven death outcome predictions. Machine learning models based on clinical features have been used in many applications in cancer and tumor prognosis prediction, such as in lung cancer and breast cancer [19,20]. The application of death prediction in infectious diseases is also becoming a trend, typically regarding the prediction of mortality risk and prognosis of COVID-19 patients [21][22][23]. Similarly, assessing dengue severity risk factors has been reported [24].
The in-hospital mortality rate of patients with talaromycosis is high, yet there is no machine learning model for predicting T. marneffei treatment outcome. We therefore aimed to develop an optimal machine learning-based risk prediction model by fitting routine laboratory measures and clinical indicators, which can guide clinicians in adjusting treatment plans for talaromycosis patients with different symptoms in a timely manner and may have positive significance for reducing death.

Ethics statement

This study was approved by the Human Research Ethics Committee of Guangxi Medical University (Ethical Review No. 20210099).

Datasets

To develop the machine learning models, we used a cohort of 1927 hospitalized adult patients (≥18 years old) with talaromycosis and gathered information from the hospital's electronic medical records system. This large-scale observational cohort study was conducted in the Fourth People's Hospital of Nanning, which is the largest tertiary hospital specializing in infectious diseases in Guangxi and the province's largest treatment center for HIV/AIDS. The present study included all HIV/AIDS patients admitted to the Fourth People's Hospital of Nanning from January 2012 to June 2019. HIV/AIDS patients with talaromycosis were identified through the hospital's electronic medical records system. For those with multiple admissions, data from the latest admission were preferentially included, and the laboratory data included were the results of the first blood tests collected at admission, before the patient started formal treatment. The endpoint of our observation was the time of discharge of the patient, and we stopped observation if the patient died during this period. The inclusion criteria were as follows: (1) HIV infection determined by positive enzyme-linked immunosorbent assay (ELISA) and confirmatory western blotting; (2) confirmed T. marneffei infection: T. marneffei was isolated and cultured from blood, skin tissue, bone marrow, lymph nodes, and/or other bodily fluid samples (mycelia at 25°C and yeast-like structures at 37°C), in compliance with the diagnostic criteria. Patients with a complete absence of laboratory results were excluded from the analysis. The study design and grouping are shown in Fig 1.

The sample size was calculated using the standard formula for comparing two proportions:

n = \frac{\left( Z_{\alpha}\sqrt{2pq} + Z_{\beta}\sqrt{p_{0}q_{0} + p_{1}q_{1}} \right)^{2}}{\left( p_{1} - p_{0} \right)^{2}}

where Z_α represents the standard normal distribution bound; α was set as 0.05, so Z_α was set as 1.96, and Z_β = 1.282. Generally, the number in the exposed group was designed to equal the number in the control group. According to data previously reported in the literature, the mortality rate of AIDS patients without comorbid T. marneffei infection was p_0 = 0.076 and the mortality rate of AIDS patients with comorbid T. marneffei infection was p_1 = 0.175 [5], with p = (p_0 + p_1)/2, q_0 = 1 - p_0, q_1 = 1 - p_1, and q = 1 - p. The sample size was chosen as 233 based on this equation. The numbers of cases in the two groups were 256 and 1671, respectively, which met the sample size requirement. We also collected as many samples as possible, beyond the minimum sample size requirement, to ensure statistical efficacy; in fact, all the samples we could find were included.
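As a quick arithmetic check, the formula above reproduces the reported minimum group size; a minimal sketch, with all input values taken from the text:

```python
from math import sqrt

# Two-proportion sample-size calculation; all inputs are quoted in the text
z_alpha, z_beta = 1.96, 1.282
p0, p1 = 0.076, 0.175              # mortality without / with T. marneffei
p = (p0 + p1) / 2                  # pooled proportion
q = 1 - p
n = (z_alpha * sqrt(2 * p * q)
     + z_beta * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2 / (p1 - p0) ** 2
print(round(n))                    # -> 233, the reported minimum per group
```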
Definitions of various complications and coinfections

Fever was defined as a single oral temperature ≥38.3°C (armpit temperature ≥38.0°C), or an oral temperature ≥38.0°C (armpit temperature ≥37.7°C) lasting more than 1 hour. The diagnosis of pneumonia includes bacterial pneumonia, viral pneumonia, pulmonary mycosis (including Pneumocystis pneumonia), and pneumonia caused by other factors, but does not include pulmonary tuberculosis pneumonia, which is classified as tuberculosis [25]. Anemia was defined as hemoglobin below 120 g/L in males and below 110 g/L in females [26]. The definition of meningitis includes purulent meningitis, cryptococcal meningitis, and viral meningitis, but does not include tuberculous meningitis, which is classified as tuberculosis [27]. Coinfections were confirmed according to the diagnostic criteria for chronic hepatitis (hepatitis B or hepatitis C) and oral candida infection found in infectious disease references [28]. The diagnostic criteria for the remaining complications or coinfections were defined based on the standards of Internal Medicine [26].

Study outcomes

The patients were classified into two groups according to outcome at discharge: the good outcome (survival) group and the bad outcome (death) group.

Model construction and validation

The patients were randomly split into two datasets: a training cohort (70% of patients), which was used to train the four machine learning models and tune their parameters, and a testing cohort (30% of patients), which was used to test the models and fine-tune the hyperparameters. We used bootstrapping as an internal verification method, with 2,000 trials of random sampling, for four machine learning classifiers (logistic regression, eXtreme Gradient Boosting (XGBoost), K-nearest neighbors (KNN), and support vector machine (SVM)) to generate four models for the prediction of outcome.

Performance evaluation

Model performance was assessed according to sensitivity, specificity, accuracy, the area under the receiver operating characteristic curve (AUC), and other learning metrics (the F1 score, mean average precision (mAP), and the recall-precision (RP) curve). The best-performing model based on a combination of performance evaluation metrics was used as the final model.

Feature importance

For clinical complications/coinfections, the variables with p < 0.05 were selected after Pearson's chi-square test. The laboratory measures with p < 0.05 were selected after the t-test or Mann-Whitney U test. To determine the major predictors of study outcome in our patient population, the permutation importance of each feature was measured from the final model. Information gain ranking was used to evaluate the worth of each variable by measuring the entropy gain with respect to the outcome. The importance of each feature was quantified by calculating the decrease in the model's performance after permuting its values; the higher this value, the more influential the feature. According to the information gain ranking criteria for this study, we calculated the feature importance of all the variables.

Statistical analysis

Categorical variables are reported as counts (%), and continuous variables are reported as means (SDs) or medians (IQRs). Normality of distributions was verified with the Kolmogorov-Smirnov test.
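A minimal sketch of the 70/30 split and XGBoost training described above, assuming a flat table of features plus a binary outcome column; the file and column names are hypothetical placeholders, not the authors' actual code:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

df = pd.read_csv("talaromycosis_cohort.csv")   # hypothetical input file
X = df.drop(columns=["outcome"])               # clinical + laboratory features
y = df["outcome"]                              # 1 = death, 0 = survival

# 70% training / 30% testing, stratified on the (imbalanced) outcome
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = XGBClassifier(eval_metric="logloss").fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```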
We used the t-test to assess differences between parametric continuous variables, and the Mann-Whitney U test for nonparametric variables. Categorical variables were analyzed using the chi-square test or Fisher's exact test. No correction for multiple testing was performed. A two-sided p < 0.05 was considered statistically significant. All analyses were performed with the Statistical Package for the Social Sciences (SPSS) version 24.0 (SPSS Inc., Chicago, IL, USA) and Anaconda 3 (Python v 3.8.5).

General characteristics of study participants

In all, 1927 eligible patients with talaromycosis were included in this study between January 2012 and June 2019, and the outcome at the time of hospital discharge was defined as death (n = 256) or survival (n = 1671). The general characteristics of the patients are summarized in S1 Table. The median age of the 1927 patients with talaromycosis was 43 years (range: 18-86 years). In total, 82.3% (1585/1927) of patients were male, 59.5% (1147/1927) were of Han nationality, 59.5% (1146/1927) were married, 55.1% (1061/1927) were farmers, and the median length of hospital stay was 20 (IQR: 11-28) days. Significant differences in baseline characteristics were identified between the survival and death groups in nationality, marital status, occupation, and length of hospital stay (p < 0.05).

The mortality of talaromycosis among hospitalized HIV/AIDS patients from 2012 to 2019

Among the 1927 admitted patients, the total average in-hospital mortality of talaromycosis was 13.3% (256/1927) over the study period.

We assessed and compared the median levels of some essential indicators in the two groups of patients. The deceased patients had higher levels of urea, uric acid, phosphorus (P), chlorine (Cl), serum cystatin C (Cys-C), red blood cell distribution width (RDW-CV), and platelet distribution width (PDW), as well as lower CD8+ T-cell counts and lower levels of triglycerides (TG), total cholesterol (CHOL), and platelets. In particular, the PLT level in surviving patients (131 × 10^9/L) was more than twice that in deceased patients (64.5 × 10^9/L), as detailed in Fig 3B. Other features of concern include elevated levels of aspartate aminotransferase and lactate dehydrogenase and elevated white blood cell counts; the specific comparison is shown in S3 Table.

Discrimination of the four machine learning prediction models

The four prediction models, constructed from the top 15 most important variables, had different predictive performances. Logistic regression had an AUC of 0.72 in the training cohort and 0.80 in the internal validation cohort. We also tested the KNN model (training/testing: AUC = 0.85/1.00, sensitivity = 1.00/0.60, specificity = 1.00/0.95) and SVM (AUC = 0.91/0.70, sensitivity = 0.82/0.47, specificity = 1.00/0.94) to predict patient outcome. The KNN model showed the worst discrimination ability and exhibited overfitting. In contrast, the XGBoost model showed the best discrimination ability: it yielded an AUC of 0.98 in the training data, with a sensitivity of 0.71 and specificity of 0.99 when using a score of 0.5 as the cutoff value. In validation on the testing set, the sensitivity of the model was 0.69 and its specificity 0.96, indicating that the model had reliable predictive ability. The ROC curves for the training and testing data of the four models are shown in Fig 4A and 4B; the ROC curve results of the XGBoost model were the most favorable.
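The four classifier families can be compared on the same split in a few lines; a sketch reusing X_tr, X_te, y_tr, and y_te from the snippet above:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),   # probability=True enables predict_proba
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```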
In highly imbalanced data (where positive samples are few), PR curves may be more informative than ROC curves. If, after fitting, the PR curve of model A completely encloses that of model B, it can be asserted that A outperforms B; if the curves cross, a comparison can be made based on the size of the area under the curve. Precision and recall sometimes appear contradictory, so they need to be considered together; the most common combined measure is the F-measure (also known as the F-score, F1). Combined with the RP plot (Fig 4E and 4F), F1 summarizes the precision and recall results; the larger F1 is, the better the model's performance can be assumed to be. Although the training-set F1 values of the KNN and SVM models were high, they did not recognize the imbalanced data well in actual operation; the KNN model output only three correct positive cases. Regarding the RP curve as a measure of performance, XGBoost showed the strongest comprehensive results of the four models, with an F1 value greater than 0.70. Considering both aspects, XGBoost is the better prediction model for this study. The effectiveness of the four models is summarized in Table 1.

Explanatory assessment of model stability

To better investigate the predictive significance of the XGBoost model and guide specific practice, we introduced the SHAP value to describe the impact of features on the outcome. For each predicted sample, the model generates a predictive value, and the SHAP value is the value assigned to each feature in that sample, reflecting each feature's impact, whether positive or negative. As seen in Fig 5, septic shock and respiratory failure were the two most important features. They were essentially positively correlated with death and the most closely related to it; patients exhibiting both features had a greatly increased risk of death compared with those who did not. Uric acid, urea, RDW-CV, Cys-C, BUN/CREA, PDW, and P also significantly affected death: the higher the values of these features, the higher the risk of death. Conversely, the smaller the values of chlorine, total cholesterol, platelets, and calcium, the higher the risk of death, especially for platelet and total cholesterol levels. For the AST/ALT ratio, the risk of death tended to increase when the level decreased slightly. The contribution of the CD8+ T-cell count to the outcome was predominantly negative, and this was more pronounced when CD8+ T-cell counts exceeded a certain level. The impact of these indicators on the prediction results can also be verified by analyzing the misclassified cases. Fig 4C and 4D show the confusion matrices of the model: 1149 of 1348 patients in the training set were correctly predicted as negative cases (survived), 134 patients were correctly predicted as positive cases (died), and a total of 65 patients were misclassified (ACC = 95%). Of the 579 patients in the test set, 541 were correctly predicted and 38 were misclassified (ACC = 93%).
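The SHAP attribution described above can be reproduced in outline with the shap package; a sketch, assuming the model and X_te objects from the earlier snippets:

```python
import shap

# TreeExplainer is the standard choice for gradient-boosted tree models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)

# Beeswarm summary: one point per patient per feature, coloured by the
# feature's value; features are ordered by mean absolute SHAP value
shap.summary_plot(shap_values, X_te)
```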
Among the misclassified cases in both the training and test sets, those whose actual prognosis was survival but who were misclassified as death (FP) had a higher prevalence of respiratory failure and shock than patients who were correctly judged to be alive (survival). Conversely, those whose actual prognosis was death but who were judged to be alive (FN) had a lower prevalence of both respiratory failure and shock than patients who were correctly judged to be dead (death) (Fig 6A and 6B, p < 0.05). Relative abnormalities in the indices of misclassified patients were also reflected in the laboratory characteristics. For example, higher urea levels and AST/ALT ratios and lower platelet levels were observed in patients classified as FP (compared with those correctly judged as survivors), and the opposite was true for patients classified as FN when these values were compared with those of patients who died (Fig 6C and 6D).

Discussion

We conducted a cohort study with a large sample size and obtained the latest in-hospital mortality rate of T. marneffei infection among HIV/AIDS inpatients in southern China. The number of talaromycosis patients among HIV/AIDS admissions and their in-hospital mortality decreased from 45 and 18.4% in 2012 to 13 and 12.9% in the first half of 2019, respectively. Pneumonia, oral candidiasis, tuberculosis, and hypoproteinemia were common complications/coinfections in HIV/AIDS patients with T. marneffei infection, a finding similar to the results of Pang et al [29]. In this study, we used data on 1927 HIV/AIDS patients with T. marneffei coinfection at the time of admission to develop and test a machine learning-based model to predict the risk of death during hospitalization. Our XGBoost prognostic model exhibited good discrimination for the prediction of death during hospitalization. At the clinically meaningful cutoff value of 0.5, sensitivity was approximately 70% for both the training and test sets, with high specificity. There was no substantial decrease in model performance between the training data and test validation, which should allay most concerns about overfitting of the training data. Finally, the SHAP value of each feature provides, for each patient, a robust view of the trade-offs underlying the predicted occurrence of a mortality event. Specifically, septic shock and respiratory failure were the most important variables affecting death, and we also considered serum uric acid, urea, platelet, and AST/ALT levels to be relatively important variables. The results of a prognostic model developed to predict outcomes in patients with HIV-associated tuberculosis were recently published [30]. Accurate prediction of patient death after coinfection with HIV/AIDS and T. marneffei still represents an unmet need. Our previous study developed a simple-to-use nomogram for predicting the survival of hospitalized HIV/AIDS patients [31]; however, it did not involve laboratory measures, so it is not an optimally comprehensive evaluation of the specific conditions of patients. Thuy Le developed a prognostic model using Bayesian logistic regression to identify predictors of death [32]. In general, the value of models for the prognostic evaluation of T. marneffei-infected populations using available data is increasingly recognized as a very economical means of aiding clinical practice, but thus far there is a lack of relatively well-developed studies with large sample sizes and, especially, well-performing predictive models.
Our XGBoost predictive model offers relatively high accuracy in detecting the risk of in-hospital death in a population in which 28.7% of patients (553/1927) were treated with current standard ART therapies during the study period. There is growing evidence that respiratory failure, shock, urea levels, and platelet levels significantly impact adverse outcomes such as death. A study in Vietnam found that urea levels were higher in fatal cases of HIV/AIDS complicated by T. marneffei infection than in nonfatal cases, and that dyspnea was an independent predictor of in-hospital mortality [2].

(Fig 6 legend: Survival, correctly classified as alive; Death, correctly classified as dead; FN, actual prognosis was death but classified as alive; FP, actual prognosis was survival but misclassified as death.)

Not coincidentally, another article reported that both respiratory difficulty and a lower platelet count predict poor in-hospital outcome [33]. Infectious shock accounts for 10.2% of the total causes of death among HIV patients with T. marneffei infection at Beijing Ditan Hospital, ranking fourth [34]. Septic shock and respiratory failure are often manifestations of a patient's progression to cachexia. Patients with combined respiratory failure and shock are often clinically classified as high-risk patients, which also indicates that the prognosis for these patients may be relatively poor; in other words, they are more likely to die. Our study found that both were indicators of poor prognosis.

We ranked the contributions of all the independent variables; the AST/ALT ratio was the highest in the feature contribution ranking, and we found that patients who died had significantly higher AST/ALT ratios than those who survived (3.07 versus 1.96). A previous study has shown an elevated AST/ALT ratio in talaromycosis patients [33]. Two other studies also showed abnormal changes in AST or ALT levels in HIV/AIDS patients with talaromycosis [35,36]. In fact, studies of other fungal infections have found this phenomenon as well: one study suggested that the mean ratio of AST to ALT in patients with disseminated histoplasmosis (a fungal disease) was higher than in localized pulmonary disease and other endemic mycoses [37]. As is known, ALT is primarily distributed in the liver, kidneys, heart, and skeletal muscle, while AST is primarily distributed in the heart, liver, skeletal muscle, and kidneys. Given our results, the AST/ALT ratio may be a predictor of death. Nevertheless, talaromycosis is a disseminated disease, and the exact site of damage, the cause of the AST and ALT changes, and the underlying biological mechanism in talaromycosis deserve further research. Similarly, the association of platelets with poor in-hospital outcome in talaromycosis has been reported previously. The platelet level in the deceased group (64.5 × 10^9/L) was less than half that in the survival group (131 × 10^9/L). The lower the platelet level, the more likely the patient is to bleed and develop coagulation disorders, which is also consistent with the results of the misclassification analysis: the higher the urea level, the lower the platelet level, and the higher the AST/ALT ratio, the more likely a surviving patient was to be judged deceased, and conversely, patients at high risk of death could be judged to be alive.
Therefore, it is valuable to clarify the significance of these indicators for death in order to correctly identify and predict the prognosis of patients. The combination of chloride, calcium, and phosphorus levels points to the electrolyte status of the body, which may indicate electrolyte disturbances in patients at high risk of death. Cys-C levels, BUN/CREA, PDW, and RDW-CV are less clinically prominent and may receive less attention, but they are also essential for model prediction. We also note that deceased patients showed higher CD4/CD8 ratios; from the data in S2 Table we can clearly see that the median CD4+ T-cell count was 22 in surviving patients and 21 in deceased patients (p > 0.05), while the median CD8+ T-cell counts were 271 and 215, respectively (p < 0.001). This shows that the higher CD4/CD8 ratio underlying patient death was driven by a lower CD8+ T-cell count, which suggests that the CD8+ T-cell count is also important and that focusing on the CD4+ T-cell count alone may not be enough to avoid death. This brings us to the question of how to reduce deaths in the population of HIV/AIDS patients with T. marneffei infection, which usually means changing the method of treatment, including changing drugs, choosing and changing the timing of treatment, and adjusting recommended drug dosages. Notably, patients with T. marneffei infection share many clinical symptoms with patients who have other infections, which makes early diagnosis of talaromycosis difficult; special clinical attention therefore needs to be paid to early diagnosis, and the earlier the diagnosis, the more deaths can be prevented.

This study attempted to build four machine learning-based prognostic prediction models for HIV/AIDS patients with talaromycosis during hospitalization. Our XGBoost model stems from the exploration of 15 variables that are routinely assessed during the management of patients admitted to the hospital, to identify which factors are more predictive of death in talaromycosis patients. This machine learning prediction model can help clinicians reduce talaromycosis deaths to some extent. We remind clinicians to differentially diagnose symptoms caused by other opportunistic infections such as tuberculosis, which is clinically and radiologically similar to T. marneffei infection, to mitigate pneumonia arising from concurrent tuberculosis while treating pneumonia caused by T. marneffei, and then to apply targeted treatment to reduce deaths.

Although this is, to our knowledge, the only study to propose an in-hospital machine learning-based mortality prediction model for HIV/AIDS patients with T. marneffei infection in such a large sample of patients in China, our research should be interpreted considering some limitations. First, for various objective reasons, we unfortunately could not obtain more comprehensive information, such as the time from onset to diagnosis, the antifungal treatment regimen, the time to fungal culture positivity, the types and number of other comorbidities, the identities and timing of antifungal treatments, delays in diagnosis after admission, the severity of coinfections, and the timing of antiretroviral therapy. Second, our data were from only one hospital, and there was no external validation dataset; however, this hospital is the largest HIV/AIDS treatment center in Guangxi Province, and the model we built can guide mortality prediction in this hospital. It remains a limitation that we did not have external validation data.
Our model was validated internally and maintained a good and stable level of discrimination for the explored outcome. Finally, the data we used are cross-sectional, but the data can be updated in real time when the model is truly applied in the clinic, and further efforts will have to continue to increase the sample size. In conclusion, we have developed and tested an XGBoost predictive model, a machine learning-based tool to predict the risk of death. This study showed that the machine learning-based approach in this setting is feasible and effective, with potentially significant application in mortality prediction in the HIV/AIDS population with talaromycosis.
2022-05-05T06:20:34.457Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "1a3c24059578f5da6446bcd8904b659ce2fc0f59", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "875125c9f051b5e1754ef254b04baecb85585111", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
196687946
pes2o/s2orc
v3-fos-license
The phylogenetic range of bacterial and viral pathogens of vertebrates

Many major human pathogens are multihost pathogens, able to infect other vertebrate species. Describing the general patterns of host–pathogen associations across pathogen taxa is therefore important to understand risk factors for human disease emergence. However, there is a lack of comprehensive curated databases for this purpose, with most previous efforts focusing on viruses. Here, we report the largest manually compiled host–pathogen association database, covering 2,595 bacteria and viruses infecting 2,656 vertebrate hosts. We also build a tree for host species using nine mitochondrial genes, giving a quantitative measure of the phylogenetic similarity of hosts. We find that the majority of bacteria and viruses are specialists infecting only a single host species, with bacteria having a significantly higher proportion of specialists compared to viruses. Conversely, multihost viruses have a more restricted host range than multihost bacteria. We perform multiple analyses of factors associated with pathogen richness per host species and the pathogen traits associated with greater host range and zoonotic potential. We show that factors previously identified as important for zoonotic potential in viruses—such as phylogenetic range, research effort, and being vector-borne—are also predictive in bacteria. We find that the fraction of pathogens shared between two hosts decreases with the phylogenetic distance between them. Our results suggest that host phylogenetic similarity is the primary factor for host-switching in pathogens.

An important biological factor that is likely to limit pathogen host-switching is the degree of phylogenetic relatedness between the original and new host species. For a pathogen, closely related host species can be considered akin to similar environments, sharing conserved immune mechanisms or cell receptors, which increases the likelihood of pathogen "preadaptation" to a novel host. Barriers to infection will depend on the physiological similarity between original and potential host species (Poulin & Mouillot, 2005), a factor that can depend strongly on host phylogeny. Indeed, the idea that pathogens are more likely to switch between closely related host species has been supported by studies in several host-pathogen systems (Davies & Pedersen, 2008; Faria, Suchard, Rambaut, Streicker, & Lemey, 2013; Streicker et al., 2010; Waxman, Weinert, & Welch, 2014). The likelihood of infection of a target host has also been found to decrease as a function of phylogenetic distance from the original host in a number of experimental infection studies (Gilbert & Webb, 2007; Longdon, Hadfield, Webster, Obbard, & Jiggins, 2011; Perlman & Jaenike, 2003; Russell et al., 2009). Nevertheless, there are also numerous cases of pathogens switching host over great phylogenetic distances, including within the host-pathogen systems mentioned above. For example, a number of generalist primate pathogens are also capable of infecting more distantly related primates than expected (Cooper et al., 2012). Moreover, for zoonotic diseases, a significant fraction of pathogens have host ranges that encompass several mammalian orders, and even nonmammals (Woolhouse & Gowtage-Sequeria, 2005). Interestingly, host jumps over greater phylogenetic distances may lead to more severe disease and higher mortality (Farrell & Davies, 2019).
One factor that could explain why transmission into more distantly related new hosts occurs at all is infection susceptibility; some host clades may simply be more generally susceptible to pathogens (e.g., if they lack broad resistance mechanisms). Pathogens would therefore be able to jump more frequently into new hosts in these clades regardless of their phylogenetic distance from the original host. In support of this, experimental cross-infections have demonstrated that sigma virus infection success varies between different Drosophila clades (Longdon et al., 2011), and a survey of viral pathogens and their mammalian hosts found that host order was a significant predictor of disease status (Levinson et al., 2013).

While an increasing number of studies have described broad patterns of host range for various pathogens (see Table 1), most report only crude estimates of the breadth of host range. There have been few attempts to systematically gather quantitative data on pathogen host ranges. As noted by Bonneaud, Weinert, and Kuijper (2019), most work on pathogen emergence has focused on viruses, since these are the cause of many high-profile outbreaks (e.g., Ebolavirus), but it is plausible that the processes underlying emergence may be different in bacterial pathogens. We thus have little understanding of the overall variation in host range both within and amongst groups of pathogens. This has limited our ability to examine how pathogen host range correlates with the emergence of infectious diseases. Here, we address this gap in the literature by considering both bacterial and viral pathogens in the same data set.

In calculating host range, rather than using a taxonomic proxy for host genetic similarity (e.g., broad animal taxonomic orders: Kreuder Johnson et al., 2015; McIntyre et al., 2014; Woolhouse & Gowtage-Sequeria, 2005; or mammalian host order: Han, Kramer, & Drake, 2016; see Table 1), we use a quantitative phylogenetic distance measure. Such an approach has been used in a recent set of papers (Albery, Eskew, Ross, & Olival, 2019; Guth, Visher, Boots, & Brook, 2019; Olival et al., 2017), which all use an alignment of the mitochondrial gene cytochrome b to build a maximum likelihood tree constrained to the order-level topology of the mammalian supertree (Fritz, Bininda-Emonds, & Purvis, 2009). As noted by Albery et al. (2019), this mammalian supertree has limited resolution at the species tips. Here, we therefore extend this approach in two ways: (a) we use a concatenated alignment of nine mitochondrial genes rather than one; and (b) we also consider nonmammalian vertebrates (see Section 2). We believe this represents the most extensive vertebrate host tree built to date, and it should give the most precise quantitative measure of the true genetic similarity between hosts.

To summarise, here we combine a systematic literature review of bacterial and viral host ranges with a mitochondrial multigene host phylogeny (Figure 1). We compiled a database of 2,595 bacteria and viruses which infect 2,656 vertebrate host species. Our quantitative analysis represents by far the most comprehensive picture of known host-pathogen associations.
Pathogen species

We focused on bacteria and viruses, as taken together they are the pathogen groups responsible for the majority of the burden of communicable disease in humans: consider the combined contribution of HIV/AIDS (viral), tuberculosis (bacterial), diarrheal diseases (predominantly bacterial and viral), lower respiratory diseases (predominantly bacterial and viral), and neonatal diseases (predominantly bacterial).

Pathogen metadata

We collected further metadata for each pathogen species. Where available, we used the ncbi Genome Report for a species (last downloaded: 12 March 2019) to include the mean genome size, number of genes, and GC content. We also annotated each pathogen for the presence of known invertebrate vectors (i.e., whether it can be vector-borne). For bacteria, we additionally included information on Gram stain, motility, spore formation, oxygen requirement, and cellular proliferation. These traits were collated primarily from the GIDEON Guide to Medically Important Bacteria (Berger, 2016), but where information was missing we also searched the primary literature. For viruses, we also included Baltimore classifications from the ICTV Master Species List (ICTV, 2015). For the analysis including host traits, for direct comparison with Olival et al. (2017), we used their data set of the number of disease-related publications for species (Olival et al. searched ISI Web of Knowledge and PubMed using the scientific binomial AND the topic keywords: disease* OR virus* OR pathogen* OR parasit*).

Pathogen-host interactions

Our literature search was designed to be as exhaustive and systematic as possible (Figure 1a). We used Google Scholar to verify whether each bacterial or viral species was associated with a human or vertebrate animal host. Search terms consisted of the pathogen species name and the keywords: "infection", "disease", "human", "animal", "zoo", "vet", "epidemic" or "epizootic". At least one primary paper documenting the robust interaction (i.e., infection) of the bacterial or viral species with a host species needed to be found in our search for the association to be included in our database. In addition, several reputable secondary sources were used to further validate the identified pathogen-host interactions: the GIDEON Guide to Medically Important Bacteria (Berger, 2016); the Global Mammalian Parasite Database (Nunn & Altizer, 2005); and the Enhanced Infectious Diseases Database (EID2; Wardeh et al., 2015; eid2.liverpool.ac.uk/). We aimed to manually read all publications found with the Google Scholar keyword searches. However, as some pathogen species are extremely well-studied and manual review of all returned publications was not possible, we decided to read only the first ten pages of search results ordered by "relevance" (equivalent to a limit of 200 publications). Species with >200 results tend to be either well-studied pathogens (e.g., "Mycobacterium tuberculosis" + "infection": 62,900 results in 2016) or species with prolific host ranges (e.g., "Chlamydia psittaci" + "infection": 19,700 results in 2016). For these species we cannot claim to have captured all known hosts with our manual review; i.e., we may not have documented every single host species the pathogen has been recorded as infecting.
However, we are confident that we managed to reasonably approximate the full taxonomic breadth of host range, since the first 10 pages of results for these well-studied pathogens usually contained specialized review papers listing the vertebrate host species in which infections had been documented.

The majority of bacterial and viral pathogens in our database are known to cause disease symptoms in at least one of their host species. However, in order to be as comprehensive as possible, we considered as a pathogen any species for which there was any evidence of symptomatic adverse infection under natural transmission conditions (even if rare), including: cases where the relationship with the host is commonly asymptomatic, cases where the relationship is only symptomatic in neonatal or immunocompromised individuals, and cases where only a single case of infection had been recorded to date. Cases of deliberate experimental infection of host species were excluded from our database, as we judged that these did not constitute natural evidence of a host-pathogen association. A minority of bacterial and viral species in our database have not, to date, been shown to cause any infectious symptoms in the host species they naturally infect. However, characterizing symptoms in wild animal populations is difficult. Furthermore, these pathogens are often very closely related to pathogens which are definitely known to cause disease in the same hosts. For example, Corynebacterium sphenisci was isolated in a single study from apparently healthy wild penguins (Goyache et al., 2003) but is related to species which are pathogens across vertebrate hosts, e.g., Corynebacterium pseudotuberculosis, the causative agent of lymphadenitis. Therefore, we included all species apart from bacteria and viruses which we considered to be clearly nonpathogens, i.e., well-studied commensal or mutualistic examples such as Lactobacilli in the human microbiome (Walter, 2008). Important invertebrate vector species were also documented in our database, but our main analysis was restricted to vertebrate hosts.

Host species

The taxonomic status of each host species identified in the primary literature was brought up to date by identifying the current taxonomically valid species name using the ITIS Catalogue of Life (Rosokov et al., 2016) and the ncbi Taxonomy Database (www.ncbi.nlm.nih.gov/taxonomy). In some cases, hosts were not identified to the species level, but they were retained in our database if they were identified to the family/order level and there were no other host species from the same family/order infected by the same pathogen species. In other cases, hosts were identified to the subspecies level (e.g., Sus scrofa domesticus) if these subgroups were economically and/or sociologically relevant. The full compiled database contained 13,671 associations (Figure 1a), including invertebrate hosts (n = 305) as well as vertebrates (n = 2,913). However, we restricted our host-relatedness analysis to vertebrates for which we could construct a mitochondrial gene phylogeny (Figure 1b).

Definition of zoonosis

We classified a pathogen as zoonotic if it infected both humans and additional vertebrate animals, including pathogens shared with other hosts but not known to be naturally transmissible among different host species. This differs from the WHO's definition of zoonotic: "any disease or infection that is naturally transmissible from vertebrate animals to humans and vice-versa" (WHO, 2019).
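Operationally, this classification reduces to a simple rule over each pathogen's set of vertebrate hosts; a toy sketch (host names illustrative):

```python
def is_zoonotic(vertebrate_hosts):
    """Zoonotic = infects humans plus at least one other vertebrate."""
    return "Homo sapiens" in vertebrate_hosts and len(vertebrate_hosts) > 1

print(is_zoonotic({"Homo sapiens", "Bos taurus"}))  # True
print(is_zoonotic({"Homo sapiens"}))                # False: human-only
```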
Our definition includes species that mostly infect their various hosts endogenously or via the environment (i.e., opportunistic pathogens), such as species in the bacterial genus Actinomyces. We chose this definition based on the observation that many new infectious diseases occur through cross-species transmission and subsequent evolutionary adaptation; furthermore, such pathogens could also evolve to become transmissible between host species. We did not classify a bacterial or viral species as zoonotic if it had only been recognized outside of human infection in invertebrate hosts.

Host phylogeny

To infer a phylogenetic tree for all 3,218 vertebrate and invertebrate species, we relied on nine mitochondrial genes: cox2, cytb, nd3, 12s, 16s, nd2, co3, coi, and nadh4. Our strategy was as follows. First, we collected mitochondrial genes for species that had mitochondrial gene submissions present in the ncbi database. For species without a mitochondrial gene submission but with a whole genome present, we extracted the genes by blasting the genes of a taxonomically closely related species and then extracting the gene from the resulting alignment. If no mitochondrial gene or whole genome submissions were available, we used the ncbi taxonomy to approximate the species using a closely related species (using either available genes or sequences extracted from genomes). Using this strategy and some manual filtering, we were able to obtain mitochondrial gene sequences for 3,069 species (including invertebrates). We merged these genes into their distinct orthologous groups (OGs) using OMA (Altenhoff et al., 2018). We used the nine largest OGs, corresponding to our expected nine genes, as the basis for alignment to ensure that alignment was conducted on high-quality related sequences. We aligned sequences for each OG separately using mafft (version 7) with the options '--localpair --maxiterate 1000' (Katoh & Standley, 2013). We then used maxalign to filter the alignments, concatenated all OGs, and inferred the phylogenetic tree using iqtree (version 1.5.5) with the option '-bb 1000' and the HKY + R10 model, as identified by ModelFinder as part of the IQ-TREE run (Hoang, Chernomor, von Haeseler, Minh, & Vinh, 2018; Nguyen, Schmidt, von Haeseler, & Minh, 2015).

We observed a strong similarity between cophenetic distances for the 551 mammals in our multigene mitochondrial tree that are also included in the cytb tree produced by Olival et al. (2017) and used by Guth et al. (2019), but there were some discrepancies (Figure S1). We did not investigate these further, but suspect they may have stemmed from our not constraining the phylogeny to an existing order-level topology; that is, our phylogeny represents host genetic distances inferred solely from available mitochondrial gene sequences, agnostic to any other information. However, this tree appeared to be globally highly consistent with ncbi taxonomic ordering, with only a small minority of species disrupting the monophyly of groups (n = 93, 3.1%). The apparently incorrect placement of these species could have several possible explanations, including mislabelling in the database, poor sequence quality, or problems with the tree inference. After pruning the tree to include only vertebrate species (n = 2,656, Figure 1c), a reduced fraction of species disrupted the monophyly of groups (n = 40, 1.5%). The analyses presented in the main text include species which disrupted monophyly.
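Given the pruned vertebrate tree, a pathogen's phylogenetic host breadth (PHB) reduces to pairwise cophenetic (patristic) distances among its hosts; a minimal sketch using Biopython, with the tree file and host names as placeholders:

```python
from itertools import combinations
from Bio import Phylo

tree = Phylo.read("vertebrate_hosts.nwk", "newick")    # placeholder tree file
hosts = ["Homo_sapiens", "Sus_scrofa", "Canis_lupus"]  # hosts of one pathogen

# Sum of branch lengths between each pair of host tips
dists = [tree.distance(a, b) for a, b in combinations(hosts, 2)]
print("PHB mean:", sum(dists) / len(dists))
print("PHB max:", max(dists))
```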
We found that the mean PHB for a pathogen (i.e., the average inter-host distance) was correlated with the maximum PHB (i.e., the largest inter-host distance) (Figure S2), so there was little practical difference in using either measure. For simplicity, PHB refers to the mean PHB unless otherwise stated. In fitting generalized additive models to predict zoonotic potential, models included different PHB quantities (see Section 2.8.2).

Statistical analysis

Code for all analyses is available in our code and data repository (https://github.com/liampshaw/Pathogen-host-range). Here, we give a brief overview of our statistical methods.

Descriptive analyses of pathogen traits and host range

To summarise the data set and make the most use of the traits we manually compiled, we performed separate analyses of different pathogen traits and their associations with host range using a specialist/generalist distinction. We used chi-squared tests to assess whether viruses and bacteria differed in their proportion of specialist pathogens or vector-borne pathogens. We used Wilcoxon rank sum tests to compare the distributions of GC content and genome size for specialists and generalists within viruses and bacteria. We used chi-squared tests to compare the proportion of generalists within subsets of bacteria based on lifestyle factors: motility, cellular proliferation, spore formation, and oxygen requirement. Reported p-values are not corrected for multiple testing. These results are best viewed as exploratory; some of the conclusions were not retained when looking at the partial effects of predictors in a best-fit GAM, controlling for the effects of other variables.

Generalized additive models (GAMs)

We fitted GAMs to rank predictors using the approach of Olival et al. (2017), including sets of competing variables (e.g., different proxies for research effort, although in practice these metrics are highly correlated). Categorical and binary variables were fitted as random effects for each level of the variable.

(Table 2 note: A "specialist" pathogen infects only a single host, a "generalist" more than one. Generalists are categorised according to whether their hosts are within the same family (e.g., Bovidae), the same order (e.g., Artiodactyla), or across orders. Percentages are of the total pathogen species of each type (bacteria or virus).)

Host traits associated with greater pathogen richness

We used a data set of host traits for terrestrial wild mammal species compiled by Olival et al. (2017) to predict pathogen richness (bacterial and viral) per species. These host traits included a phylogenetic eigenvector regression (PVR) of body mass. As Olival et al. (2017) collected information for an analysis of only viral pathogens, there was better overlap for viruses in our data set (n = 613 wild mammal hosts) than for bacteria (n = 274).

Zoonotic potential

For bacteria, GAMs could include terms for host range (mean, median, or maximum PHB), research effort (ncbi pubmed, nucleotide, or sra results), motility, sporulation, being vector-borne, oxygen requirement, and Gram stain. We excluded cellular lifestyle (intra/extracellular) as a predictor due to low numbers, and excluded pathogens of unknown motility (n = 50) or sporulation (n = 17). For viruses, GAMs could include terms for host range, research effort, genome size (number of proteins and length), being vector-borne, and genome type (Baltimore classification). We excluded pathogens with unknown genome size (n = 253).
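A hedged sketch of how such a zoonotic-potential GAM might be set up in Python with pyGAM; the original analysis follows Olival et al. (2017), and the predictors and data below are synthetic placeholders rather than the study's variables:

```python
import numpy as np
from pygam import LogisticGAM, s, f

rng = np.random.default_rng(0)
n = 500
phb = rng.uniform(0, 1, n)       # mean phylogenetic host breadth
effort = rng.uniform(0, 4, n)    # log10 research effort (e.g., PubMed hits)
vector = rng.integers(0, 2, n)   # vector-borne indicator (0/1)

# Toy binary outcome loosely increasing with all three predictors
logit = 2 * phb + 0.5 * effort + vector - 3
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([phb, effort, vector])
gam = LogisticGAM(s(0) + s(1) + f(2)).fit(X, y)  # smooths + a factor term
gam.summary()
```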
We observed structure in some partial effect residuals in the best-fit GAMs: research effort for bacteria (Figure 5b) and host range for viruses (Figure 5d). This structure was driven by pathogen taxonomy, with families (orders) for bacteria (viruses) having different zoonotic potential; e.g., the Staphylococcaceae contain a high proportion of generalists. Attempts to include taxonomy as a categorical predictor produced best-fit models which excluded all lifestyle factors (not shown), although host range and research effort were still the strongest predictors.

| Shared pathogen analysis

If we denote the set of pathogens seen at least once in a host taxon $a$ as $p_a$ (where the taxon could be a species, genus, family, etc.), we define the fraction of shared pathogens between two taxa $a$ and $b$ as

$$f(a, b) = \frac{|p_a \cap p_b|}{|p_a \cup p_b|}.$$

Note that this definition is symmetric in $a, b$. It can therefore be compared with the (mean) phylogenetic distance between taxa using a Mantel test to determine the correlation. Another property we consider is the fraction of the pathogens seen in a host taxon which are also seen in a reference host species (e.g., humans). Taking the comparison of primates and humans as an example: in the database, the primates (excluding humans) are represented by 147 host species with 762 host-pathogen associations. The total number of pathogen species with at least one association with a primate species is $p_P = 222$. Of these, 158 are also seen in humans (the total number of pathogen species with at least one association with humans is $p_H = 1{,}675$). We can then define the fraction of pathogens of primates also seen in humans as

$$f = \frac{158}{p_P} = \frac{158}{222} \approx 0.71.$$

We used an illustrative sigmoidal fit of the form

$$y \sim A\left(1 + e^{k(x - x_0)}\right)^{-1}$$

separately for bacteria and viruses to model the relationship between $f$ and the mean phylogenetic distance of the order to humans.

| A comprehensive database of pathogen associations for vertebrates

Our database includes 12,212 associations between 2,595 pathogen species (bacteria and viruses) and 2,656 vertebrate host species.

| Specialist pathogens are the most common category

Over half of all pathogens infected only a single host species (n = 1,473, 56.8%; Table 2).

| Multihost viruses have a more restricted host range than multihost bacteria

Although the majority of pathogens infect just one host, and the total proportion of bacteria and viruses infecting multiple host orders was similar (30.1% vs. 33.7%), the distribution of generalists was significantly different between bacteria and viruses. Multihost viruses were more likely than bacteria to only infect a single host family (Table S1). This restricted host range of multihost viruses was also apparent in the distribution of mean phylogenetic host breadth (PHB) for multihost pathogens (Figure 2). Bacteria generally had a more positively skewed distribution of mean PHB compared to viruses (Figure 2; median 0.520 vs. 0.409, p < .001, Wilcoxon rank sum test). Notably, these distributions were both above the median maximum phylogenetic distance between hosts from the same order, which was 0.323. The observation that bacteria had a more positively skewed distribution of mean PHB was reproduced when subsampling to exclude human hosts, for both domestic and non-domestic hosts (see code repository).

| Pathogen richness varies by host order

Observed pathogen richness varied at the level of host order (Figure 3). Considering only host species with an association with at least one bacterial species and one viral species, bacterial and viral richness were strongly correlated (Spearman's ρ = 0.57, p < .001). The proportions of these bacteria and viruses shared with humans were more weakly correlated (Spearman's ρ = 0.21, p < .001).
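The shared-pathogen fraction defined in the shared pathogen analysis above, its Mantel comparison with phylogenetic distance, and the illustrative sigmoidal fit can be sketched in R as follows; assoc (a binary host-order-by-pathogen incidence matrix) and phylo_dist (a matching matrix of mean interorder distances) are hypothetical inputs, and the Mantel implementation from the vegan package is one possible choice.

# Sketch: pairwise fraction of shared pathogens (symmetric in a, b).
shared_fraction <- function(assoc) {
  n <- nrow(assoc)
  f <- matrix(NA_real_, n, n, dimnames = list(rownames(assoc), rownames(assoc)))
  for (a in 1:n) for (b in 1:n) {
    pa <- which(assoc[a, ] > 0)
    pb <- which(assoc[b, ] > 0)
    f[a, b] <- length(intersect(pa, pb)) / length(union(pa, pb))
  }
  f
}

f <- shared_fraction(assoc)

# Correlation between sharing and phylogenetic distance via a Mantel test
library(vegan)
mantel(as.dist(1 - f), as.dist(phylo_dist))

# Illustrative sigmoidal fit of sharing with humans against distance to humans
# fit <- nls(f_human ~ A / (1 + exp(k * (dist_human - x0))),
#            start = list(A = 1, k = 5, x0 = 0.5))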
We used a data set of host traits for wild mammals previously compiled by Olival et al. (2017) to find predictors of total bacterial and viral richness within a species using GAMs (Figure 4). More than 60% of total deviance was explained by the best-fit GAMs (Table 3a,b for bacteria and viruses, respectively). The number of disease-related citations for a host species was the strongest predictor of the number of both bacterial and viral pathogens, accounting for ~80% of relative deviance.

FIGURE 3 Observed pathogen richness in mammals. Box plots of (a, c) the proportion of zoonotic viruses/bacteria and (b, d) total viral/bacterial richness per species, aggregated by order. Each point represents a host species, with the colour indicating the proportion of associations for a host species derived from observations of wild hosts (as opposed to in domestic or captive hosts of the same species). Lines represent medians, boxes the interquartile range for each order. Data from 3,869 host-virus associations and 2,653 host-bacteria associations. The ordering of mammals is the same as in Figure 1.

FIGURE 4 Host traits which predict total viral richness (top row) and bacterial richness (bottom row) per wild mammal species. Each plot shows the relative effect of the variable in the best-fit GAM accounting for the effects of other variables (see Table 3 for numerical values). Shaded circles represent partial residuals and shaded areas represent 95% confidence intervals around the mean partial effect. (a, f) Number of disease-related citations per host species, (b) phylogenetic eigenvector representation (PVR) of body mass, i.e., corrected for phylogenetic signal, (c, g) geographic range area of host species, (d, h) number of sympatric mammal species overlapping with at least 20% of the target species' range area, and (e, i) mammalian order (nonsignificant terms retained in the best model shown in grey). For bacteria, PVR body mass was not included in the best-fit GAM.

| Pathogen genome and host range

We observed different distributions of pathogen genome GC content and genome size depending on whether a pathogen was a specialist or a generalist (Figure S5).

| Genome composition

Viruses with RNA genomes had a greater PHB than DNA viruses (median: 0.238 vs. 0). Subsetting further, −ve-sense single-stranded RNA viruses (Baltimore group V) had the greatest PHB (Figure S6). DNA viruses typically have much larger genomes than RNA viruses. We therefore fitted a linear model for mean PHB using both DNA/RNA genome type and genome size, with an interaction term. Having an RNA genome and a larger genome were both significantly associated with greater mean PHB in this linear model (t = 6.11 and t = 4.58, respectively, p < .001 for both).

| Pathogen factors affecting host range of bacteria

We looked at the effect of bacterial lifestyle factors on the proportion of specialist and generalist pathogens (Figure S7). Motile bacteria were more likely to infect multiple hosts.
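Two of the analyses just described can be sketched in a few lines of R; the data frames bact and vir and their columns are hypothetical stand-ins for the compiled trait tables.

# Sketch: lifestyle factor vs. host range (Chi-squared test on proportions)
chisq.test(table(bact$motile, bact$generalist))

# Sketch: linear model for mean PHB with a genome-type x genome-size interaction
fit <- lm(phb_mean ~ rna_genome * log10(genome_length), data = vir)
summary(fit)   # t statistics for genome type, size, and their interaction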
FIGURE 5 The partial effects of predictors in the best-fit GAM for predicting whether a bacterium (top row) or a virus (bottom row) is zoonotic. Each plot shows the relative effect of the variable in the best-fit GAM accounting for the effects of other variables (see Table 3 for numerical values). Shaded circles represent partial residuals and shaded areas represent 95% confidence intervals around the mean partial effect. Bacteria: (a) median phylogenetic breadth of the bacterial pathogen; (b) PubMed records for the bacterial pathogen; and (c) significant categorical predictors (facultatively anaerobic and microaerophilic are mutually exclusive as they come from the "oxygen" lifestyle variable; Gram stain and motility were included in the best-fit model but were not significant). Viruses: (d) mean phylogenetic breadth of the viral pathogen; (e) SRA records for the viral pathogen; and (f) significant categorical predictors (+ssRNA and −ssRNA are mutually exclusive as they come from the "genome type" variable). Predictors differ between the best-fit GAMs because the model term for, e.g., phylogenetic breadth could be chosen from the mean, median, or maximum PHB for a pathogen.

| Spore formation

Only a small number of bacterial pathogens were spore-forming (n = 91), and they did not have a significantly different proportion of generalists compared to non-spore-forming bacteria.

| Predicting zoonotic potential from pathogen traits

We fitted GAMs to predict whether or not a pathogen was zoonotic using pathogen traits and inspected the partial effects for each predictor in the best-fit model (Figure 5). Best-fit GAMs could explain ~30% of total deviance (Table 3). We found that research effort and host range (excluding human hosts) were the two strongest predictors of zoonotic potential, together accounting for >70% of relative deviance. For bacteria, being facultatively anaerobic or microaerophilic was significantly associated with zoonotic potential (Table 3c; Figure 5c); for viruses, those with an RNA genome had greater zoonotic potential (Table 3d; Figure 5f). Vector-borne pathogens had greater zoonotic potential for both bacteria and viruses.

| Pathogen sharing between hosts decreases with phylogenetic distance

The proportion of total pathogens shared between host orders decreased with phylogenetic distance (Figure 6a). Comparing vertebrate host orders specifically to Homo sapiens showed that the closer an order was to humans, the greater the fraction of pathogens that were shared, for both bacteria and viruses, with an approximately sigmoidal relationship (Figure 6b). The decrease in the fraction of shared pathogens was steeper for viruses than bacteria.

FIGURE 6 The fraction of shared pathogens between hosts decreases with interhost phylogenetic distance. For the definition of the fraction of shared pathogens between two taxa, see Section 2.8.5. (a) All pairwise comparisons between different host orders with at least 10 host-pathogen associations in the database; the blue line shows a smoothed average fit, produced with "loess". (b) Fraction of shared pathogens between these host orders and Homo sapiens (as a fraction of total unique pathogens infecting at least one species in each order), showing data for bacteria (black) and viruses (red) together with a sigmoidal fit (thick line) for each pathogen type. The size of points indicates the number of unique host-pathogen associations for that order.

| DISCUSSION

In this work, we have compiled the largest human-curated database of bacterial and viral pathogens of vertebrates across 90 host orders. To date, this represents the most detailed and taxonomically diverse characterization of pathogen host range.
Using this database, we were able to conduct a detailed quantitative analysis of the overall distribution of host range (host plasticity) across two major pathogen classes (together, bacteria and viruses comprise the majority of infectious diseases). We also examined the proportion of pathogens shared between host orders. We found that pathogen sharing was strongly correlated with the phylogenetic relatedness of vertebrate hosts. This finding corroborates and generalises the observation by Olival et al. (2017) for viral pathogens of mammalian hosts, as well as other studies using smaller taxon-specific data sets (Cooper et al., 2012; Davies & Pedersen, 2008; Streicker et al., 2010). This suggests that phylogeny is a useful general predictor for determining the "spillover risk" (i.e., the risk of cross-species pathogen transmission) of different pathogens into novel host species for both bacteria and viruses. Given the difficulty of predicting cross-species spillovers (Parrish et al., 2008) and the restriction of most previous work to viruses, this finding is an important step in our understanding of the broad factors underlying and limiting pathogen host ranges. The underlying mechanisms by which phylogeny affects spillover risk still need to be more closely examined. Pathogens are likely to be adapted to particular host physiologies (e.g., host cell receptors and binding sites), which are expected to be more similar between genetically closer host species. One mechanism by which a pathogen may be able to establish a broader host range is by exploiting more evolutionarily conserved domains of immune responses, rather than immune pathways with high host species specificity. Such an association has been shown among viruses for which the cell receptor is known (Woolhouse, 2002). Interestingly, we found that the decrease in the fraction of shared pathogens with increasing phylogenetic distance was steeper for viruses than bacteria, which suggests that bacterial pathogens, on average, have higher host plasticity than viruses (i.e., a greater ability to infect a more taxonomically diverse host range). Future studies could examine whether host cell receptors for bacterial pathogens are more phylogenetically conserved compared with host cell receptors for viral pathogens. When examining the overall distribution of host ranges, we found a substantial fraction of both bacterial and viral pathogens that have broad host ranges, encompassing more than one vertebrate host order. The evolutionary selection of pathogens that have broad host ranges has been a key hypothesis underpinning the emergence of new zoonotic diseases (Cleaveland et al., 2001; Woolhouse & Gowtage-Sequeria, 2005), and mean PHB has previously been shown to be the strongest predictor of the zoonotic potential of viral pathogens (Olival et al., 2017). High pathogen host plasticity has also been found to be associated with both an increased likelihood of secondary human-to-human transmissibility and broader geographic spread (Kreuder Johnson et al., 2015), both of which are traits linked to higher pandemic potential. Given these observations, it may be useful to more closely monitor those pathogens with the highest mean PHBs that have not yet been identified as zoonoses. Several traits were found to be significantly associated with bacterial and viral host ranges. For viruses, an RNA genome and a larger genome size were independently associated with a broader host range.
This is in line with RNA viruses appearing particularly prone to infecting new hosts and causing emerging diseases, something which has been attributed to their high mutation rate (Holmes, 2010). The positive association between viral genome size and host range might arise because pathogens specialising on a narrower range of hosts require a smaller number of genes to fulfil their replication cycle. For bacteria, motile and aerobic pathogens had a wider host range, with the largest number of hosts found for facultative anaerobes, perhaps suggesting a greater ability to survive both inside and outside hosts. Conversely, we did not find a strong association between genome size and host range in bacteria; in fact, specialists had slightly larger genomes on average compared to generalists. Since genome reduction through loss of genes is a well-recognised signature of higher virulence in bacteria (Weinert & Welch, 2017), this suggests that pathogenicity may be largely uncorrelated with host range in bacteria. It would be interesting to further explore these relationships for obligate and facultative pathogens in the future. We found a surprising lack of association between the expected "intimacy" of host-pathogen relationships (as judged by pathogen lifestyle factors) and host range. We identified more single-host bacteria than viruses, which was the opposite of what we predicted going into this study. One possibility is that bacterial pathogens may be more dependent on the host microbiome, i.e., their ability to infect other host species may be more contingent on the existing microbial community, compared to viruses. However, we recognize that literature bias could contribute to this conclusion, particularly for RNA viruses, which are more difficult to identify and diagnose than other infective agents. We also found that intracellular and extracellular bacteria had roughly the same number of hosts, despite our expectation that intracellular bacteria would have a narrower host range due to their higher expected intimacy with their host. However, it should be noted that information about cellular proliferation was only available for 18% (307 of 1,685) of all bacteria in the database, and this is a trait which can be difficult to unambiguously characterize (Silva, 2012). Previous studies of viral pathogens have shown that those that are vector-borne tend to have greater host ranges, whether measured by higher host plasticity (Kreuder Johnson et al., 2015) or higher mean PHB (Olival et al., 2017). We replicate this observation for both viruses and bacteria, suggesting a strong and consistent effect of being vector-borne for a pathogen. We also found that greater host range was associated with greater zoonotic potential for viral and bacterial pathogens, complementing previous work restricted to viruses (Olival et al., 2017). We controlled for research effort (number of publications, or number of SRA records) and found that it was a strong predictor of both greater pathogen richness within a host species and the zoonotic potential of a pathogen. However, disentangling these factors is difficult. There could be increased research effort to study known zoonoses in order to identify them in animals and establish possible "reservoirs", giving a biased picture. However, this could also partly be a consequence of the global distribution of humans and their propensity to transmit pathogens to both wild and domestic species.
Although we attempted to control for research effort in our statistical analyses that did not depend on a binary "specialist" versus "generalist" distinction, the limitation of reflecting the current state of knowledge still applies to any literature-based review and cannot be avoided. We did not investigate in detail how geographical and ecological overlap between host species affects pathogen sharing. We found that greater sympatry with other mammal species (defined as ≥20% overlap with the target species' range area) was a positive predictor of viral sharing but was negatively correlated with bacterial sharing (Figure 4). Olival et al. (2017) used ≥20% as the minimum threshold for viral sharing, but it may not be appropriate for bacteria. Future work using generalized additive mixed models (GAMMs) will be necessary to properly control for autocorrelation in pathogen sharing networks (Albery et al., 2019). However, we provide some preliminary thoughts here. Geographical overlap provides the necessary contact for host switching to occur (Davies & Pedersen, 2008), and some authors have claimed that the rate and intensity of contact may be "even more critical" than host relatedness in determining switching (Parrish et al., 2008). In support of this, "spillovers" over greater phylogenetic distances are more common where vertebrates are kept in close proximity in zoos or wildlife sanctuaries (Kreuder Johnson et al., 2015). Similarly, although multihost parasites generally infect hosts that are closely related rather than hosts with similar habitat niches (Clark & Clegg, 2017), ecology and geography have been found to be key factors influencing patterns of parasite sharing in primates (Cooper et al., 2012). While contact between two host species clearly provides a necessary but not sufficient condition for direct host switching, phylogenetic relatedness dictates the likely success of such a switch. Therefore, although the relative importance of phylogeny and geography may depend on the specific context, our observation of the strong dependence of pathogen sharing on phylogenetic distance across all vertebrates emphasises that host phylogenetic relatedness is the primary underlying biological constraint. We have substantially improved on previous efforts to assess pathogen host range across the host phylogeny (Graham, Storch, & Machac, 2018). Developing a parallel phylogenetic framework for pathogens to complement our host phylogenetic framework may be desirable, but challenging. An alignment of marker genes is tractable for bacteria (e.g., by using ribosomal proteins: Hug et al., 2016), but more problematic for viruses, which have probably evolved on multiple independent occasions (Krupovic & Koonin, 2017). Tracing the ancestors of viruses among modern cellular organisms could represent another route to see if their host distribution reflects their evolutionary past. Potentially, an alignment-free genetic distance method could be used instead; as thousands more genomes become available for both pathogens and their hosts, such a method may be the optimal way to incorporate all known genomic information at a broad scale. In conclusion, we have compiled the largest data set of bacterial and viral pathogens of vertebrate host species to date. This is an important resource that has allowed us to explore different factors affecting the distribution of host range of vertebrate pathogens.
While we are still some way off having a clear overall understanding of the factors affecting pathogen-host interactions, our results represent a substantial step in that direction. The list of known pathogens is of course only the tip of the iceberg: this work was completed before the emergence of the novel coronavirus SARS-CoV-2 and the subsequent COVID-19 global pandemic. However, it is notable that the emergence of SARS-CoV-2 appears to have started with an otherwise unremarkable host jump from bats to humans, possibly via an as-yet-unidentified intermediate host (Andersen, Rambaut, Lipkin, Holmes, & Garry, 2020); although the specific pathogen was new, the pattern was not. Maintaining comprehensive data sets into the future is challenging but important, in order to ensure that all available knowledge is synthesized, rather than drawing conclusions only from well-studied pathogens, which probably represent the exceptions and not the norm.

ACKNOWLEDGEMENTS

We acknowledge financial support from the European Research Council (ERC) (grant ERC260801-BIG_IDEA to FB). DD and CD acknowledge the support of the Swiss National Science Foundation (grant 150654). We would also like to acknowledge the many public databases used in the construction of our own database and thank all their creators. We thank Olival and colleagues for releasing comprehensive code for their reproducible analyses, allowing us to adapt it for our own purposes and easily compare results.

AUTHOR CONTRIBUTIONS
2019-07-16T23:01:19.605Z
2019-06-13T00:00:00.000
{ "year": 2020, "sha1": "5a74bb86633d78761c4f98b8c08063bc2b1d381e", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mec.15463", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "bae95404c5ac8ac03b8e6caa6f8951583051051e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
9096469
pes2o/s2orc
v3-fos-license
Identification of Outliers in Oxazolines and Oxazoles High Dimension Molecular Descriptor Dataset Using a Principal Component Outlier Detection Algorithm and a Comparative Numerical Study of Other Robust Estimators

Outlier detection has been in use for over a decade. It is an emerging topic with important applications in the medical and pharmaceutical sciences, and is used to detect anomalous behaviour in data. Typical problems in bioinformatics can be addressed by outlier detection. A computationally fast method for detecting outliers is presented that is particularly effective in high dimensions. The PrCmpOut algorithm makes use of simple properties of principal components to detect outliers in the transformed space, leading to significant computational advantages for high dimensional data. This procedure requires considerably less computational time than existing methods for outlier detection. The properties of this estimator (outlier error rate (FN), non-outlier error rate (FP) and computational cost) are analyzed and compared with those of other robust estimators described in the literature through simulation studies. Numerical evidence based on the Oxazolines and Oxazoles molecular descriptor dataset shows that the proposed method performs well in a variety of situations of practical interest. It is thus a valuable companion to existing outlier detection methods.

INTRODUCTION

Accurate detection of outliers plays an important role in statistical analysis. If classical statistical models are blindly applied to data containing outliers, the results can be misleading at best. An outlier is an observation that lies an abnormal distance from the other values in a dataset, and the identification of such observations is often the main purpose of an investigation. Classical methods based on the mean and covariance matrix are often unable to detect all the multivariate outliers in a descriptor dataset due to the masking effect [1], with the result that methods based on classical measures are inappropriate for general use unless it is certain that outliers are not present. Erroneous data are encountered in many situations, and so robust methods that detect or downweight outliers are important tools for statisticians. The objective of this examination is to provide detection of outliers prior to whatever modelling process is envisaged. Sometimes detection of outliers is the primary purpose of the analysis; at other times the outliers need to be removed or downweighted prior to fitting non-robust models. We do not distinguish between the various reasons for outlier detection; we simply aim to inform the analyst of observations that are considerably different from the majority. Our procedures are therefore exploratory, and applicable to a wide variety of settings. Most methods with a high resistance to outliers are computationally intensive; not coincidentally, the availability of cheap computing resources has enabled this field to develop significantly in recent years. Among other proposals, there currently exists a wide variety of statistical models ranging from regression to principal components [2] that can accommodate outliers without being excessively influenced, as well as several algorithms that explicitly focus on outlier detection. There are several applications where multidimensional outlier identification and/or robust estimation are important.
The field of computational drug discovery, for instance, has recently received a lot of attention from statisticians (e.g., the Bioconductor project, http://www.bioconductor.org). Improvements in computing power have allowed pharmaceutical scientists to record and store extremely large databases of information. Such information is likely to contain a fair number of large errors, however, so robust methods are needed to prevent these errors from influencing the statistical model. Undoubtedly, algorithms that take a long time to compute are not ideal or even practical for such large data sets. There is a further difficulty encountered in molecular descriptor data: the number of dimensions is typically several orders of magnitude larger than the number of observations, leading to a singular covariance matrix, so the majority of statistical methods cannot be applied in the usual way. As will be discussed later, this situation can be handled through singular value decomposition, but it does require special attention. It can thus be seen that there are a number of important applications for which current robust statistical models are impractical.

The Data Set

The molecular descriptors of 100 Oxazolines and Oxazoles derivatives [30][31], H37Rv inhibitors, were analyzed. These molecular descriptors were generated using the PaDEL-Descriptor tool [32]. The dataset covers a diverse set of molecular descriptors with a wide range of inhibitory activities against H37Rv.

A Brief Overview of Outlier Detection

There are two basic approaches to outlier detection: distance-based methods and projection pursuit. Distance-based methods aim to detect outliers by computing a measure of how far a particular point is from the centre of the data. The common measure of "outlyingness" for a data point $x_i \in \mathbb{R}^p$, $i = 1, \dots, n$, is a robust version of the Mahalanobis distance,

$$RD_i = \sqrt{(x_i - T)^{\top} C^{-1} (x_i - T)}, \qquad (1)$$

where T is a robust measure of location of the descriptor data set X and C is a robust estimate of the covariance matrix. Problems faced by distance-based methods include (i) obtaining reliable estimates of T and C, and (ii) deciding how large $RD_i$ should be before a point is classified as outlying. This highlights the close connection between outlier detection and robust estimation: the latter is required as part of the former. Obtaining good robust estimators of T and C is essential for distance-based outlier detection methods. It is then crucial to find a metric (based on T and C) that separates outliers from regular points. The final separation boundary commonly depends on user-specified penalties for misclassification of outliers as well as regular points (inliers).

Robust Estimation as Main Goal

A simple robust estimate of location is the coordinatewise median. This estimator is not orthogonally equivariant (it does not transform correctly under orthogonal transformations), but if this property is important, the L1 median should be used instead, defined as

$$T = \arg\min_{\mu} \sum_{i=1}^{n} \| x_i - \mu \|,$$

where ‖·‖ denotes the Euclidean norm. The L1 median has maximal breakdown point, and a fast algorithm for its computation is given in [3]. S-estimators of location and scatter are defined as the pair (T, C) minimizing the determinant of C subject to

$$\frac{1}{n} \sum_{i=1}^{n} \rho\!\left( \frac{d_i}{c} \right) = b, \qquad (4)$$

with $d_i$ as in equation (1), where ρ(·) is a non-decreasing function on [0, ∞), and c and b are tuning constants that can be chosen to achieve particular breakdown properties. It is commonly easier to work with ψ = ∂ρ/∂d, since ψ has a root where ρ has a minimum.
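As a minimal sketch of the distance-based recipe in equation (1), the following R code uses the MCD estimator (discussed below) from the robustbase package to supply robust T and C, with a conventional chi-squared boundary; the 0.975 quantile is an illustrative choice.

# Sketch: robust distances and a chi-squared outlier boundary; X is an
# n x p numeric matrix with n > p.
library(robustbase)

mcd <- covMcd(X)                                   # robust location T and scatter C
rd  <- sqrt(mahalanobis(X, mcd$center, mcd$cov))   # robust distances RD_i
cutoff <- sqrt(qchisq(0.975, df = ncol(X)))        # boundary for outlyingness
which(rd > cutoff)                                 # candidate outliers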
Distance-based algorithms that pursue robust estimation as a main goal, without explicit outlier detection, include the OGK estimate [5], the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) [6][7]. MCD tries to find the covariance matrix of minimum determinant containing at least h data points, where h determines the robustness of the estimator; it should be at least (n + p − 1)/2. The MCD and MVE are instances of S-estimators with non-differentiable ρ(d_i), because ρ(d_i) is either 0 or 1. MCD shows good performance on data sets of low dimension, but on larger data sets the computational burden can be prohibitive: the exact solution requires a combinatorial search. In the latter case an approximate method must be accepted, and good starting points need to be obtained. Equivariant procedures for obtaining these starting points, however, are based on subsampling, and the number of subsamples needed to achieve a tolerable level of accuracy increases rapidly with dimension. Rousseeuw and van Driessen (1999) developed a faster version of MCD, which was a considerable advance, but it is still quite computationally intensive. The OGK estimator [5] is based on pairwise robust estimates of covariance. Gnanadesikan and Kettenring (1972) computed a robust covariance estimate for two variables X and Y based on the identity

$$\mathrm{cov}(X, Y) = \frac{1}{4} \left[ \sigma(X + Y)^2 - \sigma(X - Y)^2 \right], \qquad (5)$$

where σ is a robust estimate of scale. The matrix built from these pairwise estimates will not necessarily be positive semidefinite, so Maronna and Zamar (2002) proceeded by performing an eigendecomposition of this matrix. Since the variables are orthogonal in eigenvector space, the covariances there are zero and it suffices to obtain robust variance estimates of the data projected onto each eigenvector direction. The eigenvalues are then replaced with these robust variances, and the eigenvector transformation is applied in reverse to yield a positive semidefinite robust covariance matrix. If the original data matrix is robustly scaled (every component divided by its robust scale), the OGK estimate will be scale invariant. This method can be iterated, although Maronna and Zamar (2002) found this is not consistently better. Maronna and Zamar (2002) also found that using reweighted estimates is somewhat better, in which case the observations are weighted according to their robust distances $d_i$ as scaled by the robust covariance matrix. They apply a weighting function of the form $I(d_i < d_0)$, where I(·) is the indicator function and $d_0$ is taken to be

$$d_0 = \frac{\chi_p^2(\beta) \, \mathrm{med}(d_1, \dots, d_n)}{\chi_p^2(0.5)}, \qquad (6)$$

where $\chi_p^2(\beta)$ is the β-quantile of the $\chi_p^2$ distribution. Observations thus receive full weight unless their robust distance exceeds $d_0$, in which case they receive zero weight. Maronna and Zamar (2002) note that the robust distances can be computed quickly in the eigenvector space without the need for matrix inversion, because the p components are orthogonal in this space. That is,

$$d_i^2 = \sum_{j=1}^{p} \left( \frac{z_{ij} - \hat{\mu}(Z_j)}{\hat{\sigma}(Z_j)} \right)^2, \qquad (7)$$

where $z_{ij}$ are the data in the space of eigenvectors, $Z_j$ are the components in this space, $\hat{\mu}$ is a robust location estimate and $\hat{\sigma}$ is a robust scale estimate.

Explicit Outlier Detection

Extending robust estimation to outlier detection requires some knowledge of the distribution of the robust distances. If X follows a multivariate normal distribution, the squared classical Mahalanobis distance (based upon the sample mean and covariance matrix) follows a $\chi_p^2$ distribution [8].
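The Gnanadesikan-Kettenring identity (5) and the OGK orthogonalization step described above can be sketched in base R as follows, using the MAD as the robust scale; this is a simplified single-pass version without the initial column scaling or the reweighting step.

# Sketch: pairwise robust covariances via equation (5), then eigendecomposition
# to restore positive semidefiniteness; X is an n x p numeric matrix.
gk_cov <- function(x, y) 0.25 * (mad(x + y)^2 - mad(x - y)^2)

p <- ncol(X)
S <- diag(apply(X, 2, mad)^2)                 # robust variances on the diagonal
for (j in 1:(p - 1)) for (k in (j + 1):p)
  S[j, k] <- S[k, j] <- gk_cov(X[, j], X[, k])

e <- eigen(S, symmetric = TRUE)               # S may not be positive semidefinite
Z <- X %*% e$vectors                          # project onto eigenvector directions
lambda <- apply(Z, 2, mad)^2                  # robust variances in that space
C_ogk <- e$vectors %*% diag(lambda) %*% t(e$vectors)   # transform back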
Similarly, if robust estimators T and C are applied to a large data set in which the non-outliers are normally distributed, Hardin and Rocke (2005) found that the squared distances can be described by a scaled F-distribution. However, for non-normal data it is not clear how the outlier boundary should be drawn to give optimal classification rates. These considerations form the basis for the use of $d_0$ by Maronna and Zamar (2002). The transformation in equation (6) helps the distribution of the $d_i$'s resemble that of $\chi_p^2$ even for non-normal original data, leading to a better cutoff value than simply $\chi_p^2(\beta)$. Promising algorithms that focus on the detection of outliers include the Rocke estimator [25], the sfast estimator [26], the M estimator [27], the MVE estimator [28], the NNC estimator [29], BACON [9], PCDist [10][11], sign1 [12][13] and sign2 [14], as well as outlier identification using robust (Mahalanobis) distances based on a robust multivariate location and covariance matrix [15]. BACON and the robust multivariate location and covariance approach are distance-based and, appropriately, direct the larger part of their computational effort toward obtaining robust estimators T and C. BACON begins with a small subset of observations believed to be outlier-free, to which it iteratively adds points that have a small Mahalanobis distance based on the T and C of the current subset. One reason that makes MCD unreliable for high p is that its contamination bias grows very rapidly with p [16]. The robust multivariate location and covariance approach aims to reduce the computational burden by subdividing the data into cells and running MCD on each cell, i.e., reducing the number of observations that MCD operates on, with the same number of dimensions. It then combines the results from each cell to yield a starting point for an S-estimator [17], which solves a complex minimization problem to yield a robust estimate of the covariance matrix. S-estimators can occasionally converge to an inaccurate local solution, so a good starting point is needed. However, relying on MCD in the first stage prevents the robust multivariate location and covariance approach from handling large data sets, particularly those of high dimension. It would appear that methods based on combinatorial search, and alternatives thereof, have an inherent inability to handle large data sets.

Projection Pursuit

In contrast to distance-based procedures are projection pursuit methods [18], which can equally be applied to robust estimation as a main goal or directed toward explicit outlier detection. The fundamental purpose of projection pursuit methods is to find suitable projections of the data in which the outliers are readily visible and can thus be downweighted to produce a robust estimator, which in turn can be used to detect the outliers. Because they do not assume the data to come from a particular distribution but only search for useful projections, projection pursuit methods are not affected by non-normality and can be applied in diverse data situations. The penalty for such independence comes in the form of increased computational burden, because it is not clear which projections should be tested; an exact method would require that all possible directions be examined. The first equivariant robust estimator having a high breakdown point in arbitrary dimension was the Stahel-Donoho estimator [19][20]. A computational approximation based on directions from random subsamples was developed by Stahel (1981), but without doubt a large amount of time is necessary to obtain acceptable results.
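A minimal sketch of the projection pursuit idea, in the spirit of the Stahel-Donoho outlyingness but approximated with random directions rather than an exhaustive or subsample-based search:

# Sketch: outlyingness of each point as the worst robust standardized
# deviation over random unit projections; X is an n x p numeric matrix.
sd_outlyingness <- function(X, n_dir = 500) {
  out <- rep(0, nrow(X))
  for (k in 1:n_dir) {
    a <- rnorm(ncol(X))
    a <- a / sqrt(sum(a^2))               # random unit direction
    z <- as.numeric(X %*% a)              # projected data
    out <- pmax(out, abs(z - median(z)) / mad(z))
  }
  out
}
# Points with large outlyingness can be downweighted to obtain a robust
# estimator, or flagged directly as outliers.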
Even though projection pursuit algorithms have the advantage of being applicable in different data situations, their computational difficulties are formidable.

The High Dimensional Situation

High dimensional data present several problems to classical statistical analysis. As discussed earlier, computation time increases more rapidly with p than with n. For combinatorial and projection pursuit algorithms, this increase is severe enough to call into question the practicability of such methods for high dimensional molecular descriptor data. Among the fast distance-based methods, computation times of algorithms increase linearly with n and cubically with p. This indicates that for very high dimensional molecular descriptor data, the computational cost of inverting the scatter matrix is nontrivial. This is particularly evident in iterative methods which require many iterations to converge, since the covariance matrix is inverted on each iteration. Thus, while the Mahalanobis distance is a very useful metric for finding correlated multivariate outliers, it is expensive to calculate. Other methods of outlier detection fare even worse, usually giving up either computational time or detection accuracy. The MCD is a good example of this, in that the exact solution is very accurate but infeasible to compute for all but small molecular descriptor data sets, whereas a faster solution can be obtained if random subsampling is used to produce an approximate solution. It will be investigated in the results and discussion whether the subsampling version of MCD is competitive regarding both accuracy and computation time. Projection pursuit methods, including the Stahel-Donoho estimator, have computation times that increase very rapidly in higher dimensions, and are often at least an order of magnitude slower than distance-based methods, since their search for appropriate projections is an inherently time-consuming task. Thus even if the Mahalanobis distance may be computationally demanding due to the matrix inversion step, the robust version, RD as defined in equation (1), is an accurate metric for outlier detection and may well be more computationally reasonable than other approaches. This is relevant to several biological applications where the data commonly have orders of magnitude more dimensions than observations. This is also the typical situation in chemometrics, which led to the development of Partial Least Squares (PLS) [21] among other methods. Because the covariance matrix is singular, the robust Mahalanobis distance cannot be calculated. This is not as big a problem as it initially appears, however, because the data can be transformed via singular value decomposition to an equivalent space of dimension n − 1 [2] and the analysis carried out in the same way as for p < n. However, this situation needs special attention, and most outlier algorithms have to be modified to handle this type of high-dimensional molecular descriptor data. High dimensional molecular descriptor data have various interesting geometrical properties, discussed in [22]. One such property, particularly relevant to outlier detection, is that high dimensional data points lie near the surface of an expanding sphere.
For example, if ‖x‖ is the norm of a vector x = (x₁, …, x_p)ᵀ drawn from a normal distribution with zero mean and identity covariance matrix, then for large p we have

$$\frac{\|x\|^2}{p} \approx 1,$$

because the summation $\|x\|^2 = \sum_{j=1}^{p} x_j^2$ follows a $\chi_p^2$ distribution, whose mean is p. In this manner, if the outliers have even a slightly dissimilar covariance structure from the inliers (non-outliers), they will lie on a different sphere. This does not help low dimensional outlier detection, but if an algorithm is capable of processing high dimensional molecular descriptor data, it should not be too hard to discover the different spheres of the outliers and inliers. Principal components are a well known method of dimension reduction that also suggests a way of detecting high dimensional outliers. Recall that principal components are those directions that maximize the variance along each component, subject to orthogonality. Because outliers increase the variance along their respective directions, it seems intuitive that outliers will become more visible in principal component space than in the original data space; i.e., at least some of the directions of maximum variance are likely to be those that enable the outliers to stand out more. Searching for outliers in principal component space should at least, then, not be any worse than searching for them in the original data space. If the data originally exist in a high-dimensional space, many of these dimensions likely do not provide significant additional information and are irrelevant. Principal components thus pick out a handful of highly informative components (relative to the total number of components), thereby performing a high degree of dimension reduction and making the data set much more computationally manageable without losing a lot of information. For high dimension molecular descriptor data, an ample portion of the smaller principal components are actually noise [23]. Particularly if p ≫ n, the larger part of principal components will actually be noise and will not add to the total variance. By considering only those principal components that constitute some agreed level of the total variance, the number of components can be substantially reduced, so that only those components that are truly meaningful are kept. In practice we found good results using a level of 99%. It can be argued this yields similar results to transforming the data via SVD to a dimension less than the minimum of n and p. Thus, in place of imposing a level of explained variance such as 99%, it would also be possible to choose the n − 1 (or fewer) components with the largest variance. As noted in equation (7) for the OGK method [5], after dividing by the MAD, the Euclidean distance in principal components space is essentially a robust Mahalanobis distance, because the off-diagonal elements of the scatter matrix are zero. Hence, it is not necessary to invert a p × p matrix when computing a measure of outlyingness for every point (i.e., the robust Mahalanobis distance); instead one simply divides (or "standardizes") each principal component by its respective scale. Because eigenvector decomposition has computational complexity O(p³), comparable to matrix inversion, doing the robust distance computations in principal component space is no more time-consuming than in the original data space. If this transformation helps the outliers become more visible and reduces the number of iterations required to detect them, the result will be a net savings in computational time. It can be seen that the above approaches are based on simple, basic properties of principal components; this is additional proof of how principal components continue to offer appealing properties to both theoretical and applied statisticians.
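A small simulation illustrating the "different spheres" property discussed above: with p in the hundreds or thousands, the norms of inliers and of outliers with a mildly inflated covariance concentrate on clearly separated shells.

# Sketch: norm concentration in high dimension (base R).
set.seed(1)
p <- 1000
inliers  <- matrix(rnorm(90 * p), ncol = p)             # identity covariance
outliers <- matrix(rnorm(10 * p, sd = 1.5), ncol = p)   # slightly inflated scale
r <- sqrt(rowSums(rbind(inliers, outliers)^2) / p)      # scaled norms
round(r, 2)   # first 90 values cluster near 1, last 10 near 1.5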
Details of the Proposed PrCmpOut Method

The method we propose consists of two basic parts: the first aims to detect location outliers, and the second aims to detect scatter outliers. Scatter outliers have a different scatter matrix than the rest of the data, while location outliers are described by a different location parameter. To begin, it is useful to robustly rescale or sphere each component using the coordinatewise median and the MAD, according to

$$x_{ij}^{*} = \frac{x_{ij} - \mathrm{med}(x_{1j}, \dots, x_{nj})}{\mathrm{MAD}(x_{1j}, \dots, x_{nj})}, \quad i = 1, \dots, n; \; j = 1, \dots, p. \qquad (8)$$

Dimensions with a MAD of zero must either be excluded, or another scale measure has to be used. Beginning with the rescaled data $x_{ij}^{*}$, we calculate a weighted covariance matrix, from which we compute the eigenvalues and eigenvectors and hence a semirobust principal components decomposition. We retain only those eigenvectors whose eigenvalues contribute at least 99% of the total variance; call this new dimension p*. The remaining components are generally useless noise and only serve to obscure any fundamental structure. For the case p ≫ n, this also resolves the singularity problem, because p* < n. For the p × p* matrix V of eigenvectors, we thus get the matrix of principal components as

$$Z = X^{*} V, \qquad (9)$$

where X* is the matrix with the elements $x_{ij}^{*}$. We rescale these principal components by the median and the MAD, very much as in equation (8):

$$z_{ij}^{*} = \frac{z_{ij} - \mathrm{med}(z_{1j}, \dots, z_{nj})}{\mathrm{MAD}(z_{1j}, \dots, z_{nj})}, \quad j = 1, \dots, p^{*}. \qquad (10)$$

Retain Z* for the second stage of the algorithm. After the above pre-processing steps, the location outlier stage is initiated by computing the absolute value of a robust kurtosis measure for each component:

$$w_j = \left| \frac{1}{n} \sum_{i=1}^{n} \left( z_{ij}^{*} \right)^{4} - 3 \right|, \quad j = 1, \dots, p^{*}. \qquad (11)$$

We make use of the absolute value because, much as in multivariate outlier detection and robust covariance matrix estimation [24], both small and large values of the kurtosis coefficient can be characteristic of outliers. This allows us to assign weights to each component according to how likely we think it is to disclose the outliers. We use relative weights $w_j / \sum_i w_i$ to obtain the usual scale $0 \le w_j \le 1$. If no outliers are present in a given component, we expect the principal components to be nearly normally distributed, like the original data, producing a kurtosis close to zero. Because the presence of outliers is likely to cause the kurtosis to differ from zero, we weight each of the p* dimensions in proportion to the absolute value of its kurtosis coefficient. Assigning equal weights to all components (during the computation of robust Mahalanobis distances) reduces the discriminatory power, because if outliers clearly stick out in one component, the information in this component will be diluted unless it is given higher weight. Specifically, in principal component space outliers are more likely to be clearly visible in one specific component than slightly visible in several components, so it is important to assign this component higher weight.
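The preprocessing and kurtosis-weighting steps of this first stage can be sketched in base R as follows; this is a simplified rendering of equations (8)-(11), using the ordinary covariance of the sphered data in place of the weighted covariance matrix.

# Sketch: robust sphering, semirobust PCA at 99% variance, kurtosis weights.
med <- apply(X, 2, median)
sc  <- apply(X, 2, mad)                      # assumes no zero MADs
Xs  <- scale(X, center = med, scale = sc)    # equation (8)

e     <- eigen(cov(Xs), symmetric = TRUE)
pstar <- which(cumsum(e$values) / sum(e$values) >= 0.99)[1]
Z     <- Xs %*% e$vectors[, 1:pstar, drop = FALSE]         # equation (9)
Zs    <- scale(Z, center = apply(Z, 2, median),
                  scale  = apply(Z, 2, mad))               # equation (10)

w  <- abs(colMeans(Zs^4) - 3)                # robust kurtosis, equation (11)
w  <- w / sum(w)                             # relative weights
RD <- sqrt(as.numeric(Zs^2 %*% w))           # kurtosis-weighted robust distances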
Because the components are uncorrelated, we compute a robust Mahalanobis distance employing the distance from the median (as scaled by the MAD), weighting each component according to the relative weights $w_j / \sum_i w_i$, with the kurtosis measure $w_j$ given in equation (11). The kurtosis measure of equation (11) helps to guarantee that important information contained in a particular component is not diluted by components which do not separate the outliers. To complete the first stage of the algorithm, we must determine how large the robust Mahalanobis distance should be to get an accurate classification between outliers and non-outliers. The kurtosis weighting strategy destroys any resemblance to a $\chi_{p^*}^2$ distribution that might have been present, so it is not possible to use a $\chi_{p^*}^2$ quantile as a separation boundary. However, similar to Maronna and Zamar (2002) and to equation (6), we found that transforming the robust distances $\{RD_i\}$ as

$$d_i = RD_i \, \frac{\sqrt{\chi_{p^*, 0.5}^2}}{\mathrm{med}(RD_1, \dots, RD_n)}, \qquad (12)$$

where $\chi_{p^*, 0.5}^2$ is the 50th quantile of the $\chi_{p^*}^2$ distribution, helped the empirical distances $\{d_i\}$ to have the same median as the reference distribution, and thus brought the former reasonably close to $\chi_{p^*}^2$. We make use of the translated biweight function [25] to assign weights to every observation and use these weights as a measure of outlyingness. The translated biweight fits into the general scheme of S-estimators defined by equation (4) and is similar to Tukey's biweight function, except that ψ departs from 0 at some point M away from the origin; that is, observations closer than the scaled distance M to the location estimate receive full weight of 1. The ψ function of the translated biweight is thus given by

$$\psi(d) = \begin{cases} d, & 0 \le d \le M, \\ d \left( 1 - \left( \frac{d - M}{c - M} \right)^{2} \right)^{2}, & M < d < c, \\ 0, & d \ge c, \end{cases} \qquad (13)$$

which corresponds to the weighting function

$$w(d) = \begin{cases} 1, & 0 \le d \le M, \\ \left( 1 - \left( \frac{d - M}{c - M} \right)^{2} \right)^{2}, & M < d < c, \\ 0, & d \ge c. \end{cases} \qquad (14)$$

Precisely assigning known non-outliers full weights of one while assigning known outliers weights of zero increases the capabilities of the estimators (provided the classifications are correct), and is also computationally faster. Between these extremes is a subset of points that receives weights similar to the usual biweight function. To retain a high level of robustness, it is best to be conservative in assigning weights of one, since if any outliers enter the procedure with a weight of one (or close to it), the other outliers become harder to detect due to the masking effect. Recall that since the principal components have been scaled by the median and MAD, these robust distances measure a weighted distance from the median (in transformed units of the MAD). We found good empirical results assigning a weight of one to the 1/3 of points having the smallest robust distances. At the other end of the weighting scheme we assign zero weight to points with $d_i > c_1$, where

$$c_1 = \mathrm{med}(d_1, \dots, d_n) + 2.5 \cdot \mathrm{MAD}(d_1, \dots, d_n), \qquad (15)$$

corresponding roughly to classical outlier boundaries. Following equation (14), the weights for each observation are calculated by the translated biweight function as

$$w_{1i} = w(d_i; M_1, c_1), \quad i = 1, \dots, n, \qquad (16)$$

where $M_1$ is the 33⅓% quantile of the distances $\{d_1, \dots, d_n\}$. Alternative weighting strategies were tested; the benefit of the translated biweight is that it allows a subset of points (that we are quite confident are non-outliers) to be given full weight, while another subset of points that is likely to contain outliers can be given weights of zero, thereby excluding undue influence by potential outliers, with a smooth weighting curve for the points in between.
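The translated biweight weighting of equations (14)-(16) can be written as a short R function; m and cc play the roles of M₁ and c₁.

# Sketch: translated biweight weights -- full weight up to m, zero beyond cc.
tbiweight_w <- function(d, m, cc) {
  w <- (1 - ((d - m) / (cc - m))^2)^2
  w[d <= m] <- 1
  w[d >= cc] <- 0
  w
}

# Stage-1 choices described above, with d the transformed distances from (12):
# m1 <- quantile(d, 1/3)                    # full weight for smallest third
# c1 <- median(d) + 2.5 * mad(d)            # equation (15)
# w1 <- tbiweight_w(d, m1, c1)              # equation (16)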
The weights $\{w_{1i}\}$ from equation (16) are saved; we will use them again at the end of the algorithm. The second stage of our algorithm is similar to the first, except that we do not use the kurtosis weighting scheme. Principal components concentrate on those directions that have large variance, so it is perhaps not surprising that we find good results searching for scatter outliers in the semirobust principal component space described at the beginning of this section. That is, we search for outliers in the space determined by Z* from equation (10). As noted earlier, computing the Euclidean norm for data in principal component space is equivalent to the Mahalanobis distance in the original data space, except that it is faster to compute. Because the distribution of these distances has not been modified as it was by the kurtosis weighting scheme, and assuming we start with normally distributed non-outliers, transforming the robust distances as before via equation (12) results in a distribution that is fairly close to $\chi_{p^*}^2$. In applying the translated biweight as in equation (16), acceptable results can then be obtained by setting $M_2$ equal to the $\chi_{p^*}^2$ 25th quantile and $c_2$ equal to the $\chi_{p^*}^2$ 99th quantile. This distribution is certainly not exactly equal to $\chi_{p^*}^2$, so there are occasions when visual examination of these distances could lead to a better boundary than this automated rule. Denote the weights calculated in this way $w_{2i}$, $i = 1, \dots, n$. Lastly, we combine the weights from these two steps to compute final weights $w_i$, $i = 1, \dots, n$, as

$$w_i = \frac{(w_{1i} + s)(w_{2i} + s)}{(1 + s)^{2}}, \qquad (17)$$

where usually the scaling constant s = 0.25. The justification for introducing s is that frequently too many non-outliers receive a weight of 0 in only one of the two steps; setting s ≠ 0 helps to ensure that the final weight $w_i$ is small only if both steps assign a low weight. Outliers are then classified as points that have weight $w_i < 0.25$. These values mean that if one of the two weights $w_{1i}$ or $w_{2i}$ equals one, the other must be less than 0.0625 for the point $x_i$ to be classified as an outlier; or, if $w_{1i} = w_{2i}$, then this common value must be less than 0.375 for $x_i$ to be classified as outlying. We will hereafter call this algorithm PrCmpOut. It is helpful to recap the algorithm briefly.

Stage 1: Detection of location outliers.
a) Robustly sphere the data as stated in equation (8), and compute the sample covariance matrix of the transformed data X*.
b) Compute a principal component decomposition of the semirobust covariance matrix from Step a and retain only those p* eigenvectors whose eigenvalues contribute at least 99% of the total variance. Robustly sphere the transformed data as in equation (10).
c) Compute the robust kurtosis weights for each component as in equation (11), and thus weighted norms for the sphered data from Step b. Because the data have been scaled by the MAD, these Euclidean norms in principal component space are equivalent to robust Mahalanobis distances. Transform these distances as stated in equation (12).
d) Assign weights $w_{1i}$ to each observation via the translated biweight in equation (16), with $c_1$ from equation (15) and $M_1$ equal to the 33⅓% quantile of the distances.

Stage 2: Detection of scatter outliers.
e) Compute Euclidean norms (robust distances) for the sphered principal components Z* from Step b, without kurtosis weighting, and transform them as in equation (12) to produce a set of distances for use in Step f.
f) Select weights $w_{2i}$ for every robust distance via the translated biweight in equation (16), with $c_2$ equal to the $\chi_{p^*}^2$ 99th quantile and $M_2$ equal to the $\chi_{p^*}^2$ 25th quantile.

Combining Stage 1 and Stage 2: Use the weights from Steps d and f to compute final weights for all observations as stated in equation (17).
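Continuing the notation of the previous sketches (w1 and w2 for the stage-one and stage-two weights), the final combination of equation (17) is a one-liner; note also that a full implementation of a closely related two-stage principal-component algorithm is available as pcout() in the R package mvoutlier.

# Sketch: combine stage weights via equation (17) and classify outliers.
s <- 0.25
w_final <- ((w1 + s) * (w2 + s)) / ((1 + s)^2)
outliers <- which(w_final < 0.25)   # final outlier classification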
Preliminary Investigation of the Developed Method

As a preliminary step to developing a new outlier detection method, we briefly examined and compared existing methods to determine possible areas of improvement. We selected nine algorithms that appeared to have good potential for finding outliers: the Rocke estimator [25], the sfast estimator [26], the M estimator [27], the MVE estimator [28], the NNC estimator [29], BACON [9], PCDist [10][11], sign1 [12][13] and sign2 [14]. Somewhat similar to our method, the Sign procedure is also based on a type of robust principal component analysis. It obtains robust estimates of location and spread by projecting the data onto a sphere. In this way, the effects of outlying observations are limited, since they are placed on the boundary of the ellipsoid, and the resulting mean and covariance matrix are robust. Standard principal components can thus be carried out on the sphered data without undue influence by any single point (or small subset of points). We considered numbers of attributes p = 10, 20, 30, 40 and critical values α = 0.05, 0.1, 0.15, 0.2, so for each parameter combination we carried out 14 simulations with n = 100 observations. Frequently, results of this type are presented in a 2 × 2 table showing success and failure in identifying outliers and non-outliers, which we henceforth call inliers. In this paper, we present the percentage of false negatives (FN) followed by the percentage of false positives (FP) in the same cell; we henceforth refer to these respective values as the outlier error rate and the inlier (non-outlier) error rate. We propose these names because the outlier error rate specifies the percentage of errors recorded within the group of true outliers, and similarly for the inlier error rate. The true outliers among the 100 observations are observations 10, 16, 18, 22, 23, 25, 27, 29, 30, 47, 66, 70, 72, 80, 84, 90, 99 and 100. This presentation is more compact than a series of 2 × 2 tables and allows us to easily focus on bringing both error rates down to zero.

RESULTS AND DISCUSSION

Even though this algorithm was designed primarily for computational performance at high dimension, we compare its performance against other outlier algorithms in low dimension, because a high dimensional comparison with those methods is not achievable. In the following we examine a variety of outlier configurations in molecular descriptor data. Examination of Table 2 reveals that PrCmpOut performs well at identifying outliers, although it has a somewhat higher percentage of false positives than some of the methods. PrCmpOut has the lowest percentage of false negatives, often by a comfortable margin, and is a competitive outlier detection method. We present simulation results in which the dimension was increased from p = 10 to p = 40, based on the mean of 16 simulations at each level. In contrast to the previous simulation experiment in dimension p = 10, in this case we were not able to examine the performance of the other algorithms, since they were not computationally feasible for these dimensions. The number of observations was held constant at n = 100, as was the number of outliers at 18. With increasing dimension PrCmpOut can identify almost all outliers. None of the known methods experience much success in identifying outliers in small dimensions because, geometrically, the outliers are not very different from the non-outliers. However, as dimension increases, it can be seen how the outliers separate from the non-outliers and become easier to detect. At p = 20 dimensions, barely more than half of the outliers can be detected; at p = 30 dimensions almost 90% are detected; and at p = 40 dimensions more than 95% of the outliers are detected.
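The outlier and non-outlier error rates used in the comparisons below can be computed directly from a vector of flagged observations; flagged is a hypothetical output of any of the detection methods.

# Sketch: outlier error rate (FN) and non-outlier error rate (FP), in percent.
true_out <- c(10, 16, 18, 22, 23, 25, 27, 29, 30, 47, 66, 70, 72, 80, 84, 90, 99, 100)
inliers  <- setdiff(1:100, true_out)
fn <- 100 * mean(!(true_out %in% flagged))   # missed true outliers
fp <- 100 * mean(inliers %in% flagged)       # inliers wrongly flagged
c(outlier_error = fn, nonoutlier_error = fp)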
At p = 20 dimensions, barely more than half of the outliers can be detected; at p = 30 dimensions almost 90% are detected; and at p = 40 dimensions more than 95% of the outliers are detected. The results for PrCmpOut and the rest of the estimators are presented in Figure 1. For an outlier fraction of 10%, all estimators except PCDist perform excellently in terms of the outlier error rate (FN) and detect outliers independently of the percentage of outliers, as seen in the left panel of Figure 1. The average percentages of non-outliers that were declared outliers (FP) differ: PrCmpOut performs best, followed closely by Sign2 and PCDist (below 10%); the rest of the estimators come next with more than 20%. MVE declares somewhat more than 30% of regular observations as outliers. Sign1 performs worst, with the average non-outlier error rate increasing as the fraction of outliers increases. In terms of the outlier error rate, PrCmpOut performs best (11%), followed by the NNC, Sign1, BACON and MVE estimators with error rates between 13% and 34%; PCDist (89%) is again last. Exploratory data analysis is often used to gain some understanding of the data at hand, one important aspect being the possible occurrence of outliers. Sometimes the detection of these outliers is the goal of the data analysis; more often, however, they must be identified and dealt with in order to assure the validity of inferential methods. Most outlier detection methods are not very robust when the multinormality assumption is violated, and in particular when outliers are present. Robust distance plots are commonly used to identify outliers. To demonstrate this idea, we first compute the robust distances based on the sample mean vector and the sample covariance matrix. Points whose distances exceed the χ²_{p,1−α} quantile are usually viewed as potential outliers, and so we label such points accordingly. Figure 2 shows the distance plots of all the methods; the red points are the outliers according to the robust distances. It is important to consider the computational performance of the different outlier detection algorithms in the case of the large Oxazolines and Oxazoles molecular descriptor dataset with n = 100. To evaluate and compare the computational times, a simulation experiment was carried out. The experiment was performed on an Intel Core i3 with 6 GB RAM running Windows 7 Professional. All computations were performed in R i386 2.15.3. The rocke, M, sfast, MVE, NNC, BACON, PCDist, Sign1, Sign2 and proposed PrCmpOut algorithms were used. Fastest is PrCmpOut, followed closely by Sign1, Sign2, PCDist, NNC and BACON; slowest is the sfast estimator, followed by the rocke estimator, the M estimator and MVE. These computation times are presented graphically in Figure 3 on a seconds scale. A fast multivariate outlier detection method is particularly useful in the field of bioinformatics, where hundreds or even thousands of molecular descriptors need to be analyzed. Here we focus on outlier detection among the molecular descriptors, clearly a high dimensional data set. First, columns with MAD equal to zero were removed, and the remaining columns were investigated for outliers. We compare the performance of PrCmpOut on this data set with the Sign2 method; the other algorithms are not feasible due to the high dimensionality. Figure 5 shows the distances (left) and weights (right) for analyzing the Oxazolines and Oxazoles molecular descriptor dataset with the Sign2 method.
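The distance-plot rule described above can be sketched as follows (classical, non-robust version for brevity; a robust variant would substitute robust location and scatter estimates, and this is our illustration rather than code from the paper):

```python
import numpy as np
from scipy.stats import chi2

def distance_plot_outliers(X, alpha=0.025):
    """Flag points whose squared Mahalanobis distance, computed from the
    sample mean and sample covariance, exceeds the chi^2_{p,1-alpha}
    quantile (these would be drawn in red in the distance plots)."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return d2, d2 > chi2.ppf(1.0 - alpha, df=X.shape[1])
```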
A possible explanation for the difficulty experienced by the Sign2 method is a masking effect in the PCA. It is evident that PrCmpOut has better performance than the Sign2 method, which is also apparent from the results in Table 2. We infer that PrCmpOut is a competitive outlier detection algorithm with respect to both detection accuracy and computation time. CONCLUSIONS PrCmpOut is a method for detecting outliers in multivariate data that utilizes inherent properties of the principal components decomposition, together with a robust kurtosis measure, and it demonstrates very good performance for high dimensional data. In this paper we tested several approaches for identifying outliers in the Oxazolines and Oxazoles molecular descriptor dataset. Two aspects seem to be of major importance: the computation time and the accuracy of the outlier detection method. For the latter we used the fraction of false negatives (FN), i.e. outliers that were not identified, and the fraction of false positives (FP), i.e. non-outliers that were declared outliers. PrCmpOut is very fast to compute and can easily handle high dimensions. Thus, it can be extended to fields such as bioinformatics and data mining, where the computational feasibility of statistical routines has usually been a limiting factor. At lower dimensions, it still produces competitive results when compared to well-known outlier detection methods.
2013-12-10T08:35:25.000Z
2013-07-31T00:00:00.000
{ "year": 2013, "sha1": "76f071f11fbebb20682517149fafaf7fd4f331ae", "oa_license": null, "oa_url": "https://doi.org/10.5121/ijdkp.2013.3405", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "76f071f11fbebb20682517149fafaf7fd4f331ae", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
54062767
pes2o/s2orc
v3-fos-license
Eastern Andean environmental and climate synthesis for the last 2000 years BP from terrestrial pollen and charcoal records of Patagonia Introduction The Southern Hemisphere Westerly Winds (SWW) constitute an important zonal circulation system that dominates the dynamics of Southern Hemisphere mid-latitude climate. Furthermore, they influence the global ocean circulation through wind-driven upwelling of deep water in the Southern Ocean and may play a significant role in the global climate system through the control of the CO2 budget in the Southern Ocean (Anderson et al., 2009; Toggweiler et al., 2006; Varma et al., 2011). The variability of the SWW and the impact of various forcings on them have been discussed through different proxy and modelling approaches, especially at millennial time scales during the Holocene (e.g. Fletcher and Moreno, 2011; Kilian and Lamy, 2012; Lamy et al., 2001, 2010; Varma et al., 2012; Whitlock et al., 2007). Little, however, is known about climatic changes in the Southern Hemisphere in comparison to the Northern Hemisphere, owing to the low density of proxy records with adequate chronology and sampling resolution to address environmental changes of the last 2000 years (Moy et al., 2009; Villalba et al., 2009). Nevertheless, the few available records point towards significant fluctuations in both temperature and precipitation during this period (Jones and Mann, 2004; Masiokas et al., 2009; Tonello et al., 2009). On this time scale orbital boundary conditions changed only slightly, and thus internal variability, solar and volcanic forcing played a dominant role before human influence became noticeable (Jones and Mann, 2004; Wilmes et al., 2012). The "Little Ice Age" (LIA) usually refers to climatic anomalies over the Northern Hemisphere between the 13th and mid-19th century (750-150 cal yrs BP). The LIA is well documented in northern Europe and North America, where a wide variety of chronicles, historical documents, proxy-based reconstructions and temperature measurements indicate cooler and wetter conditions (Meyer and Wagner, 2008). Within the LIA, a period with even lower temperatures was the Maunder Minimum (MM; AD 1645-1715/305-235 cal yrs BP). Proxy and modelling studies point to a prominent influence of solar forcing causing the MM (Eddy, 1976; Zorita et al., 2004). At the beginning of the last millennium, a period of warmer conditions, especially over Europe, has been documented: the so-called Medieval Warm Period (MWP; ca 9th-13th centuries/1150-750 cal yrs BP; Jones et al., 2001; Osborn and Briffa, 2006). Recently, Neukom et al. (2010, 2011, 2014) pointed to a number of climatic variations occurring during the last millennium in Southern South America. The authors showed that the Southern Hemisphere response to external forcing may be delayed by approximately two centuries with respect to Northern Hemisphere medieval times with high temperatures, with coherent extreme cool conditions in both hemispheres around AD 1600 (350 cal yrs BP).
Pollen records derived from lakes and bogs represent one of the most abundant paleoclimate archives in South America. Since the pioneering work by Auer (1933, 1958), many studies have reconstructed the ecological and climatic history over the Pleistocene and Holocene periods at millennial time scale (e.g. Heusser and Heusser, 2006; Mancini et al., 2008; Markgraf et al., 2003; Moreno et al., 2009). There are few pollen-based paleoenvironmental reconstructions with highly precise chronology in Patagonia for the last millennia (Fletcher and Moreno, 2012b; Huber and Markgraf, 2003a; Moreno et al., 2014; Whitlock et al., 2006; Wille et al., 2007). These authors presented different Patagonia climatic variability scenarios for the last 2000 years. Moy et al. (2009) and Kilian and Lamy (2012) suggest that the different signals shown in these data sets could be attributed to the location of the records in different ecological environments; the depositional environment; and local differences in the sensitivity of eastern Andean vegetation ecotones to changes in precipitation. Since 2009, new pollen and charcoal records from bogs and lakes in northern and southern Patagonia on the east side of the Andes have been published with an adequate calibration of pollen assemblages related to modern vegetation and ecological behaviour (Bamonte and Mancini, 2011; Bamonte et al., 2014; Echeverria et al., 2014; Iglesias, 2013; Iglesias and Whitlock, 2014; Iglesias et al., 2012, 2014; Mancini, 2009; Marcos et al., 2012a, b; Sottile et al., 2012; Sottile, 2014). In this work we improve the chronological control of some previously published eastern Andean sequences and integrate the pollen and charcoal datasets available east of the Andes to interpret possible environmental and SWW variability at centennial time scales. Through the analysis of modern and past hydric balance dynamics we compare these scenarios with other western Andean SWW-sensitive proxy records for the last 2000 years. Modern eastern Andean climate Most of Patagonia is dominated by air masses coming from the Pacific Ocean. The Patagonian region is located between the semipermanent anticyclones of the Pacific and the Atlantic oceans at approximately 30° S and the subpolar low pressure belt at approximately 60° S (Prohaska, 1976). The strong, constant west winds (westerlies) are dominant across the region. The seasonal movement of the low and high pressure systems and the equatorward ocean currents determine the precipitation pattern. During winter, the subpolar low is more intense. This situation, combined with the equatorial displacement of the Pacific High Pressure System and with ocean temperatures that are higher than the continental temperatures, leads to an increase in precipitation during this season. The northeastern and the southeastern parts of the region are additionally affected by air masses coming from the Atlantic Ocean. This Atlantic influence results in a more even seasonal distribution of precipitation in this part of Patagonia (Paruelo et al., 1998). The Andes play a crucial role in determining the climate of Patagonia. The north-south distribution of the mountains imposes an important barrier for humid air masses coming from the Pacific Ocean. Most of the water in these maritime air masses is dropped on the Chilean side, and the air becomes hotter and drier through adiabatic warming as it descends on the Argentine side of the Andes (Fig. 1a). The westerlies are strongest during austral summer, peaking between 45 and 55° S. During austral winter, the jet stream moves into subtropical latitudes (its axis is at about 30° S) and the low-level westerlies expand equatorward but weaken, particularly at ∼ 50° S (Garreaud et al., 2009) (Fig. 1b). Over Patagonia, the inter-annual correlation between precipitation and zonal wind at 850 hPa (U850), using annual means, exhibits positive values increasing from the Pacific to a maximum along the Chilean coast and the western slope of the Andes (r(P, U850) ∼ 0.8), a sharp transition just to the east of the mountain ridge, and negative values over the Argentinean Patagonia. Years with stronger than average westerly flow feature increased precipitation to the west of the Andes and decreased precipitation over the lowlands to the east. The marked west-east precipitation gradient over Patagonia is always present, but it is slightly weaker in those years with weaker than average westerly flow aloft (Garreaud et al., 2013).
When averaged over the year, an ENSO warm event (positive multivariate ENSO index values) is associated with an overall decrease in the strength of the wind field and a slight reduction in precipitation in western Patagonia (Moy et al., 2009). Northern Patagonia exhibits an overall reduction in summer precipitation and warmer surface air temperature. Of particular relevance is the frequent occurrence of long-lived, tropospheric deep anticyclonic anomalies west of the southern tip of South America (below 40° S and centered at 50° S, 100° W) during El Niño years (Rutllant and Fuenzalida, 1991). These phenomena favour a northward displacement of the westerlies. Northern Patagonia vegetation Eastern Andean communities in Northern Patagonia between 40 and 44° S change along the west-east precipitation gradient (Iglesias et al., 2014). This transition occurs where annual precipitation drops below ca 1800 mm yr−1. Finally, a third transition takes place at ca 71-71.2° W, where the easternmost small outpost tree populations (Nothofagus pumilio and Austrocedrus chilensis) intermingle within the Patagonian steppe matrix. This transition coincides with rainfall areas below ca 600-800 mm yr−1 (Iglesias et al., 2014). South of 44° S, Austrocedrus chilensis disappears and only Nothofagus tree patches intermingle between steppe patches (Veblen et al., 1997). Patagonian grass and shrub steppes cover plains and plateaus eastward of ∼ 70° W between 600 and 300 mm yr−1, with a significant decrease in above-ground vegetation cover following the precipitation gradient (León et al., 1998). Below 300 mm yr−1, the Patagonian steppe is replaced by "Monte" shrubland vegetation (Fig. 1a). Monte shrub communities are arranged as a two-phase mosaic composed of perennial grass and shrub-dominated patches alternating with sparse cover (Bisigato et al., 2009). Southern Patagonia vegetation South of 47° S, the forest communities become impoverished owing to the low temperatures of the growing season. Mixed evergreen-deciduous forest of Nothofagus betuloides and N. pumilio develops on eastern Andean lowland areas with annual precipitation above 800 mm yr−1 (Mancini et al., 2008). Between 1000 and 600 mm of annual precipitation, closed deciduous forests of N. pumilio develop from the tree line to the lowlands. These closed forest communities become progressively open, with tree patches of N. pumilio and N. antarctica with high cover of tall xerophytic shrub and grass species, between 600 and 400 mm yr−1. Eastward, between 400 and 200 mm yr−1, a grass steppe dominated by Festuca spp., cushion plants and isolated shrub patches covers a narrow and discontinuous strip along the extra-Andean Patagonian plateau and the southeastern tip of the continent (Boelcke et al., 1985; Mancini et al., 2012). On the Patagonian plateau, the shrub steppe distribution is primarily related to the availability of water, which is controlled by unpredictable precipitation inputs, runoff redistribution and edaphic diversity, and is clearly reflected by the vegetation differences between the plateaus and the valleys and ravines ("cañadones") (Mancini et al., 2012).
Fire regime The occurrence of wildfires is largely controlled by climatic variability through its modification of fine fuel build-up rates and fuel desiccation. In the easternmost Patagonian communities, where steppe bunchgrasses dominate, fires are limited by fuel amounts and continuity (Kitzberger, 2012; Sottile et al., 2012). Because fine fuels (grasses) are highly responsive to precipitation pulses, during rainy growing seasons systems in which fire normally does not spread efficiently, due to a lack of fuel loads, suddenly become more prone to developing large fires (Morgan et al., 2003). Years with high net primary productivity and rainy springs/summers have also been highlighted as factors favouring fire occurrence in Monte shrubland communities (Hardtke, 2014). Further west, in the transition or higher in altitude, in the realm of the tall Nothofagus forests, fine fuels are less important and coarse fuels that require long drying periods dominate. Here fires are exclusively associated with strong droughts lasting several months, beginning during the winter, the time when soils are replenished with water (Kitzberger, 2012). Whenever dry winter-springs coincide with warm summers, wet forests ignite and spread fire without significant natural fire breaks (Mermoz et al., 2005). These strong drought events not only produce larger fires but also more severe events that create conditions providing fewer regeneration opportunities for obligate seed-dispersed species (such as N. dombeyi or N. pumilio; Kitzberger et al., 2005) and more opportunities for the rapid expansion of resprouting shrubland species. Markgraf and Anderson (1994) postulated that even though lightning is scarce in southern Patagonia, it might have been more frequent in the past under different climatic conditions as a fire ignition source. Material and methods In order to reconstruct the past 2000 years of environmental variability on different landscapes of eastern Andean Patagonia, we selected continuous pollen and charcoal records from lakes and peat bogs (Table 1) whose data sets fulfil some qualitative criteria, explained as follows. Dataset availability: pollen records previously published and available at the Neotoma Paleoecology Database (http://www.neotomadb.org) and pollen/charcoal records from the Paleoecology and Palynology Lab database (UNMdP-IIMyC, CONICET). Chronology and temporal resolution: proxy data series must have a chronology based on more than 2 datings for the last 2300 yrs BP, and the time series should have at least a mean sampling resolution of one sample per 200 yrs. Also, the sites selected for this work fulfil more than four of the criteria for 2k proxy records for paleoclimate reconstructions according to the PAGES-2k criteria (see Supplement, Sect. 3, for details). We constructed past pollen-based paleohydric balance indices. Main pollen taxa were considered as suggesting above/below-average hydric availability at every site, following the paleoecological and modern pollen-vegetation calibrations highlighted in previously published works (Bamonte and Mancini, 2011; Bamonte et al., 2014; Bianchi and Ariztegui, 2012; Echeverria et al., 2014; Iglesias, 2013; Iglesias and Whitlock, 2014; Iglesias et al., 2012, 2014; Mancini, 2007, 2009; Mancini et al., 2012; Marcos and Mancini, 2012; Marcos et al., 2012a; Paez et al., 2001; Sottile et al., 2012; Sottile, 2014). Each paleohydric balance index was calculated as the standardized ratio between the sum (in percentages) of positive hydric availability taxa and the sum of negative hydric availability taxa (see Supplement, Sect. 4, for details).
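A minimal sketch of the index construction (ours, for illustration; the taxon groupings are site-specific and detailed in the Supplement) is:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def paleohydric_index(pos_pct, neg_pct):
    """Standardized ratio between the summed percentages of positive- and
    negative-hydric-availability taxa, per pollen sample."""
    ratio = np.asarray(pos_pct, float) / np.asarray(neg_pct, float)
    return (ratio - ratio.mean()) / ratio.std(ddof=1)

# General trend: locally weighted smoothing with a 0.2 span, as in the text
# (bootstrap confidence bands would be added by resampling the samples).
# `ages`, `pos`, `neg` are hypothetical per-sample arrays for one record:
# trend = lowess(paleohydric_index(pos, neg), ages, frac=0.2)
```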
Standardization of every ratio was calculated by subtracting the mean and dividing by the standard deviation. In order to highlight the general trend of every site index, we applied a locally weighted scatterplot smoothing spline with a 0.2 span (Cleveland, 1979, 1981) and plotted the 95% confidence band based on a 999-replicate bootstrap technique. The modern hydric balance of every site (Table 1) was compared to paleohydric values by reference to the pollen samples with an age of ca AD 1900 in every record (preventing possible changes in pollen spectra related to European settlement). Also, composite pollen-based indices for Northern and Southern Patagonia were constructed using all datasets available for each region. In order to highlight the general positive/negative trends of every regional index, we applied a locally weighted 0.2 smoothing spline and plotted the 95% confidence band based on a 999-replicate bootstrap technique. Northern Patagonia Pollen-based paleohydric balance indices allow us to reconstruct past variability, especially in terms of seasonality. Assuming that Northern Patagonian forest and Monte shrubland development are favoured by spring-summer rain, positive (negative) values suggest above (below) average spring-summer precipitation. Nothofagus-Austrocedrus forest and Nothofagus forest/steppe transition records present mainly negative values between 1600 and 750 cal yrs BP (Fig. 2b-d). Since 750 cal yrs BP, there is a rising trend to positive paleohydric values peaking ca 250-300 cal yrs BP (Fig. 2b-d). On the contrary, Lake Trébol presents the opposite trend during the last 2000 yrs. The comparison of past paleohydric balance to modern hydric balance suggests > 493.7 mm yr−1 in Lake Trébol; < 8.60 mm yr−1; < 143.2 mm yr−1 in Lake Mosquito; and < 268.4 mm yr−1 in Mallín Pollux between 1600 and 750 cal yrs BP. Even though Bajo de la Quinta shows mainly negative values, its general paleohydric trend follows the general forest and forest-steppe transition record behaviour, showing the highest paleohydric values after 750 yrs BP (Fig. 2e). A comparison with modern hydric balance values suggests that Bajo de la Quinta registered paleohydric values < −516.3 mm yr−1 between 1600 and 750 cal yrs BP. Fire activity presents an opposite behaviour between Andean communities and Monte shrubland. The highest charcoal accumulation rates (CHAR) are registered between 2000 and 750 cal yrs BP at Andean sites, while the highest CHAR values in Bajo de la Quinta occur after 500 cal yrs BP. Mallín Pollux and Bajo de la Quinta also register high CHAR values for the last 100 years, which might be related to European settlements. Southern Patagonia Southern Patagonian pollen datasets were classified into two categories (local and regional, sensu Jacobson and Bradshaw, 1981) according to the pollen source area and the variables selected to calculate the past paleohydric balance index. The local category involves pollen records that register past local vegetation variations. These records present a close correspondence between the vegetation surrounding the deposition site and the modern pollen sample assemblages (PAA, PAB, MPD, LT, CV). Thus the interpretation of the paleohydric balance index from these sites may be related to changes in local conditions. The regional category includes records whose recent pollen samples present higher amounts of pollen types arriving from longer distances (> 3 km southwestward) than pollen from the areas surrounding the deposition site (CF and RR). Thus, we interpret regional paleohydric balance indices not as changes in hydric balance at a single site but throughout the forest-steppe ecotone region. Southern Patagonian forest and forest-steppe ecotone paleohydric indices present positive values between 2000 and 750 cal yrs BP, suggesting above-average water availability in Andean communities (Fig. 3). On the contrary, steppe records present mainly negative values, suggesting dry conditions in extra-Andean areas (Fig. 3). Comparison with modern hydric balance values for pollen records registering local environmental variability suggests higher than modern hydric balance values for PAA and PAB (> 104.5 and 67.2 mm yr−1, respectively) previous to 750 cal yrs BP. Steppe sites suggest values similar to modern ones in MPD (∼ −163.2 mm yr−1), higher than modern values in LT (> −146.6 mm yr−1) and lower than modern values in CV (< −303.7 mm yr−1). After 750 cal yrs BP, forest and forest-steppe sites exhibit a decreasing trend in their paleohydric balance indices (Fig. 3). The PAA and PAB indices suggest paleohydric values < 104.5 and < 67.2 mm yr−1, respectively. Steppe sites exhibit the opposite paleohydric trend, toward positive values. The three steppe sites suggest significantly higher than present values of hydric balance (MPD > 163.2 mm yr−1; LT > 146.6 mm yr−1; CV > −303.3 mm yr−1). Fire activity exhibits synchronous CHAR patterns, especially between 2000-1700 and 750-250 cal yrs BP, in southern Patagonian charcoal records. Controls over hydric balance in Northern and Southern Patagonia The late Holocene changes in paleohydric balance reconstructed from Northern Patagonian records can be interpreted in terms of latitudinal variation of the SWW belt, using the modern latitudinal distribution of precipitation seasonality over Patagonia (Fig. 1b) as an analogue. Thus, modelling all Northern Patagonian datasets together, we constructed a composite pollen-based Northern Patagonia SWW belt latitudinal variation index between 40 and 45° S (Fig. 4c). This pollen-based index displays high precipitation seasonality before 750 cal yrs BP. Such high seasonality likely suggests a more poleward position of the SWW belt, reflecting precipitation seasonality similar to the present day (Fig. 4).
Nevertheless, Lake Trébol shows high values of the paleohydric balance index before 750 cal yrs BP and lower values, around present-day hydric values, since 750 cal yrs BP. This pattern, together with the general trend of most northern Patagonian paleohydric balance indices, may reflect intense SWW during winter favouring higher precipitation amounts over areas close to the Andean divide, linked to a steeper west-to-east precipitation gradient that softened toward present conditions after 750 cal yrs BP. Since 750 cal yrs BP the Northern Patagonia pollen-based index shows a remarkable decrease in precipitation seasonality, peaking between 400 and 200 cal yrs BP (Fig. 4c). This low seasonality period likely reflects a northward expansion of the SWW favouring increased spring-summer precipitation near the Andes. The paleohydric balance of Bajo de la Quinta (Fig. 2e) at the Atlantic coast, similar to that of forest environments, suggests that between 400 and 200 cal yrs BP Atlantic humid air masses reached the continent, probably under weak SWW (Marcos et al., 2012a, 2014). Therefore we can interpret dominant summer-like conditions in terms of hydric balance in northern Patagonia between 1600 and 750 cal yrs BP and winter-like conditions between 750 and 200 cal yrs BP. During the last 200 cal yrs BP, there is a remarkable decrease in the Northern Patagonia pollen-based index, suggesting higher than before precipitation seasonality toward present-day conditions between 40 and 45° S. Even though pollen spectra might be biased for the last 100 years, the decreasing trend in the Northern Patagonia pollen-based index precedes the European arrival (Fig. 4c). Precipitation seasonality inferences coincide with centennial fire activity in northern Patagonia. We found an antiphase behaviour of fire occurrence between western and eastern environments. During southward displacement of the SWW, fire activity increases in forest communities, likely related to coarse fuel desiccation, while low biomass availability in the eastern Monte shrublands prevents fire propagation (Fig. 2e). On the contrary, during periods of winter-like conditions, fire activity increases in the Monte shrublands, likely related to an increase in biomass favoured by humid Atlantic air masses. Iglesias and Whitlock (2014) presented general trends of northern Patagonia biomass burning for the last 18 000 cal yrs BP and compared them to environmental and archaeological information. They interpret that variations in indigenous population densities were not associated with fluctuations in regional or watershed-scale fire occurrence, suggesting that climate-vegetation-fire linkages in northern Patagonia evolved with minimal or very localized human influence before European settlement (Iglesias and Whitlock, 2014). On the Atlantic coast, archaeological records suggest high anthropogenic activity ca 1000 cal yrs BP with a decreasing trend up to the present day (Marcos and Ortega, 2014). Thus, the increase in fire activity since ca 500 cal yrs BP in Bajo de la Quinta is likely related to climate variability and lightning sources. The late Holocene changes in paleohydric balance reconstructed from Southern Patagonian records can be interpreted in terms of intensity variation of the SWW belt, using the modern latitudinal distribution of precipitation seasonality over Patagonia (Fig. 1b) as an analogue. These sequences are not significantly affected by seasonal variability but mainly by changes in SWW intensity (Garreaud et al., 2013).
During years with stronger than average SWW, precipitation increased to the west of the Andes and decreased over the lowlands to the east (Garreaud et al., 2013). Therefore we expect hydric balance to increase in forest areas (especially those with present-day positive hydric balance values) and decrease in grass steppe extra-Andean environments. Conversely, the marked west-east precipitation gradient is slightly weaker in years with weaker than average westerly flow; thus we expect lower than average hydric balance values in forest areas and higher than average hydric balance values in grass steppe extra-Andean environments. Atlantic humid air masses probably increase hydric balance values in steppe records next to the Atlantic coast during periods of weaker westerlies (Agosta et al., 2015). Thus, modelling all Southern Patagonian datasets together, we constructed a composite pollen-based Southern Patagonia SWW intensity variation index between 48 and 52° S (Fig. 4) by considering forest and forest-steppe ecotone index values and inverse steppe index values. Figure 4 shows the scatterplot dataset and smoothing spline of the local and regional records from southern Patagonia. This pollen-based index displays intense SWW before 750 cal yrs BP and weaker SWW since 750 cal yrs BP, peaking ca 500-600 cal yrs BP (Fig. 4). The Southern Patagonia index increases slightly toward ca 250-300 cal yrs BP, suggesting an intensification pulse of the SWW. Since then, the Southern Patagonia index values decrease to modern ones; thus we interpret a slight weakening of the SWW up to modern conditions. In contrast to the Northern Patagonia regional fire behaviour, Southern Patagonia fire activity trends in forest and steppe communities are synchronous. The maximum fire activity in southern Patagonia occurs during weaker westerlies (in steppe environments especially previous to 1600 cal yrs BP, Fig. 2). Therefore we interpret an antiphase behaviour between northern and southern forest communities and an in-phase behaviour of fire occurrence in extra-Andean steppe and Monte shrublands. Anthropogenic fires may represent an extra driving factor favouring fire activity between 1000 and 2000 cal yrs BP in southwestern Patagonia, owing to the more intense and extensive archaeological signal registered for this area (Franco et al., 2004). However, the fire activity registered between 250 and 750 cal yrs BP is probably related to natural lightning sources, since the archaeological signal decreases during the last 1000 cal yrs BP in southwestern Patagonia in relation to an eastward population migration (Franco et al., 2004). Comparison with western Andean precipitation and SWW belt records The timing of the major SWW changes in latitudinal shift and intensity recorded by the pollen-based eastern Andean Northern and Southern Patagonia indices constructed in this work, at 750 cal yrs BP (AD 1200), roughly corresponds to a major reorganization of the climate system throughout the world, which is frequently associated with the Little Ice Age originally described in the Northern Hemisphere. Here, we compare our inferred SWW variation during the last 2000 years to western Andean regional precipitation and SWW reconstructions. Fletcher and Moreno (2012) studied a pollen and charcoal record from Laguna San Pedro (38° S, Fig. 1), located on the western side of the Andes, and constructed a Nothofagus vs.
Poaceae (N/P) index to infer changes in humidity during the last 1500 years. The N/P index shows behaviour similar to our pollen-based Northern Patagonia SWW index (Fig. 4a). Indeed, a brief peak in both indices is registered ca 1100/1400 cal yrs BP, suggesting a short period of lower precipitation seasonality within a long-term trend of higher precipitation seasonality on both sides of the Andes range. The charcoal record from Laguna San Pedro coincides with eastern Andean Northern Patagonia fire activity during the last 2000 years. Bertrand et al. (2014) constructed a precipitation seasonality index by analysing the past two millennia of sedimentation changes at Quitralco fjord (46° S, Fig. 1). The authors suggest a poleward-shifted SWW belt between 1350 and 750 cal yrs BP, followed by a gradual shift towards the equator between 750 and 450 cal yrs BP, and stabilization in a sustained northward position between 450 and 0 cal yrs BP (Fig. 4b). The most recent return to a slightly poleward-shifted SWW recorded at Quitralco fjord is in agreement with recent trends observed in climatological data (Bertrand et al., 2014). The coincidence between Bertrand's sedimentation-based seasonality index and our composite pollen-based Northern Patagonia SWW belt index supports the reliability of our proposed Northern Patagonian past environmental and climate variability scenarios. Similarly, other marine records show increases in precipitation of SWW origin between 750 and 200 cal yrs BP at 41° S (Lamy et al., 2001) and 44° S (Sepulveda et al., 2009). In Southern Patagonia, Lake Potrok Aike (52° S) is located where precipitation is negatively correlated with westerly wind strength (Haberzettl et al., 2005). These authors inferred increased lake levels associated with easterly humid flows during weaker westerlies between 490 and 0 cal yrs BP. Further south, the MA1 stalagmite record (53° S, Fig. 1) also provides evidence for a decrease in annual precipitation, and therefore a weakening of the westerlies, since 1000 cal yrs BP (Schimpf et al., 2011, Fig. 4d), synchronously with our composite pollen-based Southern Patagonia SWW belt index. Similarly, the sediment record from Lago Fagnano (Waldmann et al., 2010; Fig. 1) suggests a decrease in precipitation of westerly origin, represented by a decrease in iron supply, between 750 and 100 cal yrs BP (Fig. 4e). These independent records, and the interpretations by Koffman et al. (2013) of westerlies strength through changes in the grain size of dust particles in the WAIS Divide ice core at the Antarctic Peninsula, support the sensitivity of our Southern Patagonia SWW belt composite pollen-based index to environmental variability. The slight intensification of the SWW belt ca 300 cal yrs BP coincides with major glacier advances in southern Patagonia (Aniya, 2013; Masiokas et al., 2009; Mercer et al., 1982; Strelin et al., 2008; Wenzens, 1999) and a Southern Hemisphere extreme cold period inferred by Neukom et al. (2014). Therefore the synergic direction of low temperatures and an increase in hydric balance may have favoured Maunder Minimum glacier advances in Southern Patagonia. Changes in the SWW belt and possible forcing mechanisms Our SWW belt reconstruction suggests southward-intensified westerlies since ca 1600 cal yrs BP, including the MCA (1150-750 cal yrs BP), and northward, weaker westerlies during the LIA (750-150 cal yrs BP, Fig. 4c).
During the LIA, atmospheric cooling in the Southern Hemisphere would have caused a northward shift of the SWW and a contraction of the Southern Hemisphere Hadley cell (Koffman et al., 2013). General circulation model (GCM) experiments have shown that the latitudinal extent of the Hadley cell circulation is sensitive to changes in global surface temperatures, with warmer temperatures causing an expansion of the Hadley cell (Frierson et al., 2007). These changes in the Hadley cell width are likely driven by shifts in the latitude where baroclinic eddies begin to occur; as surface temperatures warm, the transition from baroclinic stability to instability shifts poleward, driving the eddy-driven Southern Hemisphere storm track southward (Frierson et al., 2007; Lu et al., 2010). This proposed mechanism implies that the SWW respond to surface temperature changes on decadal to centennial timescales (Koffman et al., 2013). The mechanism proposed above differs from the seesaw-type redistribution of heat between the hemispheres that was invoked to explain the migration of the SWW belt during the last deglaciation (Anderson et al., 2009; Toggweiler, 2009). This suggests that the SWW belt may respond to different forcing mechanisms at different timescales (Bertrand et al., 2014). Varma et al. (2011) presented proxy and model evidence that centennial-scale variability in the position of the SWW is significantly influenced by fluctuations in solar activity during the past 3000 years. They argued that periods of lower solar activity were associated with annual-mean northward shifts of the SWW, whereas periods of higher solar activity were linked to annual-mean poleward displacements of the SWW. Finally, our results coincide with other inferences, predominantly from sea-surface temperature and modelling data, about ENSO activity over the last 1500 years, whereby during the MCA La Niña-like or weak El Niño conditions and probably positive SAM values dominated in Southern South America (Graham et al., 2010; Mann et al., 2009; Rein et al., 2004; Seager et al., 2007). On the contrary, during the LIA more intense El Niño-like conditions and negative SAM values dominated (Mann et al., 2009; Rein et al., 2004; Villalba et al., 2012). The marked decrease in our Northern Patagonia pollen-based index, suggesting a southward shift of the SWW belt storm track during the last decades, coincides with modern climate data measurements (Archer and Caldeira, 2008; Hu and Fu, 2007) linked to the poleward migration of the descending branch of the Hadley cell (Villalba et al., 2012).
Conclusions Due to the locations of the Northern (40-45° S) and Southern Patagonia (48-52° S) pollen sites, shifts in the latitude and strength of the SWW result in large changes in hydric availability in forest and steppe communities. Therefore, we can interpret the available fossil pollen datasets as changes in paleohydric balance at every single site through the construction of paleohydric indices and comparison to charcoal records during the last 2000 cal yrs BP. Our composite pollen-based Northern and Southern Patagonia indices can be interpreted as changes in the latitudinal position and intensity of the SWW, respectively. Our eastern Andean pollen and charcoal record synthesis suggests SWW variations during the last 2000 cal yrs BP at centennial scales, with poleward SWW between 1750 and 750 cal yrs BP and northward, weaker SWW between 750 and 200 cal yrs BP. These SWW variations are synchronous with the major shifts in Patagonian fire activity. We found an in-phase fire regime (in terms of timing of biomass burning) between the northern Patagonia Monte shrubland and Southern Patagonia steppe environments. Conversely, there is an antiphase fire regime between the Northern and Southern Patagonia forest and forest-steppe ecotone environments. For the last 200 cal yrs BP we conclude that the SWW belt was more intense and poleward than during the previous interval, although the last 100 cal yrs BP are confounded by European settlement.
Comparison with other precipitation- and SWW-sensitive Patagonian records from the western Andes presents coincident late Holocene climatic scenarios. Our composite pollen-based SWW indices show the potential of integrating pollen datasets at regional scales to improve the understanding of paleohydric variability, supported by strongly calibrated pollen-vegetation relationships, especially for the last two millennia. However, the scarce availability of continuous pollen or charcoal records in eastern extra-Andean environments still challenges the understanding of past environmental changes in eastern Andean Northern and Southern Patagonia. Our results encourage future palynological research to develop new pollen datasets with high sample resolution and chronological control for the last millennia, in order to calibrate pollen records against other decadal-resolution proxies (e.g. dendrochronological data, sedimentary or isotopic records). The correspondence of our SWW reconstruction with the proposed ENSO variability during the LIA and MCA suggests an ENSO-Patagonia teleconnection at centennial scales.
2018-11-30T20:53:21.627Z
2015-06-03T00:00:00.000
{ "year": 2015, "sha1": "0e5115b752798bca97785d3974bf83fc7905d1d8", "oa_license": "CCBY", "oa_url": "https://ri.conicet.gov.ar/bitstream/11336/162015/2/CONICET_Digital_Nro.21fcd610-5907-498f-9f93-efd3b0aa7f20_A.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "e48f80609d57ddd056cfacc769e14b51ec48dd6d", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Geography" ] }
225584779
pes2o/s2orc
v3-fos-license
Simulation Analysis of Safe Evacuation in a Railway Tunnel Emergency Rescue Station This paper aims to improve the design parameters of the evacuation passageways of emergency rescue stations in long railway tunnels, based on the principles of minimum evacuation time and best evacuation route, and to study the impact on evacuation time of the increased evacuation speed caused by personnel panic. The evacuation time for different combinations of evacuation passageway spacing, evacuation passageway width and platform width is calculated with the Pathfinder simulation software, considering evacuation characteristics, speed and path. The simulation takes as an example a fire accident on an 18-car passenger train stopped at an emergency rescue station in a tunnel. The study results show that: 1) Due to panic, the evacuation speed increases, so the population flow rate increases and the evacuation time becomes shorter. 2) The optimal spacing for the evacuation passageways of the emergency rescue station is 50 m, the optimal width is 3.5 m, and the platform width should not be less than 2.5 m. Introduction With the continuous increase in the number of long tunnels and large-scale tunnel groups, people are paying more and more attention to the evacuation of people after a fire in a tunnel. According to TB 10020-2017, tunnels or tunnel groups with a length of 20 km or more shall be provided with emergency rescue stations, and the distance between emergency rescue stations shall not be greater than 20 km [1]. Much research has been carried out on emergency rescue stations at home and abroad. However, the high construction cost of emergency rescue stations has become a focus of tunnel disaster prevention and rescue design, and any parameter standards aimed at reducing the cost of emergency rescue stations must rest on the premise of ensuring the safe evacuation of personnel. At the same time, scholars at home and abroad have done a lot of research on the evacuation of personnel at emergency rescue stations in railway tunnels. Xu Zhisheng et al. [2] used Pathfinder software to simulate the personnel return route and the evacuation process at different evacuation exit distances during a tunnel train fire, analyzed the necessary evacuation time and its influencing factors, and selected the best evacuation mode; Wang Mingnian et al. [3] derived a calculation formula for the evacuation time, proposed control standards for the safe evacuation time of railway tunnel personnel, and gave the maximum overload number of trains for various types of disaster prevention and evacuation facilities; Zhang Hao [4], based on Pathfinder, put forward the behavioural characteristics of people in a fire as the basis for whether they can escape safely; Caliendo et al. [5] found that evacuation from a tunnel is safe when the time before people start to walk is short and the walking speed is rather high; Ronchi et al. [6] combined the use of a simplified egress modelling method and advanced agent-based evacuation simulations, an approach deemed to facilitate fire evacuation safety assessment in underground physics research facilities by optimizing the simulation of relevant fire risk scenarios; Nagatani et al. [7] studied the escape of personnel in dark environments and obtained the behavioural characteristics and evacuation times of personnel in a basically invisible environment.
This article conducts research on evacuation at emergency rescue stations in long railway tunnels and determines design ideas and methods based on Pathfinder. This not only ensures the safety of personnel evacuation when an accident train is parked at the emergency rescue station, but also allows the cost of the emergency rescue station to be controlled, which can guide the design of railway tunnel emergency rescue stations [8]. Many researchers in China use Pathfinder software to conduct evacuation research. Tian Xin [9] used the Pathfinder evacuation software to simulate and compare the evacuation of subway trains and subway stations, and concluded that platform evacuation is better than tunnel evacuation; Xu Yanqiu et al. [10] used Pathfinder's SFPE (Society of Fire Protection Engineers) mode to calculate the time required for the evacuation of all employees after a fire accident; Li Tiansheng et al. [11] used Pathfinder software to perform a real-world evacuation simulation of an underground transportation hub in Chongqing; Liu Songtao et al. [12] used Pathfinder software to simulate five possible evacuation paths after a train fire in a double-bore single-track tunnel. These studies all show that Pathfinder software plays an important role in the simulation of personnel evacuation. Software introduction Pathfinder is a simulator for crowd evacuation and movement. It is mainly used to simulate fires and the evacuation of people in various emergency situations. It uses technologies from the fields of computer graphics simulation and game characters to perform virtual drills of the movements of each individual in multiple groups, so that it can accurately determine the escape path and escape time of each individual when a fire occurs [13]-[14]. In the created simulation model, functions such as setting personnel parameters, adding evacuation channels, and adding obstacles are provided to make the simulation environment more consistent with the layout of emergency rescue stations. In the Pathfinder simulation, people choose the safest shortest evacuation route (the best evacuation route) when they evacuate. Model building The basic parameters of personnel in Pathfinder include shoulder width, height and evacuation speed. An accurate setting of the shoulder width makes the evacuation result closer to the escape situation in a real fire. The human body parameters in this paper are the average values for adult males, adult females, children and the elderly. The overall evacuation speed of the crowd affects the accuracy of the final evacuation time. In this model, considering the psychological panic factors of the personnel, the emergency evacuation speed of different types of personnel under the influence of panic in the tunnel is variable; however, the evacuation speeds that account for psychological panic are applied only to the two carriages nearest the fire. The calculated personnel parameters are shown in Table 1. When calculating each combination, it is assumed that the width of the emergency rescue station platform is the normal one (2.5 m) and does not affect the evacuation speed of the personnel. The evacuation model of the emergency rescue station is shown in Figure 1 (schematic diagram of the emergency rescue station [8]).
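Before examining the Pathfinder output, a first-order hydraulic (SFPE-style) hand estimate of door-controlled egress time is useful as a sanity check on the simulated times; the following sketch is our illustration, with representative rather than paper-specific values:

```python
def egress_time_estimate(n_people, n_doors, door_width_m,
                         walk_dist_m, walk_speed_mps,
                         specific_flow=1.3):
    """Walking time plus queueing time at the exits.
    specific_flow ~ 1.3 persons/(m*s) is a typical doorway value in the
    SFPE hydraulic model; it is not a parameter taken from this paper."""
    queue_time = n_people / (specific_flow * door_width_m * n_doors)
    return walk_dist_m / walk_speed_mps + queue_time

# e.g. one fully loaded 118-person car discharging through two 1 m doors,
# with an assumed 25 m walk at 1.2 m/s:
t = egress_time_estimate(118, 2, 1.0, walk_dist_m=25.0, walk_speed_mps=1.2)
```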
In the calculation of the model, the behavioural response characteristics of personnel, the effective width of the evacuation passageway, and exit blocking have been considered. The specific parameters are as follows: 1) The height of the emergency rescue platform (the distance from the floor of the carriage to the platform) is set to 0.3 m. The width of the platform is set to 2.5 m, and the length of the emergency rescue platform is set to 500 m. There are a total of 18 train cars, each with a length of 25.5 m, and the total length of the train is 459 m. 2) The number of occupants of each car is set at 118 people (full load). Exit doors are opened on the side of the train facing the evacuation passageways, and the door width is 1 m. Result analysis The Pathfinder evacuation simulations considered the psychological panic factors of the personnel. In order to study the effect of different evacuation speeds on the overall evacuation time, a simulation experiment was performed. In one working condition, door 1 and door 2 of stair01_1_1_3 form the exit of the compartment close to the source of the fire, and they are used by people evacuating at the speed that accounts for psychological panic; door 1 and door 2 of stair01 form the exit of the compartment far from the source of the fire, and they are used by people evacuating at the normal speed of people in a railway tunnel. Each compartment has the same number of passengers. The flow rates in the model simulation are shown in Figure 2. People have different psychologies, so their evacuation speeds differ. Due to the panic mentality, the evacuation speed of the personnel near the source of the fire increases, the population flow rate at door 1 and door 2 of stair01_1_1_3 increases, and the evacuation time is shorter. On the contrary, for people evacuating at normal speed in the tunnel, the flow rate at door 1 and door 2 of stair01 decreases and the evacuation time increases, which affects the overall evacuation time. As can be seen from Figure 2, the former takes 12 s less time than the latter. The evacuation time is determined by all personnel, so if the personnel speed is appropriately increased, the evacuation time is reduced and safety is higher. The necessary safe evacuation times for different combinations of evacuation passageway spacing and width are shown in Table 2 (times in seconds; each row lists the times for increasing passageway widths): 50 m spacing: ..., 151; 60 m spacing: 180, 162, 154, 153, 153; 70 m spacing: 223, 196, 175, 168, 168; 80 m spacing: 238, 215, 203, 173, 173. It can be seen from Table 2 that, in general, the greater the spacing of the evacuation passageways, the longer the necessary safe evacuation time, and the smaller the width of the evacuation passageway, the longer the evacuation time. (1) When the spacing of the evacuation passageways is 50 m and the width is 3.5 m, the required safe evacuation time is 151 s, and the evacuation time does not change when the width increases further. (2) When the spacing of the evacuation passageways is 60 m and the width is 3.5 m, the required safe evacuation time is 153 s, and the evacuation time does not change when the width increases further.
(3) When the spacing of the evacuation passageways is 70 m and the width is 3.5 m, the required safe evacuation time is 168 s, and the evacuation time does not change when the width increases further. (4) When the spacing of the evacuation passageways is 80 m and the width is 3.5 m, the required safe evacuation time is 173 s, and the evacuation time does not change when the width increases further. The relationship between evacuation passageway width and required evacuation time is shown in Figure 3. It can be seen from Figure 3 that once the width of the evacuation passageway increases to 3 m, the evacuation time no longer changes appreciably as the width increases. When the spacing of the evacuation passageways is 50 m and their width is 3.5 m, the evacuation time is 151 s, which is the least among all combinations of spacing and width, and the construction cost is comparatively low. This ensures a minimum evacuation time. Accordingly, the combination of 50 m passageway spacing and 3.5 m passageway width was selected as the parameter set for studying the influence of the platform width on the evacuation time. The evacuation times for different widths of the emergency rescue station platform are shown in Table 3. It can be seen from Table 3 that, for 50 m passageway spacing and 3.5 m passageway width, the evacuation time decreases as the platform width increases; when the width is greater than 2.5 m, the decrease in the time required for the safe evacuation of personnel is no longer appreciable. Conclusion Based on the Pathfinder simulation software, a passenger train fire accident was studied in which the train stops and the occupants evacuate at an emergency rescue station in a tunnel. By studying the design parameters of the evacuation passageways of an emergency rescue station in a railway tunnel, the following main conclusions were obtained: (1) Due to the panic mentality, the evacuation speed increases, so the population flow rate increases and the evacuation time becomes shorter. (2) The optimal spacing for the evacuation passageways of the emergency rescue station is 50 m, the optimal width is 3.5 m, and the platform width should not be less than 2.5 m.
2020-07-23T09:07:19.391Z
2020-07-14T00:00:00.000
{ "year": 2020, "sha1": "2747fcd66b0db2cd2e3819826c6d482421203239", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/510/5/052020", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5048756b7ab1e010b1b60c8f32b8d70ada43d68e", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Engineering" ] }
221150888
pes2o/s2orc
v3-fos-license
Abelian and non-Abelian topological behavior of a neutral spin-1/2 particle in a background magnetic field We present results of a numerical experiment in which a neutral spin-1/2 particle subjected to a static magnetic vortex field passes through a double-slit barrier. We demonstrate that the resulting interference pattern on a detection screen exhibits fringes reminiscent of Aharonov-Bohm scattering by a magnetic flux tube. To gain a better understanding of the observed behavior, we provide analytic solutions for a neutral spin-1/2 rigid planar rotor in the aforementioned magnetic field. We demonstrate how that system exhibits a non-Abelian Aharonov-Bohm effect due to the emergence of an effective Wu-Yang (WY) flux tube. We study the behavior of the gauge invariant partition function and demonstrate a topological phase transition for the spin-1/2 planar rotor. We provide an expression for the partition function in which its dependence on the Wilson loop integral of the WY gauge potential is explicit. We generalize to a spin-1 system in order to explore the Wilczek-Zee (WZ) mechanism in a full quantum setting. We show how degeneracy can be lifted by higher order gauge corrections that alter the semi-classical, non-Abelian, WZ phase. Models that allow analytic description offer a foil to objections that question the fidelity of predictions based on the generalized Born-Oppenheimer approximation in atomic and molecular systems. Though the primary focus of this study concerns the emergence of gauge structure in neutral systems, the theory is also applicable to systems that possess electric charge. In that case, we explore interference between fundamental gauge fields (i.e. electromagnetism) and effective gauge potentials. We propose a possible laboratory demonstration of the latter in an ion trap setting. We illustrate how effective gauge potentials influence wave-packet revivals in the said ion trap. I. INTRODUCTION The double slit experiment and the Aharonov-Bohm (AB) effect [1] are iconic examples that highlight novel and counter-intuitive aspects of the quantum theory [2]. The former has long served as a pedagogical device [3] to introduce the notion of wave-particle duality to students of quantum mechanics, and laboratory demonstrations of it have raised new questions regarding the role of measurement in quantum mechanics (QM) [4,5]. The AB effect demonstrates the role of gauge potentials in quantum mechanics, and Feynman [3] framed it in a double slit setting to illustrate and underscore its topological significance. From the Einstein-Bohr-Sommerfeld quantization rules to the TKNN integers [6], topology has always played a role in QM, and the AB effect offers an instructive template for it. It has been applied to elaborate on the nature of anyons [7] and other forms of exotic quantum matter [8]. Researchers hope to harness topology in service of enabling high-fidelity qubit technology [9] and fault tolerant quantum computing [10]. In this paper we illustrate how AB-like topological effects, and their non-Abelian generalization [11,12], manifest in simple quantum systems that allow accurate numerical as well as analytic solutions. First, we consider the dynamics of a neutral spin-1/2 system coupled to an external static magnetic field. We perform a quantum mechanical numerical experiment in which the particle passes through a double-slit barrier.
When the position of the particle is measured at a detection screen we find an anticipated wave interference pattern. In addition to interference due to the presence of slit barriers, we show that the resulting pattern is best described by appealing to a model in which a charged particle is minimally coupled to an effective magnetic flux tube. This, despite the fact that the spin-1/2 particle is neutral and couples locally to the external field via the standard µ · B term. Our numerical experiment provides a demonstration of how effective gauge potentials arise in quantum systems that appear to have no overt gauge structure. This system (without the double slit) was first proposed [13] as an example of inertial frame dragging. Here we confirm, via our numerical simulation, the predictions of that gedanken system. In addition to the predicted [13] Abelian AB behavior, we explore non-Abelian features inherent in analogous systems that allow analytic solution. In section II, we summarize the results of our numerical experiment. We demonstrate the scattering of a neutral spin-1/2 wave-packet by a double slit barrier. The packet experiences a background magnetic field B in which the condition ∇(µ · B) = 0 is satisfied. The latter ensures that the packet does not experience a gradient force. We analyze the interference pattern at a post-slit detection screen and find that it shares the predicted structure of a charged particle that is scattered by an AB magnetic flux tube. In order to gain a better understanding of this phenomenon, we introduce, in section III, a system that allows analytic solution. We calculate the partition function of a neutral spin-1/2 planar rotor placed in the aforementioned B field configuration. In addition to verifying the AB features observed in our numerical demonstration, we conclude that a model characterized by a non-Abelian Wu-Yang [11] (WY) flux tube provides a more accurate description. We demonstrate that the gauge-invariant partition function is an explicit function of the Wilson-loop [14] integral of a (WY) gauge field. Early studies [15][16][17][18] have demonstrated how nontrivial gauge structures arise in molecular and atomic systems. In low energy atomic collisions [17,19] and molecular structure [16] calculations, it is convenient to express the state vector in a basis of Born-Oppenheimer eigenstates. A complete set of such states leads to gauge potentials, coupled to the nuclear motion, that have both spatial and temporal components [17,19,20]. The spatial components describe a pure gauge, and it is only after truncation from a Hilbert space spanned by a complete set to a subspace that the spatial components acquire a non-trivial Wilson-loop value. For that reason it has sometimes been argued that gauge fields that lead to nontrivial Wilson loop integrals (a.k.a. geometric, or Berry, phases) are artifacts of the approximation or truncation procedure. In section IV, we investigate this question for the model introduced in section II. We demonstrate how an open-ended, but gauge invariant, Wilson-line integral of a 3 + 1 gauge field along a space-time path can lead to a non-trivial spatial Wilson loop integral when projected to a closed path of the spatial subspace. Wilczek and Zee [21] demonstrated how non-Abelian geometric phases arise in the slow evolution of a system possessing degenerate adiabatic eigen-states that are well separated from distant states.
As our spin-1/2 model contains only two internal states, separated by an energy gap, the Wilczek-Zee mechanism is not applicable. Therefore we introduce, in section V, an extension to our two-state model by positing a three-internal-state system that allows analytic solutions. In the latter, two internal states are degenerate and a third state is separated from them by a large energy defect. We analyze its gauge structure, and show that higher order gauge corrections [17,19,22,23] break the degeneracy evident in (semi-classical) adiabatic evolution [21]. As a consequence, gauge covariance is regained only in the 3+1 formalism [17]. In section VI, we provide a summary and conclusion of our efforts and propose possible systems in which the effects described above may be gleaned in a laboratory setting. Unless otherwise stated we use units in which ℏ = 1. With the exception of the Pauli matrices, we use boldface typeface to represent both vector- and matrix-valued quantities. In some cases, when there is the possibility of ambiguity, we use explicit vector notation to represent vector-valued quantities.

II. NUMERICAL DOUBLE-SLIT EXPERIMENT FOR A NEUTRAL SPIN-1/2 SYSTEM IN A STATIC MAGNETIC FIELD

Consider a neutral spin-1/2 atom or neutron with magnetic moment µ, and mass m, in the presence of a static background magnetic field where φ, ρ are the polar and radial coordinates in a cylindrical coordinate system. We take B(ρ) ≡ B_ρ and B_0 to be constants so that B describes a vortex configuration superimposed on a constant magnetic field in the k̂ direction. The Hamiltonian for a neutral spin-1/2 system is where 1 is the unit 2×2 matrix and σ are Pauli matrices. The adiabatic, or BO, eigenenergies of H are the constant surfaces separated by a finite energy gap 2∆. Though the magnetic field lines have a vortex structure, and ignoring a small higher order correction [20], the gradient force −∇V_BO vanishes. Thus wave packets evolve, as confirmed in a previous numerical study [20], with minimal distortion induced by the presence of scalar potentials. Fig. (1) describes a wave packet, initially in the ground adiabatic state, whose probability density, as a function of time, is illustrated in the panels of that figure. In the first run of a simulation we set B_ρ = 0 and the system evolves on the ground state adiabatic surface as the particle proceeds through the two slits. At the detection screen, shown by the red dashed line, the wave amplitude forms an interference pattern whose probability density is plotted in the left panel of Fig. (2). In that figure the solid blue line represents the data of this numerical simulation whereas the red dashed line is an analytic fit to the simulation. In calculating the latter we assumed that the probability amplitude at the observation screen is given by where ψ_R,L are amplitudes, based on a Huygens principle construction, due to contributions coming from the right and left slits, shown in Fig. (3), respectively. β is a measure of the relative phase between the amplitudes, and for this run β ≈ 0 provides the best fit. On a second run we set B_0 = 0, B_ρ = |∆| so that the Zeeman energy splittings are unchanged from that of the first run. The resulting interference pattern is illustrated in the second (r.h.s.) panel of Fig. (2) by the red line, and in that case we found the best value for β ≈ π.
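As an illustrative aside (not part of the original paper), the one-parameter fringe model of Eq. (4) is easy to reproduce numerically. The sketch below assumes hypothetical Huygens-style point-slit amplitudes and recovers the relative phase β from a synthetic pattern by a brute-force scan; the geometry parameters (slit separation, screen distance, wavenumber) are made-up placeholders.

```python
import numpy as np

# Minimal sketch (not from the paper): fit the relative phase beta in
# I(x) ~ |psi_R(x) + exp(i*beta) * psi_L(x)|^2, the fringe model of Eq. (4).
# psi_R, psi_L are hypothetical Huygens-style single-slit amplitudes.

def slit_amplitude(x, x_slit, k=20.0, L_screen=50.0):
    """Cylindrical-wave amplitude from a point slit at x_slit (assumption)."""
    r = np.sqrt(L_screen**2 + (x - x_slit)**2)
    return np.exp(1j * k * r) / np.sqrt(r)

def fringe_intensity(x, beta, d=2.0):
    psi_R = slit_amplitude(x, +d / 2)
    psi_L = slit_amplitude(x, -d / 2)
    I = np.abs(psi_R + np.exp(1j * beta) * psi_L) ** 2
    return I / I.max()

def fit_beta(x, I_data):
    """Brute-force scan over beta; adequate for a one-parameter fit."""
    betas = np.linspace(0.0, 2 * np.pi, 721)
    residuals = [np.sum((fringe_intensity(x, b) - I_data) ** 2) for b in betas]
    return betas[int(np.argmin(residuals))]

x = np.linspace(-20, 20, 400)
theta = np.pi / 2                      # tan(theta) = B_rho / B_0
I_synthetic = fringe_intensity(x, np.pi * (1 - np.cos(theta)))
print("fitted beta / pi =", fit_beta(x, I_synthetic) / np.pi)  # ~1.0
```

With θ = π/2 (the B_0 = 0 run), the recovered β is π, matching the pattern β = π(1 − cos θ) observed in the simulations.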
In a subsequent run we translated the B field so that the vortex center, labeled x_c on the horizontal axis of Fig. (3), has been shifted to a point that is not framed by the pair of slits in the barrier. In that simulation we again found that β ≈ 0 provides the best fit to the numerical data. We also considered different ratios tan θ = B_ρ/B_0 and fit β for these choices of θ. The results are summarized by the following observations:

1. The data obtained in the simulations, for vortex centers −L/2 < x_c < L/2, are best described by Eq. (4) provided that β takes the value π(1 − cos θ).
2. For an external magnetic field in which |x_c| > |L/2|, the value β ≈ 0 provides the best fit.
3. If the packet mean kinetic energy E >> 2∆, the interference pattern is largely insensitive to the location of x_c and is best fit with β ≈ 0.

The features described above are suggestive of dynamics influenced by topology. Indeed, it is the behavior predicted in Feynman's thought-experiment treatment of Aharonov-Bohm (AB) scattering [1] of a charged scalar particle in a double slit apparatus [3]. Observations (1)-(3) are consistent with the following hypothesis: … is a gauge potential that describes an Aharonov-Bohm (AB)-like flux tube of strength (1 − cos θ)/2 centered on the barrier at x_c = 0. The line integral is taken along a single circuit about a closed path C that circumscribes x_c = 0 on the barrier. Hamiltonian (2) possesses no overt gauge structure, but it is known [15][16][17][18] that effective gauge potentials can emerge in quantum systems not coupled to fundamental gauge fields. In this study we highlight the utility of using a gauge theory framework to characterize quantum systems that exhibit apparent topological AB-like behavior in a scattering setting. However, the features itemized above do not completely fit into the standard AB framework. It requires, as shown below, the application of non-Abelian ideas, and in order to elaborate on this observation we introduce a simpler physical system that allows an analytic description. We substitute the 2D kinetic energy operator in Eq. … In the limit ∆ → ∞, provided that ∆ > |1 − 2m| …

A. Adiabatic gauge

In order to gain insight into these solutions we transform the eigenvalue equation corresponding to Hamiltonian (7) into the so-called adiabatic representation [17], which we define by … where … is a single-valued unitary operator. We get … where the non-Abelian, pure, gauge potential … If we ignore the off-diagonal components of the gauge potential and project this equation to the ground manifold via the projection operator |−⟩⟨−|, we find … We note that … is an eigenstate of Eq. (23) corresponding to eigenvalue … It agrees with the leading order limit of expression (18) as ∆ → ∞. Consider now the excited state manifold obtained via the projection |+⟩⟨+|; … is an eigenstate of the latter corresponding to eigenvalue … Note that … and so …, or, comparing to Eq. (17), we find, as ∆ → ∞, E_e = E_+. In conclusion, we find that in the adiabatic gauge, disregarding the off-diagonal couplings in Eqs. (21) predicts adiabatic gauge eigensolutions with eigen-energies E_−, E_+, respectively. They agree with the leading-order eigenvalues, in the limit ∆ → ∞, given by the exact analytic solutions.

IV. THE WU-YANG FLUX TUBE

Some time ago, T.T. Wu and C.N. Yang [11] entertained the notion of a non-Abelian Aharonov-Bohm effect. They postulated a non-Abelian flux tube that may allow, if found in nature, topological transformation of isotopic charge when a system, described by an isotopic amplitude, is transported about the flux tube.
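A brief numerical check (an editorial sketch, not from the paper): assuming that in the "Abelianized" gauge the WY potential has the single angular component A_φ = α σ_3 along a loop about the flux tube, the trace of the path-ordered Wilson loop should equal 2 cos(2πmα) for winding number m. The value α = 1/2 corresponds to the WY "charge" discussed below.

```python
import numpy as np

# Minimal sketch (assumption: in the "Abelianized" gauge the WY potential has
# the single angular component A_phi = alpha * sigma_3 about the flux tube).
# The path-ordered Wilson loop is built as a product of short segment
# exponentials; its trace should equal 2*cos(2*pi*m*alpha).

sigma3 = np.diag([1.0, -1.0]).astype(complex)

def wilson_loop_trace(alpha, m=1, n_steps=2000):
    dphi = 2 * np.pi * m / n_steps
    # exp(i*alpha*sigma3*dphi) for one short segment of the loop
    seg = np.eye(2, dtype=complex) * np.cos(alpha * dphi) \
        + 1j * sigma3 * np.sin(alpha * dphi)
    W = np.eye(2, dtype=complex)
    for _ in range(n_steps):
        W = seg @ W          # path ordering (trivial here, kept for clarity)
    return np.trace(W).real

for alpha in (0.25, 0.5):
    for m in (1, 2):
        print(alpha, m, wilson_loop_trace(alpha, m),
              2 * np.cos(2 * np.pi * m * alpha))
```

For α = 1/2 and m = 1 the trace is −2 rather than +2, a directly observable deviation from the trivial (pure-gauge) value.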
In this paper we demonstrate how the spin-1/2 system described in the previous section possesses some of the salient features of a particle, with spin degrees of freedom, coupled to a Wu-Yang (WY) non-Abelian flux tube. To set the stage for that discussion we first introduce an idealized model in which a free rotor is coupled to a WY connection.

A. Rotor coupled to Wu-Yang gauge potential

Consider the following non-Abelian gauge potential … where x, y are the coordinates of a (iso) spin-1/2 particle. It is straightforward to verify that the spatial components of the matrix-valued curvature two-form F vanish identically in the region excluding the point x = x_0 ≥ 0, y = 0. From that observation it may appear that gauge connection (31) corresponds to that of a pure gauge. Nevertheless, as for the conventional AB vector potential, its Wilson loop integral circumscribing the point x_0, y = 0 is non-trivial. For connection (31) the gauge invariant trace of the Wilson loop phase integral has the value … where C is an arbitrary contour (of counter-clockwise sense) that encloses the point (x_0, y = 0) and m, the winding number, itemizes the number of circuits taken around C. P represents path ordering. As first pointed out by Wu and Yang, gauge potential (31) is a non-Abelian generalization of the Aharonov-Bohm potential. Despite the fact that there is a gauge in which A is diagonal and therefore has an "Abelianized" structure, it is not simply the potential of two AB flux tubes of opposite charge [24]. In this sense A describes a non-Abelian flux tube piercing the xy plane at the point (x_0, y = 0). We seek a Schrödinger equation for a spin-1/2 particle, constrained on the unit circle x² + y² = 1, coupled to gauge potential (31), as well as a scalar potential A_0 = −σ_3 ∆, where ∆ is a constant energy defect. Constrained systems typically involve singular Lagrangians [25], and a rigorous derivation of the corresponding Hamiltonian requires application of Dirac's theory [25] of constrained dynamical systems. The latter has been applied to construct the quantum Hamiltonian of a scalar particle constrained on a circular path [26]. Here we use a more heuristic approach by considering the standard (unconstrained) Schrödinger equation in two dimensions in which the spin-1/2 particle is minimally coupled to gauge potential (31). We have … expressed in a polar coordinate system. If x_0 < r, and in the range −π < φ ≤ π, the function … is single-valued and we are allowed the gauge transformation … Thus A describes a WY flux tube centered at the origin. If the particle is constrained to move on the unit circle and x_0 < 1 we obtain the Schrödinger equation … The energy eigenstates of Eq. (37) are … where m is an integer. For x_0 > r, Ω_> is no longer single-valued but Ω_< is. Replacing Ω_> with Ω_< in (36) we find A = 0, i.e. a pure gauge. Thus, for x_0 > 1, Eq. (37) is replaced with … As the position of the flux tube shifts from x_0 < 1 to x_0 > 1, the energy spectrum shifts into that of a free rotor. This topological feature is most clearly evident in the behavior of the partition function Z = Σ_m exp(−β E_m), where β is an inverse temperature and E_m are the energy eigenvalues for the eigenstates summarized above. Consider the propagator for Schrödinger Eq. (37) in the region … where τ = t − t′.
Thus … With the following definition of the Jacobi theta function [27,28] … we re-express … Employing the identity [27] …, we re-write (45) as … In this form, the propagator contains products that are proportional to the time interval τ, and are of a dynamical origin, with factors that are independent of τ and have a geometric, or topological, origin. Consider the classical equation of motion for a free rotor, φ(t) = ω(t − t′) + φ′, or, if we set t′ = 0, φ(t) = φ′ + 2mπ for a rotor trajectory that encompasses m circuits in a given time period τ. The resulting classical action … where we used the fact that ω = (φ − φ′ − 2mπ)/τ. Therefore, … The partition function corresponds to the trace over all closed paths in which φ = φ′, and the time interval τ is Wick rotated onto the imaginary axis. With the replacement τ → −iβ, we obtain … In the same manner we construct the partition function for x_0 > 1. Thus, we find … In expression (52) the partition function is expressed as a product of a purely dynamical contribution Z_0, which is modulated by a topological term cos(2mπ α_≶) proportional to the trace of the Wilson loop integral (32) corresponding to winding number m. In Fig. (5) we plot the ratio r_<(β) as a function of the inverse temperature β. The graph illustrates significant variation of that ratio with respect to α at lower temperatures. For |x_0| > 1, r(β) undergoes a phase change: the curve is independent of variations in α, and reverts to that labeled by α = 0 in that figure. It is now instructive to compare the behavior of the gauge invariant partition function for the Wu-Yang flux tube with that of the system described by the partition function … where E_± are given by expression (14). The latter corresponds to the partition function of our physical model: a neutral particle constrained on a rotor track in the presence of magnetic field (1). Instead of comparing Z_WY with Z, we compare terms that only include the topological contribution to the partition functions. To that end we define Z̃ …, where Z̃_0 is defined in (52) but modified by the contribution of the induced, scalar, counter term introduced in Eqs. (25) and (28).

[Fig. 6 caption: The ratio corresponding to the Wu-Yang system is given by the constant black dashed line. The red dashed line (superimposed by the blue line labeled ∆ = 0) corresponds to a free rotor.]

In Fig. (6) we plot the ratio r̃(β) for the values α = 1/2, x_0 = 0, as a function of the inverse temperature β and the energy defect ∆. The (blue) curve corresponding to energy defect ∆ = 0 is identical to the curve obtained for the partition function of a free rotor (i.e., without non-trivial gauge couplings). Since gauge potential (22) describes a pure gauge, it is plausible that it does not contribute to the value of the partition function Z. However, for non-vanishing energy defects the graph shows a strong dependence of Z on the topological factor r̃. For energy defect ∆ = 100, the value of r̃ is almost identical, at low temperatures (β >> 1), to the value predicted by the Wu-Yang flux tube given by expression (53) and shown by the dashed line in that figure. We conclude that for large values of ∆ the gauge invariant partition function for the system defined in Eq. (6) approaches that of a particle coupled to a Wu-Yang flux tube. Though gauge potential (22) is that of a pure gauge, the energy defect ∆ breaks a restricted spatial gauge symmetry as it corresponds to the time component of a 3 + 1 gauge field [17].
Consequently we find a non-trivial, non-Abelian, Wilson loop contribution to the partition function. If we restrict our attention to the ground state, the latter appears as an Abelian holonomy whose semiclassical analog (in which the quantum variable φ is demoted to a classical parameter φ(t)) corresponds to Berry's geometric phase [29,30]. Let us define the amplitude G so that … where U is defined in Eq. (20). Inserting (57) into the time dependent version of Eq. (21), we obtain … where A(t), like A in Eq. (22), is a pure gauge and generates a trivial Wilson loop integral. However, we may replace the off-diagonal components of (59) with a time expectation value over the interval τ, which as ∆ → ∞ we ignore. In this approximation the pure gauge A(t) is replaced with the gauge potential of a non-Abelian WY flux tube.

B. Shifted magnetic vortex field

In the previous section we demonstrated how, in the limit ∆ → ∞, the eigen-solutions to Hamiltonian (6) tend to those described by an effective Hamiltonian containing a Wu-Yang flux tube. Suppose we have the following B field configuration … which describes a vortex configuration centered at (x = x_0, y = 0). The term µ σ · B in Hamiltonian (2) is given by …, and replacing x → cos φ, y → sin φ, the above expression can be re-written as … Now if we define the operator …, we find that … Forming the non-Abelian connection A ≡ i U† ∂U/∂φ, we find … The diagonal component A_d of A has the form …, and for the special case x_0 = 0 it reduces to A_d = σ_3/2 and describes the non-Abelian Wu-Yang flux tube of "charge" 1/2 centered at the origin. In Fig. (7) we plot, with the red solid lines, the energy spectrum calculated for Hamiltonian (6) using field (60) for values of x_0 ranging from x_0 = 0 to x_0 = 1.8. Superimposed on the figure, by the blue dotted lines, is the corresponding spectrum for a rotor system minimally coupled to the gauge field of a Wu-Yang flux tube centered at x_0, and calculated using the analytic formulas given in Eqs. (38) and (41). The dashed blue lines correspond to the eigenvalues for a free planar rotor. … where a is a constant 2 × 2 hermitian matrix, φ is the azimuthal angle in a cylindrical coordinate system, and A, A_0 are the spatial and time components of a 3+1 matrix-valued (i.e., non-Abelian) gauge potential. Let A = 0, so that Eq. (67) describes a spin-1/2 particle coupled to a matrix, or spin-dependent, scalar potential −A_0. With the gauge transformation ψ = U F … The similarity of Eq. …, where q is an integer and θ, γ are parameters, satisfies Eq. (70). A full quantum description of this model is given in Appendix A, but here we first explore the behavior of the Wilson loop integral of the 3+1 gauge potentials a, A_0. Consider the following path-ordered Wilson-loop integral …, where we used A defined in Eq. (69), C_0 is a closed path that circumscribes the origin in the z = 0 (xy) plane, and dφ is the differential angle, with respect to the origin, of a segment of an arc along the path. Since ∮ dφ = 2πm, where m is the winding number of the path, … This identity is simply a reflection of the fact that A is a pure gauge.

A. Wilson line in space-time

In our discussion so far we noted that the partition functions of our spin-1/2 systems contain Wilson loop contributions that arise from non-trivial gauge fields, despite the fact that the spatial components A of the 3+1 gauge potentials describe a pure gauge.
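Before following the space-time Wilson line argument, a small numerical aside (not from the paper) on the claim that the gauge-invariant partition function carries a topological, Wilson-loop modulated factor. Assuming the flux-tube rotor spectrum E_m = (m − sα)²/(2I) for the two spin projections s = ±1, the direct eigenvalue sum can be compared with the dual winding-number (Poisson-resummed) representation, whose cos(2πkα) factor is the topological term.

```python
import numpy as np

# Minimal sketch (assumptions: planar-rotor spectrum E_m = (m - s*alpha)^2/(2I)
# for spin projections s = +/-1 when the flux tube lies inside the ring).
# Compares the eigenvalue sum for Z with the dual winding-number
# (Poisson-resummed) representation carrying the topological cos term.

I_mom = 1.0

def Z_eigen(beta, alpha, m_max=200):
    m = np.arange(-m_max, m_max + 1)
    return sum(np.exp(-beta * (m - s * alpha) ** 2 / (2 * I_mom)).sum()
               for s in (+1, -1))

def Z_winding(beta, alpha, k_max=50):
    # Z = 2 * Z0 * sum_k exp(-2 pi^2 I k^2 / beta) * cos(2 pi k alpha)
    k = np.arange(-k_max, k_max + 1)
    Z0 = np.sqrt(2 * np.pi * I_mom / beta)
    return 2 * Z0 * (np.exp(-2 * np.pi**2 * I_mom * k**2 / beta)
                     * np.cos(2 * np.pi * k * alpha)).sum()

for beta in (0.5, 2.0, 10.0):
    ze, zw = Z_eigen(beta, 0.5), Z_winding(beta, 0.5)
    print(f"beta={beta}: eigen sum {ze:.6f}, winding sum {zw:.6f}, "
          f"ratio to alpha=0: {ze / Z_eigen(beta, 0.0):.4f}")
```

The two representations agree to numerical precision, and the ratio to the α = 0 case shows the strongest α dependence at low temperature (large β), consistent with the behavior of r(β) described above.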
To achieve a better understanding of how non-trivial Wilson loop contributions arise in systems that are putatively coupled to a pure gauge, we note that in the evaluation of the partition function we need to take into account paths in space and time. Therefore, we consider a general path integral along an arbitrary path (not including the origin) C(a, b) from point a to b for gauge field A_µ. Here µ is an index that identifies a space-time component and we use a summation convention so that … With the gauge transformation ψ = U ψ′, the gauge potentials transform as [14] … Consider paths of the type illustrated in Fig. (8). They are trajectories in a manifold that is a Cartesian product of the coordinates in the xy plane with a 1-dimensional manifold labeled by time t. The trace of W(a, b) for an open-ended path is not, in general, gauge invariant. However, we evaluate the integral only along paths in which the projections of coordinates a, b onto the spatial plane are equal at the initial and final points of the trajectory. We also limit the gauge group to time-independent gauge transformations U so that the trace of W(a, b) is invariant under this group of transformations. Below we study the properties of W(a, b) as a function of the defect parameter ∆. We parameterize the trajectory z(τ) … where î, ĵ are the basis vectors in the spatial plane, and k̂ is the unit vector orthogonal to that plane, which we take to define the time axis, so that the physical time t ≡ f_t(τ). The functions x(τ), y(τ), f_t(τ) are arbitrary but satisfy the conditions x(0) = x(t_f), y(0) = y(t_f), where 0 < τ ≤ t_f, in order for the path to make a closed loop in the xy plane at τ = t_f. Using Eqs. (69,74,76) we get … where m is the winding number of the path. Exponentiation of expression (78) results in … Let us define an effective vector potential … Unlike the pure gauge A defined in Eq. (69), A_eff engenders a non-trivial Wilson loop integral W_C for any loop, in the z = 0 plane, enclosing the origin. Indeed, … where C is the projection of the space-time path (76) onto the xy plane. Because x(0) = x(t_f), y(0) = y(t_f), C forms a closed loop. In summary, we demonstrated how the space-time open-ended path integral of a 3 + 1 non-Abelian gauge potential leads to a non-trivial Wilson loop integral of an effective gauge field A_eff. For time-independent gauge transformations, the trace of W is gauge invariant. As W_C depends only on the winding number, C can be shrunk to an infinitesimal loop about the origin without altering the value of W_C. Thus A_eff represents the gauge potential of a Wu-Yang flux tube of "charge" ±Ω, the eigenvalues of a + ∆/ω. In general, W(a, b) is a function of the dynamical parameters ∆, ω, but for large ∆/ω >> 1, it tends to the product … We evaluated W in the adiabatic gauge [17], wherein A_0 is diagonal. Because U(φ = 0) = U(φ = 2π) = 1, W is invariant under a gauge transformation into the diabatic gauge [17]. The latter corresponds to Schrödinger Eq. (67) in which the spatial component A = 0. In that gauge … where we used the fact that f_t(τ) = t and φ(t) = ωt. Replacing the upper limit in integral (84) with an arbitrary time value t, we find that W(t) obeys a time dependent Schrödinger equation. It can be integrated to give … Thus, … where we used the fact that exp(2iπm a) = 1. In the adiabatic limit as ω → 0, W(a, b) tends to the limit Eq. (83).
In that expression, the first, dynamical, factor exp(−2imπω σ_3) depends on the length of time t_f = 2πm/ω that it takes for the system to travel from starting to end points. The second factor exp(2imπ q cos θ σ_3) depends on the spatial path taken. This factorization is in harmony with the adiabatic theorem [29].

VI. ON THE WILCZEK-ZEE MECHANISM

In the previous sections we illustrated how non-trivial gauge structures arise in a vector space that is a direct product of a two-state (or qubit) system with the Hilbert space of a rotor. It is straightforward to extend this formalism to systems possessing additional internal degrees of freedom (e.g., spin-1 etc.). Indeed, this procedure is ubiquitous in theoretical studies of slow atomic collisions and non-adiabatic molecular dynamics. It is especially applicable when the total system energy E << ∆, where ∆ is an energy defect that separates a sub-manifold of Born-Oppenheimer (BO) states, by a large energy gap, from energetically higher-lying BO states. Thus the Hilbert space amplitude is projected to a set of effective, or matrix-valued, amplitudes in the sub-space. The resulting set of coupled equations constitutes the Born-Huang [31] approximation, or the method of Perturbed Stationary States [32] (PSS). The latter typically results in effective, non-trivial, non-Abelian gauge couplings among the sub-space amplitudes. In a quasi-classical version of this procedure, Wilczek and Zee demonstrated how the projected amplitudes, for a sub-manifold of degenerate energy eigen-states, acquire a non-Abelian geometric phase during adiabatic evolution. Below we consider a spin-1 rotor system in which two internal states possess degenerate energy eigenvalues that are separated from the remaining internal states by a large energy gap ∆. To illustrate this mechanism we choose a straightforward extension of Hamiltonian (67) … This particular choice for a guarantees that U is single-valued. For our purposes it is convenient to choose e_g = −sin²θ/(4I). Defining the adiabatic gauge amplitude F so that …, we obtain the matrix-valued Schrödinger equation … With the ansatz …, where c is a constant column matrix, we are led to the eigenvalue equation det|h − 1E| = 0, where … Finding the eigenvalues of h involves solving for the roots of a cubic equation, for which an analytic expression, the Cardano formula, is available. The latter can be used to construct the gauge invariant partition function to the required degree of accuracy. The sums extend over the spectrum of h, which is itemized by the motional quantum number m, as well as the internal state quantum number i. Here β is an inverse temperature. Instead, because ∆ >> e_g, we use the PSS approximation in which the amplitude F is projected to a Hilbert subspace. In this case, the subspace is spanned by the degenerate eigen-states of V, or the computational basis for a single qubit. Introducing the projection operator … where a_p ≡ P a P, V_p = P V P. In this approximation we ignore couplings between the P and Q = 1 − P submanifolds. Though V_p is diagonal and degenerate, the higher order induced scalar term [19,22], P (a · a − a_p · a_p)/(2I) P = (1/4I) …, is not. An additional gauge transformation in the projected qubit subspace, G = W G′, results in … where σ_i are the standard spin-1/2 Pauli matrices. Because the eigen-states of V_p are not degenerate, Eq. (94) is no longer covariant under a Wilczek-Zee gauge transformation.
In the latter formulation φ(t) is treated as a classical variable undergoing adiabatic evolution. Here φ is a quantum variable, and the symmetry responsible for the degeneracy in a quasi-classical formulation is broken. However we can, as described in the previous sections, enlarge the gauge group by allowing the (matrix) scalar potential to be treated as the time component of a 3 + 1 gauge potential. Consider the gauge potential … which begets a_p in Eq. (94). Its Wilson loop integral for a path C circumscribing the origin assumes the value … where m is the winding number. For values cos θ ∉ Z, identity (96) demonstrates that a_p, unlike a in Eq. …, is non-trivial. For higher temperatures, or β << 1, we can approximate … Applying a Poisson transformation, we get … In order to obtain the total partition function Z, we must include the contribution from the distant state whose energy eigenvalue E_m^{i=1} >> e_0 ± e_1. In solving for the eigenvalues of h we find that …, and so the leading order contribution is dominated by the term exp(−β∆) → 0 as ∆ → ∞. Therefore, … where S_0(k) = 2π²k²I/β is the classical action for a free rotor making k complete circuits in a given time interval. It contains a dynamical contribution, proportional to the classical action, that is modulated by a purely topological term, the Wilson loop integral W_C(k). At higher temperatures z is largely dominated by contributions from the classical action, and so we investigate the behavior of Z in the low temperature β → ∞ limit. A detailed derivation is given in Appendix B, and according to Eqs. …, in that limit S_0(k) is the Wick rotated action for a free rotor undergoing k circuits and … In Fig. (9) we plot −∂ ln z/∂β as β → ∞, which represents the ground state energy. The solid line denotes the ground state energy for Hamiltonian (87), the dashed line the adiabatic energy e_g, and the circle icons denote energies obtained in the PSS approximation and calculated using expression (103) for the partition function. The latter approximation is accurate for values ∆/e_g >> 1. According to expression (103), the term mΩ is independent of the temperature parameter β and is therefore of topological origin. The cross icons in that figure represent the energies obtained by artificially setting Ω = 0 in expression (103). The difference between those values and the ones lying on the solid line underscores the significance of that topological contribution. Interestingly, unlike in the high temperature limit, the value for 2πmΩ does not equal the Wilson loop integral W_C of the projected gauge potential A_p.

VII. SUMMARY AND DISCUSSION

The gauge principle forms a cornerstone of our modern understanding of the fundamental constituents of matter. Quantum Electrodynamics (QED) is the best known example of an Abelian gauge theory, and its non-Abelian generalization illuminates the landscape within the nucleus. Gauge invariance guarantees charge conservation, and is the guiding principle that ensures a gauge field's raison d'être. For example, the following Hamiltonian (up to a surface term) for a scalar field φ … is not invariant under the replacement of the field operator φ(x) with exp(iΛ(x))φ(x). Introducing an auxiliary quantum field A so that …, gauge invariance is enforced provided that, as φ(x) → exp(iΛ(x))φ(x), A → A + ∇Λ. In quantum mechanics (QM) the Schrödinger equation is not invariant under a gauge transformation of the wave amplitude; however, the eigenvalues of operators, i.e. observables, are.
Dirac [33] argued that a Schrödinger description in which the wave function is minimally coupled to a gauge potential is equivalent to a gauge-field-free theory whose wave amplitudes possess non-integrable [11,33], or Peierls [34], phase factors. In this paper we provided examples of pedestrian quantum systems in which gauge structures arise in a natural manner without the need to summon the former. This feature of QM has long been noted in studies of atomic and molecular systems [15][16][17][22]. But, as those descriptions require the application of Born-Oppenheimer-like approximations, predictions are open to interpretations that attract skepticism [35]. For example, laboratory searches for the Molecular Aharonov-Bohm Effect (MAB) [36], in the reactive scattering of molecules, have had a long and controversial history [37][38][39]. In this paper we addressed some of those concerns in two ways: (i) we identified systems that allow analytic solutions, and (ii) we explicitly demonstrated the dependence of gauge invariant quantities (e.g., the partition function) on the Wilson loop integral of a non-trivial gauge potential. Furthermore, our analysis did not require the semi-classical notion of adiabaticity, or degeneracy in the adiabatic eigenvalues. Unlike gauge quantum fields, the quantum mechanical gauge potentials discussed here do not exhibit dynamic content (but see Appendix C). In the remaining discussion we address possible laboratory demonstrations of effects predicted and discussed in this paper. Though we are unable to comment on the viability of present-day laboratory capabilities to realize the double slit system discussed in the introduction, we anchor our focus on recent laboratory efforts to simulate a coherent quantum rotor. For example, a planar quantum rotor was simulated [40] in a cylindrically symmetric ion trap in which a pair of 40Ca+ ions formed a two-ion Coulomb crystal. That experiment demonstrated a capability to prepare and control angular momentum states. Along those lines we propose trapping a spin-1/2 ion in a toroidal trap as shown in Fig. (10). In that figure a positively charged spin-1/2 ion, such as Ca+ in its ground state, is trapped in the torus. Instead, one can also consider a pair of ions forming a Coulomb crystal, as described in [40]. The latter simulates, after factoring out the center of mass motions, a single ion rotor. However, for the sake of illustration, we limit this discussion to a toroidal trap configuration. We thread an electric current along the symmetry axis piercing the doughnut hole to induce a magnetic field along the axial direction of the torus.

[FIG. 10 caption: Illustration of a toroidal trap in which an ion of charge Q simulates the motion of a planar, quasi-rigid, rotor. A current I (red arrow) threading the doughnut hole induces an axial magnetic field. The system is subjected to a background magnetic bias field (blue arrow).]

Alternatively, an axial magnetic field can also be generated by joining a solenoid at its ends to form a torus (i.e., a micro-tokamak). In addition to the toroidal axial field, generated by current I, a constant homogeneous bias magnetic field of magnitude B_0 parallel to the symmetry axis is applied. The Hamiltonian for this system is … where, in a cylindrical coordinate system, … is the Landau gauge vector potential for the total magnetic field. U(φ) is given by Eq. (22), q is the charge of the ion, µ_0 the magnetic constant, and V_trap is a trapping potential.
In the adiabatic representation, and assuming that V_trap is independent of spin, we obtain the eigenvalue Schrödinger equation … Assuming that the trap potential is effective in freezing the degrees of freedom in the radial and ẑ directions, and for a large Zeeman energy gap ∆, we replace the 3D Schrödinger Eq. (109) with an effective 1D equation corresponding to a rigid planar rotor, … where ρ_0 is the equilibrium value of the radial coordinate, and Φ = B_0 π ρ_0² is the total magnetic flux enclosed by the rotor. By tuning the current I and the bias field B_0 we can alter and discriminate the values of the Wilson loop for different spin states. For example, if cos θ(ρ_0) − 1 + qΦ/πc = 0, then … In this scenario the upper Zeeman level undergoes the motion of a free rotor, whereas the lower component experiences an effective AB flux tube with charge Φ. Such a capability, if realized, could find application as a novel magnetometer and rotational sensor. The planar rotor has also been used as a model for the anyon [7]. In adiabatic transport about a flux tube it can acquire a non-integer phase (modulo 2π) as it completes one circuit. In the rotor systems discussed here adiabatic transport is problematic as an initial wave packet spreads in time. However, as a closed system, it eventually revives to its original shape. For example, the propagator for a spin-1/2 planar rotor coupled to a Wu-Yang flux tube of "charge" α is given by … Now at the revival [41], … where ∆φ_N = 2α t_N/I = 4πNα. Thus an arbitrary initial, localized, wave packet is displaced, depending on its spin state, by an amount ±∆φ_N. Suppose α = m/p is a rational number where p is even; then the packet returns to its original starting point, i.e., ∆φ = 0 mod 2π, at t_{N*} for N* = p/2. So if a localized packet at t = 0 has the form …, it evolves to …, where … is the argument of a Wilson loop integral with winding number m. A similar argument can be used when p is odd. Expression (116) demonstrates that an arbitrary wave packet revives, up to a topological phase factor exp(iW_m α σ_3), at its initial position. On a final note, at the time of writing I have become aware of recent literature in which similar themes, presented in this paper, are discussed. Synthetic gauge structures on a ring lattice have been explored in [42], and non-Abelian Wu-Yang structures have been observed in optical systems [43,44].

Consider the partition function …, where β is the inverse temperature, in the limit ∆ << 1. Taking the Poisson transform of the r.h.s. of Eq. (A7), we find Z → 2√(2πI/β) Σ_m exp(−2Iπ²m²/β) cosh(β∆ cos θ). (A8) Thus, in this limit the partition function assumes the form of a free rotor in the presence of a constant "scalar" potential ∆ cos θ. In that figure the solid lines are calculated using the exact values Eq. (A6) for Z, whereas the dashed lines represent the value obtained using the approximate expression (A10). According to Eq. (A10), the ratio Z/Z_0 = Σ_m exp(−2π²m²I/β) cos(2πm cos θ) in the limit β >> 1. The variation of this ratio, shown in Fig. (11), demonstrates the role of the topological contribution cos(2πm cos θ) to the gauge-invariant partition function. … where D_F is the Dawson integral [45] and where the summation is over all integers k. It is useful to express the latter in terms of a confluent hypergeometric function [45]: D_F(ξ) = ξ exp(−ξ²) ₁F₁(1/2, 3/2, ξ²).
(B6) For |ξ| >> 1 we use the asymptotic expansion for the Kummer function [46] ₁F₁(…), where the ± sign refers to the cases …, respectively. Or …, where ± corresponds to Re(ξ) > −Im(ξ) and Re(ξ) < −Im(ξ), respectively. Since ξ = √(I/(2β)) (2πk − iβα_1), we find that as β → ∞ (α_1 ≠ 0), Im D_F(√(I/(2β)) (2πk − iα_1β)) → √(β/(2I)) βα_1/(4π²k² + α_1²β²) ± (√π/2) exp(α_1²Iβ/2) exp(−2π²Ik²/β) cos(2πIkα_1). (B9) Thus, if α_0 > 0, …

… is an infinite dimensional column matrix. A_r is a square matrix whose nm-th entry is A_nm = ⟨n|A_r|m⟩, and V is a diagonal matrix whose n-th entry is nω. Consider a Hilbert space generated by bosonic operators a, a† so that [a, a†] = 1. This space is spanned by the basis vectors … where a|0⟩ = 0. In this space we define a Hamiltonian … where e is an arbitrary function. The spectrum of H_BO is e(n) for n ∈ Z ≥ 0. We now posit the Hamiltonian …, which is a straightforward generalization of the finite dimensional models discussed in the main section. Here, U is a unitary operator that, in general, is a function of r and a, a†. For example, let U = exp(−iφ a†a) exp(−iλ(a + a†)) exp(iφ a†a), (C8) where φ is the azimuthal angle in a cylindrical coordinate system, and λ is a real-valued parameter. Because the eigenvalues of the number operator a†a are integers, U is single-valued, i.e., U(φ = 0) = U(φ = 2π), and so we can express the system amplitude Ψ = Σ_n f_n(r) U|n⟩. Using this ansatz we arrive at the set of equations (C3), where now the amplitudes f_n(r) are coupled to V_nm = ⟨n|H_BO|m⟩ = e(n)δ_nm and A_nm = ⟨n|A|m⟩. Here A describes a pure gauge. Alternatively, we could induce a unitary transformation so that H describes a particle minimally coupled to a dynamical Abelian gauge field A. In this picture the ansatz Σ_n f_n(r)|n⟩ leads to identical equations for the amplitudes f_n(r) described above. Although A is a pure gauge, low energy eigensolutions to H exhibit, as we demonstrate below, non-trivial effective gauge structure. For example, suppose that e(n) >> e(0) for n > 0. We can then employ the PSS approximation, which begets the Schrödinger equation for the ground state scalar amplitude F(r). Here the projected vector potential is that of an Aharonov-Bohm flux tube of charge λ², and … is an effective scalar potential that is the sum of the adiabatic ground state energy e_0 ≡ e(0) = ⟨0|e(a†a)|0⟩ and the correction (1/2m) Σ_{n≠0} A_0n · A_n0 = (λ²/(2mr²)) ⟨0|a exp(iφ)|1⟩⟨1|exp(−iφ)a†|0⟩ = (1/2m)(λ²/r²). (C15) We can think of the latter as a self-energy induced by the emission and re-absorption of gauge quanta, thus demonstrating dynamical content encapsulated in A.
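As a closing editorial sketch (assumptions noted in the comments, not from the paper), the revival displacement quoted in Section VII can be checked numerically for one spin component: with the assumed spectrum E_m = (m − α)²/(2I), full revivals occur at t_N = 4πIN, and the packet reappears rigidly displaced by 4πNα (mod 2π, with the sign set by the spin projection and phase conventions).

```python
import numpy as np

# Minimal sketch (assumption: one spin component of the rotor sees an
# effective flux alpha, so E_m = (m - alpha)^2 / (2*I)). Evolves a localized
# packet on the ring and checks that at the revival time t_N = 4*pi*I*N the
# packet reappears rigidly displaced by -4*pi*N*alpha for this component
# (the opposite sign for the other spin component).

I_mom, alpha, N = 1.0, 0.125, 1
m = np.arange(-60, 61)
phi = np.linspace(-np.pi, np.pi, 1024, endpoint=False)

c0 = np.exp(-m**2 / (2 * 8.0**2))      # Gaussian packet centered at phi = 0
c0 = c0 / np.linalg.norm(c0)

def density(t):
    c_t = c0 * np.exp(-1j * (m - alpha) ** 2 * t / (2 * I_mom))
    psi = (c_t[:, None] * np.exp(1j * np.outer(m, phi))).sum(axis=0)
    return np.abs(psi) ** 2

t_N = 4 * np.pi * I_mom * N
peak = phi[np.argmax(density(t_N))]
expected = (-4 * np.pi * N * alpha + np.pi) % (2 * np.pi) - np.pi  # wrap
print(f"peak at revival: {peak:.4f}, expected shift: {expected:.4f}")
```

With α = 1/8 the packet reappears a quarter turn from its starting point, a purely topological displacement of the kind proposed above as a magnetometer or rotation-sensor signal.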
2020-08-19T01:00:53.752Z
2020-08-17T00:00:00.000
{ "year": 2020, "sha1": "3a23e2063e05965b75a6982a80bf92aa2e740040", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2008.07607", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3a23e2063e05965b75a6982a80bf92aa2e740040", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
254444155
pes2o/s2orc
v3-fos-license
Early Childhood School Leaders' Knowledge, Attitude, and Practices: Schools Reopening Amidst Covid-19

The COVID-19 pandemic has seriously disrupted the educational process in every educational institution worldwide. Like many other countries, Pakistan has had to close schools and educational facilities twice over the past year to stop the spread of the COVID-19 pandemic. Objective: To determine early childhood school leaders' knowledge, attitude, and practices regarding schools reopening amidst Covid-19. Methods: This cross-sectional survey was conducted to examine Early Childhood School leaders' knowledge and practices related to COVID-19. The data were collected as part of an online survey of 154 school leaders from Karachi's Early Childhood Education (ECE) sector. Results: The knowledge constructs' overall mean score (right answers) was 6.8 with 1.3 standard deviations. Many respondents had misconceptions regarding the Covid-19 virus's characteristics; only 70% of them are aware that the virus is not airborne. According to about 65% of the answers, the Covid-19 virus is not surface carried. On the other hand, more than 90% of the respondents stated that the Covid-19 virus spreads through respiratory droplets; consequently, an overwhelming majority (95%) expressed their concern about the transmission of Covid-19 in school. Nearly 3 out of 4 respondents thought the school should continue to be closed. Conclusions: The study concludes that some proper training for school leaders regarding knowledge and practices of Covid-19 would help prepare them for safe school reopening. In addition, the majority of the school leaders showed a positive attitude towards school reopening amidst Covid-19.

I N T R O D U C T I O N

The COVID-19 pandemic has seriously disrupted the educational process in every educational institution worldwide. Like many other countries, Pakistan has had to close schools and educational facilities twice over the past year to stop the spread of the COVID-19 pandemic. This study surveyed early childhood school leaders in Pakistan, with an aim to understand their readiness and fears regarding schools reopening during COVID-19. The study addressed the following questions: What are the fears of early childhood educational leaders pertinent to schools reopening amidst the COVID-19 pandemic?
What are the knowledge, attitudes, and practices of early childhood educational leaders regarding COVID-19?

M E T H O D S

This quantitative study was carried out to explore and describe school leaders' knowledge, attitude, practices and fears towards school reopening in Covid-19. A cross-sectional online survey was used to collect data from ECE school leaders in Karachi, a metropolitan city of Pakistan. The data collection was undertaken from May 2020 to August 2020, where an online survey link was shared with schools in Karachi and globally to participate in this study. The participants of this webinar were also invited to this study, and 154 respondents completed the online survey questionnaire. As presented in Table 1, altogether 154 ECE leaders based in Karachi participated in this study, and their demographic details are provided. The survey questionnaire was adapted from a customized Knowledge, Attitude and Practices (KAP) instrument developed by researchers from Wuhan, China [9]. The questionnaire was in English and included questions that were relevant to the purpose of the study. The questionnaire comprised four sections: 1) study information along with the consent of participation; 2) demographic variables; 3) items related to knowledge of Covid-19 (9 items), practices as per Covid-19 SOPs (5 items), and attitude, fear, and confidence towards school reopening (9 items); 4) open-ended questions to capture an in-depth picture of respondents' attitudes and fears towards school reopening. The survey required approximately 20 minutes to be completed. Cronbach's alpha value was calculated and found to be in the acceptable range (alpha = 0.702). The data were analyzed through SPSS version 23 by employing descriptive and inferential statistics. Descriptive statistics were employed to compute frequencies and percentages for demographics, whereas means and standard deviations were calculated for other constructs. The items were coded (No = 0, Yes = 1), whereas negative items were re-coded by giving a value of 1 to each correct response. Similarly, the overall scores for each construct were computed by adding all the correct responses. On the other hand, items related to fear, confidence and resources were coded (none = 0, low = 1, moderate = 2, high = 3).
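For illustration only (hypothetical data, not from the study), the scoring scheme just described can be sketched as follows; with random placeholder responses the printed alpha will not reproduce the reported 0.702.

```python
import numpy as np

# Minimal sketch (hypothetical data) of the scoring described in the Methods:
# items coded No = 0 / Yes = 1, negatively-worded items reverse-coded so that
# 1 always marks a correct response, construct scores summed, and Cronbach's
# alpha computed for internal consistency.

rng = np.random.default_rng(0)
n_respondents, n_items = 154, 9
negative_items = [3, 5]                    # hypothetical reverse-keyed items

responses = rng.integers(0, 2, size=(n_respondents, n_items))
scored = responses.copy()
scored[:, negative_items] = 1 - scored[:, negative_items]   # reverse-code

knowledge_score = scored.sum(axis=1)       # 0..9 correct responses
print("mean knowledge score:", knowledge_score.mean().round(2),
      "SD:", knowledge_score.std(ddof=1).round(2))

def cronbach_alpha(items):
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(scored), 3))
```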
R E S U L T S

Results presented in Table 2 show the frequency and percentage of respondents' responses. The overall mean score of the knowledge constructs (correct responses) was 6.8, with 1.3 standard deviations. In other words, on average, the respondents responded to seven questions correctly out of nine questions, which shows that they have good knowledge of Covid-19. However, the minimum values reveal that a few respondents have limited knowledge about the Covid-19 infection. On the other side, almost 7% of the total respondents responded correctly to all the knowledge-based questions. Specifically, the item-wise analysis reveals that an overwhelming majority (i.e., 96%) of the total respondents know about 'clinical symptoms of Covid-19'. Further, 91% of the school leaders viewed that children can also be affected by Covid-19, while 9% of the respondents think that children cannot be affected by Covid-19. Moreover, a considerable percentage of the respondents (38%) reported availability of a cure for Covid-19; conversely, most (62%) respondents accepted the unavailability of treatment for Covid-19. Many respondents possess misconceptions about the nature of the Covid-19 virus; for instance, 70% of them understand that Covid-19 is not airborne. Around 65% of the respondents considered that the Covid-19 virus is not surface borne. On the other hand, more than 90% of the respondents reported that the Covid-19 virus spreads through respiratory droplets; therefore, most school leaders (90%) believed that all

School leaders' attitude towards school reopening amidst Covid-19: Results presented in Table 3 revealed school leaders' attitude towards school reopening amidst the Covid-19 pandemic and lockdown. Overall, the respondents showed a positive attitude towards school reopening, as the mean score for positive attitude was 4.5 with 0.79 standard deviations. The minimum and maximum scores lie between 2 and 6; around 60% of the respondents scored 4 and above for this construct. Further, an overwhelming majority (95%) exhibited their concern about the transmission of Covid-19 in school. Almost 3 out of 4 respondents reported that the school should remain closed. On the other hand, the school leaders felt confident about schools reopening, as 86% of the respondents viewed that they can confidently open their school with well-planned SOPs. Additionally, it was quite encouraging that a vast majority of the respondents (i.e., 97%) thought they would prepare their teachers for safe school reopening with Covid-19 SOPs. Furthermore, 88% of the respondents were also confident to communicate with and involve parents and community members for safe school reopening to help the school leaders effectively implement COVID-19 SOPs. [Table note: Min. score (2), Max. score (9); *shows correct response]

D I S C U S S I O N

The results from our study suggest that most of the school leaders have some preliminary knowledge related to the COVID-19 virus and its spread. However, there were two things which showed that the school leaders need further understanding and information. The findings suggested that the understanding about COVID-19 being an airborne, surface-borne or respiratory-borne disease was inconsistent. This highlights a matter of concern, as many of the school leaders perceived that COVID-19 is not transmitted through airborne and surface-borne routes. Another contradictory view encountered in this research was that, at one point, the school leaders had mixed thoughts as to whether COVID-19 was a surface-borne virus. However, they did mention that they clean the surfaces as part of their SOPs practice [10]. This depicts that the school leaders follow the SOPs without enquiring about the rationale or knowing the reason for their actions. A similar study was conducted in Jammu and Kashmir, India, which used the same KAP questionnaire to detect the knowledge, attitude, and practice of the general population [11]. Their questionnaire did not have the statement "COVID-19 is an airborne virus"; thus, that study did not have any findings which might overlap with ours. While many of the school leaders believe that schools can be a potential source of the spread of COVID-19, the initiation of online learning was greatly encouraged and adopted by many schools in Pakistan. The initiation of TeleSchool has been a tremendous effort from the government in a concise period [12]. While the TeleSchool initiative is growing in popularity and viewership, it has its fair share of challenges, such as student engagement and retention, age-appropriateness of the content, and assessment for learning [13]. Distance learning is also a challenge for most families who may not have internet facilities and paraphernalia for online learning [14].
Managing children's learning routines at home is also a newfound challenge for most families [15]. According to the government, school reopening is also not possible since the public has yet to understand the gravity of the situation and follow the standard operating procedures (SOPs) for their safety and that of others [16]. The responsible reopening of schools is deemed a Herculean task across Pakistan due to many reasons, primarily lack of professionalism, resources, commitment, and adherence to the law [14]. The government believes that schools could become nurseries for the contagion to spread quickly, weakening the mitigation strategies that are already not being followed effectively by the people [17]. Schools will be ineffective in implementing safety and prevention SOPs on their premises for children and staff. The situation described above sums up the incapable and ineffective infrastructure in the country, and the impact will not spare the majority of schools [19]. School leaders may not be able to conjure safety protocols for their teachers and students, given the many restrictions. With collective responsibility, we may curb the spread of the virus and hope for the situation to make school reopening possible, with new variants appearing every now and then. The knowledge and implementation have not been standardized in our country; hence, opening schools and resuming academic activity would be difficult to monitor, and managing the consequences might be an additional burden [20].

C O N C L U S I O N S

The study concludes that some proper training for school leaders regarding knowledge and practices of Covid-19 would help prepare them for safe school reopening. In addition, the majority of the school leaders showed a positive attitude towards school reopening amidst Covid-19. The contradictory responses from participants regarding the spread of the virus and implementation of SOPs leave little room for resumption of in-person teaching/learning practices in the wake of the remitting pattern of Covid-19 cases. This offers an excellent opportunity for Pakistan to work on remote-learning programs and ensure the availability of equipment to remote regions of the country.

C o n f l i c t s o f I n t e r e s t

The authors declare no conflict of interest.

S o u r c e o f F u n d i n g

The author(s) received no financial support for the research, authorship and/or publication of this article.
2022-12-09T16:02:44.040Z
2022-10-31T00:00:00.000
{ "year": 2022, "sha1": "78de3ccab8b2f0251399a21e4a724690919a2252", "oa_license": "CCBY", "oa_url": "https://www.thejas.com.pk/index.php/pjhs/article/download/209/176", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "3e2122bd39302d9797212126a1777d663d24cda3", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
211091421
pes2o/s2orc
v3-fos-license
Interaction of APOE4 alleles and PET tau imaging in former contact sport athletes

Highlights
• Cortical PET tau was compared between APOE4 carriers and non-carriers.
• APOE4 carriers had higher cortical PET tau compared to non-carriers.
• APOE4 as a risk factor for tau accumulation in former contact sports athletes.

aggression, depression, memory and cognitive impairments, as well as heightened suicidality [McKee et al., 1, McKee et al., 2009, Omalu et al., Jun, Hazrati et al., 2013]. Although the pathological changes of CTE were originally described in boxers [Martland, 1928 Oct 13, Critchley, 1949, Millspaugh, 1937], confirmed CTE cases come from a variety of contact sports including American football, hockey, wrestling, and soccer, as well as from military personnel and non-sport-related concussions [McKee et al., 2009, Tartaglia et al., 2014]. Two recent studies found strong dose-response relationships between the number of years playing contact sports and CTE neuropathology [Mez et al., 2020, Stern et al., 2019]. The clinical and pathological presentations of CTE overlap with those of Alzheimer's disease (AD) and frontotemporal lobar degeneration, but CTE pathology has its own distinct features [McKee et al., 2009, Tartaglia et al., 2014]. The pathognomonic lesion of CTE, as defined by a National Institute of Neurological Disease and Stroke (NINDS)/National Institute of Biomedical Imaging and Bioengineering (NIBIB) meeting, consists of irregular hyperphosphorylated tau deposits in neurons and astroglia, preferentially at the depths of the sulci in the superficial cortical layers and around blood vessels [McKee et al., 1]. β-amyloid and TAR DNA-binding protein 43 inclusions are also reported in some studies [McKee et al., 1, McKee et al., 2009, Omalu et al., Jun, Hazrati et al., 2013, Omalu et al., 1, Gavett et al., 1, Corsellis et al., Aug, 16, Corsellis and Brierley, 1959]. There are currently no antemortem biomarkers for the tau pathology of CTE, and the diagnosis is made based on post-mortem neuropathological examination of the brain tissue. Phosphorylated tau, the pathological substrate of CTE, is similar to that observed in Alzheimer's disease but has its own distinct features [Falcon et al., Apr]. The use of positron emission tomography (PET) imaging with [F-18]AV-1451 ([F-18]T807; Flortaucipir, AVID Radiopharmaceuticals), a tau-specific tracer, allows the detection of abnormal aggregates of phosphorylated tau protein in vivo in AD [Marquié et al., 2015]. Its use in AD has been widely examined, and tracer retention correlated with post-mortem neurofibrillary tangles (NFTs) containing tau in the form of paired helical filaments [Marquié et al., 2015, Lemoine et al., 2017, Jovalekic et al., 2017]. As well, binding was higher in AD patients than in patients with mild cognitive impairment or healthy controls, and tracer binding was associated with worsening cognitive function [Cho et al., 2016]. PET imaging with the [F-18]AV-1451 tau-specific tracer shows promise as a potential in vivo biomarker of CTE pathology; however, its ability to reliably detect CTE lesions is unclear and requires more investigation [Robinson et al., 3, 1, Marquié et al., 28]. One study reported mildly elevated PET tau binding in two out of nine amyloid-negative patients at risk for CTE, with the distribution pattern consistent with CTE pathology stages III-IV. This result suggests PET tau might not be sensitive to CTE lesions in early disease stages [1]. Earlier case reports of this tracer in formerly concussed athletes presented cases of former National Football League (NFL) players with a history of multiple concussions [Mitsis et al., 2014, Dickstein et al., Sep]. The first case was of a 71-year-old with memory impairments and a clinical profile similar to AD. The amyloid PET scan was negative, indicating no evidence of AD pathology. The PET tau tracer [F-18]AV-1451 showed predominantly subcortical signal, with the highest signal coming from the basal ganglia and substantia nigra [Mitsis et al., 2014]. Tracer retention in the basal ganglia and substantia nigra regions has previously been pathologically confirmed to be off-target binding [Marquié et al., 2015], but a more recent study described the basal ganglia binding to be correlated with age-related iron accumulation in that region [Marquié et al., 2015, Choi et al., 1]. The second case of [F-18]AV-1451 tracer binding was in a 39-year-old athlete with progressive neuropsychiatric issues, specifically emotional lability and irritability. The amyloid scan was negative, largely ruling out AD pathology, and the PET [F-18]AV-1451 tau scan showed a higher tracer signal in the cortex [Dickstein et al., Sep]. Other signal increases were noted in the midbrain, globus pallidus, and the hippocampus, with the midbrain and globus pallidus being pathologically confirmed off-target binding sites [Marquié et al., 2015, Dickstein et al., Sep, Choi et al., 1]. Another study examined the use of the same PET tau tracer in veterans with blast neurotrauma, and found increased tracer signal in the frontal, occipital, and cerebellar brain regions [Robinson et al., 3]. Finally, a more recent cohort study using the [F-18]AV-1451 PET tau tracer found increased bilateral superior frontal, bilateral medial temporal and left parietal SUVRs in 26 former National Football League players compared to 31 controls. Tau SUVRs in these regions correlated with total years of tackle football amongst the former players cohort [Stern et al., 2019]. Even though the exact CTE incidence amongst athletes is unclear, not all individuals with exposure to contact sports and repetitive head impacts develop CTE [Hazrati et al., 2013, Omalu et al., 1, Mez et al., 25, Bieniek et al., 1]. Genetics might play a role in increasing CTE susceptibility. There is growing evidence that some genetic polymorphisms increase the risk of neurodegenerative diseases [Zhang et al., 29, Corder et al., 1993]. Allelic variants of the apolipoprotein E (APOE) gene have been implicated in a number of neurodegenerative diseases [Giau et al., 16]. The two missense polymorphisms in APOE underlie the three molecular isoforms [Mahley and Apolipoprotein, 2016 Jul 1]: APOE epsilon 2 (ε2), APOE epsilon 3 (ε3), and APOE epsilon 4 (ε4). APOE4 has been shown to increase the risk of AD [Corder et al., 1993, Mahley and Apolipoprotein, 2016 Jul 1]. The exact mechanism by which APOE4 influences AD risk is not yet understood; however, increasing evidence points to the amyloid hypothesis, where APOE4 directly and indirectly influences amyloid beta metabolism [Kanekiyo et al., 2014]. The relationship between APOE alleles and tau pathology is less clear. Some authors propose an interaction between amyloid and tau proteins in the brain, where amyloid fibrils increase tau phosphorylation and aggregation [Adalbert et al., 2007]. Therefore, APOE4 may have an indirect effect on tau accumulation through amyloid.
Earlier case reports of this tracer in formerly concussed athletes presented former National Football League (NFL) players with a history of multiple concussions [Mitsis et al., 2014; Dickstein et al.]. The first case was of a 71-year-old with memory impairments and a clinical profile similar to AD. The amyloid PET scan was negative, so there was no evidence of AD pathology. The PET tau tracer [F-18]AV-1451 showed predominantly subcortical signal, with the highest signal coming from the basal ganglia and substantia nigra [Mitsis et al., 2014]. Tracer retention in the basal ganglia and substantia nigra regions has previously been pathologically confirmed to be off-target binding [Marquié et al., 2015], but a more recent study described the basal ganglia binding to be correlated with age-related iron accumulation in that region [Marquié et al., 2015; Choi et al.]. The second case of [F-18]AV-1451 tracer binding was in a 39-year-old athlete with progressive neuropsychiatric issues, specifically emotional lability and irritability. The amyloid scan was negative, largely ruling out AD pathology, and the PET [F-18]AV-1451 tau scan showed a higher tracer signal in the cortex [Dickstein et al.]. Other signal increases were noted in the midbrain, globus pallidus, and hippocampus, with the midbrain and globus pallidus being pathologically confirmed off-target binding sites [Marquié et al., 2015; Dickstein et al.; Choi et al.]. Another study examined the use of the same PET tau tracer in veterans with blast neurotrauma, and found increased tracer signal in the frontal, occipital, and cerebellar brain regions [Robinson et al.]. Finally, a more recent cohort study using the [F-18]AV-1451 PET tau tracer found increased bilateral superior frontal, bilateral medial temporal and left parietal SUVRs in 26 former National Football League players compared to 31 controls. Tau SUVRs in these regions correlated with total years of tackle football within the former player cohort [Stern et al., 2019].

Even though the exact CTE incidence amongst athletes is unclear, not all individuals with exposure to contact sports and repetitive head impacts develop CTE [Hazrati et al., 2013; Omalu et al.; Mez et al.; Bieniek et al.]. Genetics might play a role in increasing CTE susceptibility. There is growing evidence that some genetic polymorphisms increase the risk of neurodegenerative diseases [Zhang et al.; Corder et al., 1993]. Allelic variants of the apolipoprotein E (APOE) gene have been implicated in a number of neurodegenerative diseases [Giau et al.]. Two missense polymorphisms in APOE underlie the three molecular isoforms [Mahley, 2016]: APOE epsilon 2 (ε2), APOE epsilon 3 (ε3), and APOE epsilon 4 (ε4). APOE4 has been shown to increase the risk of AD [Corder et al., 1993; Mahley, 2016]. The exact mechanism by which APOE4 influences AD risk is not yet understood; however, increasing evidence points to the amyloid hypothesis, where APOE4 directly and indirectly influences amyloid beta metabolism [Kanekiyo et al., 2014]. The relationship between APOE alleles and tau pathology is less clear. Some authors propose an interaction between amyloid and tau proteins in the brain, where amyloid fibrils increase tau phosphorylation and aggregation [Adalbert et al., 2007]. Therefore, APOE4 may have an indirect effect on tau accumulation through amyloid.
However, some in vitro and animal studies demonstrated a direct effect of APOE on tau pathogenesis [Strittmatter et al., 1994; Shi et al., 2017]. In the context of traumatic brain injuries (TBIs), APOE4 is associated with poor clinical outcomes in patients with TBIs [Mahley, 2016]. Additionally, the APOE4 allele has been associated with elevated postconcussion symptoms in military veterans [Merritt et al.], and with increased phosphorylated tau levels in the brains of a blast-injury mouse model [Cao et al.]. This provides limited but suggestive evidence for an association between APOE and tau pathology in TBI cases. Another polymorphism implicated in neurodegeneration is in the microtubule-associated protein tau (MAPT) gene, which is responsible for the production of tau protein [Pittman et al.]. Mutations in the MAPT gene may lead to abnormal structure and function of tau, and currently almost 60 MAPT mutations are linked to neurodegeneration. There are two main MAPT haplotypes, H1 and H2 [Zhang et al.]. The H1 haplotype is associated with an increased risk of developing the 4-repeat tauopathies: progressive supranuclear palsy (PSP) and corticobasal degeneration (CBD). Previous research highlighted that the H1 haplotype is significantly overrepresented in pathologically confirmed CBD and PSP populations compared to controls [Houlden et al., 2001; Pastor et al.; Pittman et al.; Baker et al.]. The literature examining MAPT haplotypes in relation to head impacts and CTE is limited; however, one study found a slight increase in the frequency of the MAPT H1/H1 genotype in men with contact sports exposure and confirmed CTE pathology, compared to men with contact sports exposure without CTE pathology and to clinical controls [Bieniek et al.].

This study examines the effect of the APOE4 allele and MAPT H1H1 on SUVRs of the PET tau-specific [F-18]AV-1451 tracer in former professional contact sport athletes at risk for CTE. We hypothesize that carriers of the APOE4 allele and/or the H1H1 diplotype will have a higher PET [F-18]AV-1451 signal.

Participants

Thirty-eight athletes engaged in sports with a high risk of concussions were included as part of this ongoing study. Recruitment was completed through the Canadian Football League (CFL) Alumni Association and the Toronto Western Hospital (Toronto, Canada) concussion clinic. Inclusion criteria were participants under 85 years old who are fluent in English and are former professional or semi-professional sport athletes at high risk of concussions. Exclusion criteria included the diagnosis of a neurological or psychotic disorder prior to the concussions, systemic illnesses affecting the brain, or lesions seen on magnetic resonance imaging (MRI). Due to the invasiveness of the procedure, only nine of the 38 participants agreed to undergo a lumbar puncture so that their CSF could be tested for AD biomarkers. For participants with no CSF available, structural MRI scans and PET tau imaging were examined by a cognitive neurologist (MCT) for evidence of an AD pattern. All participants underwent comprehensive neuropsychological and neurological assessments, neuroimaging and blood collection during the same consecutive two-day visit. The study was approved by the Research Ethics Board of the University Health Network, and written consent was obtained from all participants.
Concussion exposure was determined based on the player's recall of injury, using the concussion definition provided by the Concussion in Sport Group, as detailed in their most recent consensus statement on concussion in sport [McCrory et al., 2016]. In addition, all players underwent a semi-structured interview to verify the information and to jog memory for any events they may not have recalled.

Biofluid collection and genetics

Lumbar puncture for CSF collection was performed following the AD Neuroimaging Initiative (ADNI) protocol [Jack Jr et al., 2008]. After CSF collection into polypropylene tubes, a sandwich ELISA method was used to measure Aβ42, phosphorylated tau (p-tau) and total tau (t-tau) levels according to the manufacturer's instructions [Maddalena et al., 2003]. AD pathology was considered present if p-tau > 68 pg/mL and the Aβ42-to-t-tau index was < 0.8 [Blennow et al., 2015]. Blood was collected from all participants, and genomic DNA was extracted from whole blood using a Qiagen kit. The APOE genotypes and MAPT haplotypes were determined as previously described.

Neuroimaging

PET tau imaging with 5 mCi of [F-18]AV-1451 tracer was performed. Thirty-six participants were scanned using a Biograph HiRez XVI PET/CT scanner (Siemens Molecular Imaging, Knoxville, TN, USA), while 2 participants were scanned using a 3D High Resolution Research Tomograph (HRRT) PET scanner (CPS/Siemens, Knoxville, TN, USA). Following a 45-minute uptake time, static PET images (45-120 min) were acquired for a duration of 75 min. T1 structural MRI images were acquired using a 3T GE Signa scanner with an 8-channel head coil and the following scan parameters: TE = 5 ms, TR = 12 ms, flip angle = 45°, 128 axial slices, slice thickness = 1.5 mm, 256 × 256 matrix, FOV = 24 × 24 cm. The region of interest (ROI) analysis was completed on the PET data with the in-house ROMI software, using the ROI delineation method as previously described [Rusjan et al.]. The PET images were corrected for head motion and partial volume effect [Müller-Gärtner et al., 1992]. For a single ROI of the cortical grey matter (excluding the cerebellum), SUVRs were calculated from the PET data between 50 and 80 min and, in a subset of the participants, from the data between 80 and 100 min post-injection. The cerebellar grey matter was used as the reference region.

Neuropsychological testing

The following tests, with known sensitivity to TBIs and neurodegeneration, were used for this study: trail making test (TMT) parts A and B [Kortte et al., 2002; Strauss et al., 2006], Rey auditory verbal learning test (RAVLT) [Schmidt, 1996] and Rey visual design learning test (RVDLT) [Strauss et al., 2006], symbol digit modalities test (SDMT) [Smith, 1982; Lezak et al., 2004], and digit span backward and forward [Wechsler, 1997]. Personality was assessed using the personality assessment inventory (PAI) [Morey, 1991]. The scores were standardized based on published norms [Strauss et al., 2006; Smith, 1982; Wechsler, 1997; Geffen et al., 1990; Heaton, 1992]. Higher scores on the TMT A & B, RAVLT, RVDLT, SDMT, and digit span forward & backward assessments indicate better cognitive functioning, while higher scores on the PAI depression and aggression assessments indicate higher levels of impairment. A cut-off threshold of 1.5 standard deviations below the mean was used to signify impaired functioning on the TMT A & B, RAVLT, RVDLT, SDMT, and digit span forward & backward assessments, and a threshold of 1.5 standard deviations above the mean on the PAI aggression and depression assessments.
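As a minimal illustration of how such a ratio is formed (a sketch under the assumption that frame-averaged ROI activities have already been extracted; the function and array names are ours, not part of the ROMI pipeline):

```python
import numpy as np

def suvr(target_frames, reference_frames):
    """Standardized uptake value ratio: mean ROI activity over the time window,
    normalized to the cerebellar grey matter reference region."""
    return np.mean(target_frames) / np.mean(reference_frames)

# Hypothetical frame-averaged activities for the 50-80 min window
cortical_gm = np.array([10.2, 10.8, 11.1])
cerebellar_gm = np.array([8.1, 8.3, 8.0])
print(f"cortical grey matter SUVR = {suvr(cortical_gm, cerebellar_gm):.2f}")
```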
Statistical analysis

Statistical analysis was completed using IBM SPSS Statistics version 24 (IBM Corp., Armonk, NY, USA). All between-group demographic and neuropsychological testing comparisons were completed using an independent samples t-test, with the type-of-scanner comparison completed using Fisher's exact test. The number of concussions was not normally distributed, so all between-group comparisons of concussion number were completed using the Mann-Whitney U test. Due to the small sample size, participants had to be grouped based on APOE4 carrier and non-carrier status. Regarding the MAPT gene, carriers of the H1H2 and H2H2 diplotypes had to be grouped together and compared to carriers of the H1H1 diplotype. The difference in mean cortical grey matter PET [F-18]AV-1451 SUVRs between carriers and non-carriers of specific alleles and diplotypes was determined using one-way ANCOVA, controlled for age. Neuropsychological assessment scores between APOE4 carriers and non-carriers were also compared using an independent samples t-test.

The cortical PET tau SUVR values in the study population fell on a continuum, ranging from 0.95 to 1.57. In order to compare the frequency of the high-risk APOE4 allele between the lowest and highest groups based on cortical PET tau SUVR values, participants were divided into tertiles based on mean cortical PET [F-18]AV-1451 SUVR values, and the middle group was dropped from the analysis, leaving the high and low groups to be compared. We then completed a hypothesis-driven comparison, using Fisher's exact test, of APOE4 frequency between the high and low cortical PET tau groups, expecting a higher frequency of APOE4 carriers in the high cortical grey matter PET tau group. Bonferroni correction was used to account for multiple comparisons of mean cortical grey matter SUVR values between genotypes, and both adjusted and non-adjusted p-values are reported, with a significance level set at p < 0.05. For neuropsychological assessment score comparisons between genotypes, Bonferroni-adjusted p-values with a significance level of p < 0.05 were reported only if any unadjusted p-values were significant at p < 0.05.

The number of self-reported concussions for the whole cohort (N = 38) ranged from 0 to 60 (6.16 ± 9.61). For those who had self-reported concussions (data presented for N = 35, because 2 participants did not recall any concussions and 1 participant did not remember the date of the last concussion), the number of years since the last reported concussion ranged from 0.5 to 61 years (20.90 ± 16.27). The 2 participants with no reported concussions were included in the study because each had ≥10 years of play in contact sports and was very likely exposed to subconcussive blows. The nine of 38 participants who had cerebrospinal fluid (CSF) available were all AD-negative. The remaining 29 participants were examined for the presence of an AD-like pattern on MRI (i.e., medial temporal atrophy and/or precuneus/posterior cingulate atrophy) and on PET [F-18]AV-1451 SUVR for increased tracer uptake, specifically in the middle temporal lobe and posterior cortical regions including the parietal lobe, and no such pattern was seen. Although it cannot be ruled out entirely, AD pathology is unlikely in this cohort.
The APOE genotype distribution of the entire cohort was as follows: 2 individuals with APOE2/APOE4, 5 individuals with APOE3/APOE2, 20 individuals homozygous for APOE3, 10 individuals with APOE3/APOE4, and 1 individual homozygous for the APOE4 allele. The MAPT diplotype distribution of the entire cohort was as follows: 21 individuals with H1H1, 14 individuals with H1H2, and 3 individuals with the H2H2 diplotype.

Neuropsychological assessment results of the participant cohort

Overall, the participant cohort of this study proved to be quite high-functioning, with only a few individuals showing impaired scores on neuropsychological testing. The distribution of performance on the neuropsychological assessments was as follows: 1/38 participants had impaired performance on the TMT A & B assessments; 1/37 participants had impaired performance on the RAVLT, SDMT oral score, and digit span forward assessments; 7/37 participants had impaired performance on the RVDLT assessment; 2/38 participants had impaired performance on the SDMT written score; and 5/38 participants had impaired performance on the PAI depression and aggression assessments. Finally, no participants had impaired performance on the digit span backward assessment. The impaired scores for each neuropsychological assessment between APOE and MAPT genotype groups are presented in Tables 1 and 2. The impaired scores for each neuropsychological assessment for the groups divided into tertiles based on cortical grey matter PET tau SUVR values are presented in Tables 4 and 5.

Comparison between the 50-80 and 80-100 min post-tracer injection times

All PET SUVR values reported were computed using the 50-80 min post-tracer injection time. A subset of the participants (N = 24) had results available for the 80-100 min post-tracer injection time, allowing a direct comparison between the time intervals. The cortical grey matter PET SUVRs were not significantly different between the 2 time intervals for these 24 participants (p > 0.4).

The relationship between APOE4 and cortical grey matter PET tau

The APOE4 carrier and non-carrier groups did not differ in demographics (Table 1). No difference in demographics was found between the MAPT H1H1 and H1H2/H2H2 diplotype groups (Table 2). One-way ANCOVA controlled for age showed a significant difference in cortical grey matter PET [F-18]AV-1451 SUVR values between the APOE4 carrier and non-carrier groups (p = 0.010); however, no significant difference in SUVR was found between the MAPT diplotype groups (p = 0.895). After applying Bonferroni correction for multiple comparisons, the relationship between the APOE4 allele and cortical SUVRs remained significant (p = 0.020) (Table 3).

The relationship between APOE and MAPT genotypes and neuropsychological assessments

The neuropsychological assessment results for the APOE4 carrier/non-carrier groups are summarized in Table 1, and those for the MAPT H2 carrier and non-carrier groups in Table 2. The independent samples t-test showed no significant difference in the scores on the TMT A & B, RAVLT, RVDLT, SDMT, digit span forward & backward, and PAI depression and aggression assessments (all unadjusted p > 0.06) between the APOE4 carriers and non-carriers. No significant differences in the neuropsychological assessment scores (all unadjusted p > 0.15) were found between the MAPT H2 carrier and non-carrier groups.

Notes to Tables 1 and 2: independent samples t-test, Fisher's exact test and Mann-Whitney U comparisons; unadjusted significance level set at p < 0.05 (2-sided). The number of participants with impaired scores is presented underneath the mean scores for each neuropsychological assessment in each group. (a) Data are not included for 1 participant because he did not recall any concussions. (b) Data are not included for 2 participants because 1 did not recall any concussions and 1 could not recollect the date of the last concussion. (c) One participant's score is missing due to refusal to undergo the full neuropsychological testing; a reduced battery was administered instead.
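The age-adjusted genotype comparison reported above amounts to an ordinary least-squares model with the genotype factor and age as regressors; a minimal sketch with statsmodels on invented toy data (the numbers below are illustrative, not the study's measurements):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one row per participant (values invented for illustration).
df = pd.DataFrame({
    "suvr":  [1.42, 1.18, 1.35, 1.05, 1.50, 1.22, 1.31, 1.12],
    "apoe4": [1, 0, 1, 0, 1, 0, 1, 0],   # 1 = APOE4 carrier
    "age":   [62, 55, 70, 48, 66, 59, 64, 51],
})

# One-way ANCOVA as OLS: genotype effect on cortical SUVR, controlled for age.
model = smf.ols("suvr ~ C(apoe4) + age", data=df).fit()
print(model.summary().tables[1])  # the C(apoe4) row carries the adjusted group effect
```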
Genotype counts between high and low cortical PET tau groups

In order to compare the frequency of APOE4 carriers and non-carriers according to cortical PET tau, we divided the entire cohort (N = 38) into three equal groups based on PET [F-18]AV-1451 SUVR values and dropped the middle group, leaving the low (N = 13; SUVR ≤ 1.278) and high (N = 13; SUVR ≥ 1.384) PET tau groups for comparison. The demographics of the high and low PET tau groups did not differ (Table 4). The independent samples t-test showed no significant difference in the scores on the TMT A & B, RAVLT, RVDLT, SDMT, digit span forward & backward, and PAI depression and aggression assessments, following Bonferroni correction, between the high and low cortical PET tau groups. Fisher's exact test showed a significantly higher frequency of APOE4 allele carriers in the high cortical grey matter PET SUVR group (p = 0.048; one-sided) (Fig. 1). The demographics, neuropsychological assessment scores, and genotype counts for the middle tertile that was dropped from the statistical analysis are presented in Table 5.
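The tertile split and the one-sided Fisher comparison can be sketched as follows; the SUVR values and carrier labels are random stand-ins for the real data:

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
suvr = rng.uniform(0.95, 1.57, size=38)      # hypothetical cortical SUVRs
carrier = rng.random(38) < 13 / 38           # hypothetical APOE4 carrier status

# Tertile boundaries on SUVR; the middle tertile is dropped.
lo_cut, hi_cut = np.quantile(suvr, [1 / 3, 2 / 3])
low, high = suvr <= lo_cut, suvr >= hi_cut

# 2x2 contingency table: carrier counts in the high vs low PET tau groups.
table = [[carrier[high].sum(), (~carrier[high]).sum()],
         [carrier[low].sum(),  (~carrier[low]).sum()]]
odds, p = fisher_exact(table, alternative="greater")  # one-sided, as in the text
print(f"one-sided Fisher p = {p:.3f}")
```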
Discussion

To our knowledge, this is the first study to examine the relationship between APOE, MAPT and cortical tau burden, as seen with PET [F-18]AV-1451 imaging, in a cohort of former professional and semi-professional sport athletes with multiple concussions or sub-concussive hits, at risk of delayed neurodegeneration, specifically CTE. The results of this study showed a significant association between the presence of an APOE4 allele and higher cortical grey matter PET [F-18]AV-1451 SUVR, currently believed to be a marker of tau burden in AD. As well, APOE4 carriers were more frequent in the high cortical PET tau group compared to the low cortical tau group. No association was found between MAPT H1H1 carrier status and cortical grey matter PET [F-18]AV-1451 SUVR.

The exact direct or indirect mechanism that implicates APOE in tau burden is still unclear. APOE is present in the cytoplasm of nerve cells, where it may interact with other molecules in an isoform-dependent manner [Huang et al., 1995]. Tau is a microtubule-associated protein implicated in axonal transport [72], and previous findings show a decreased affinity of APOE4 towards the microtubule-binding domain of the tau protein [Huang et al., 1995]. This makes tau more vulnerable to being hyperphosphorylated, and therefore unable to bind microtubules, leading to its aggregation and consequently to pathology [Terrell et al.]. Furthermore, APOE4 showed increased binding to Aβ, which is implicated in increased senile plaque formation in AD [Tierney et al.; Strittmatter et al.]. Autopsy studies showed greater staining for senile plaques in the brains of APOE4 homozygotes than APOE3 homozygotes [Saunders et al., 1993; Strittmatter et al.].

In the most recent literature, tau and amyloid were proposed to work together synergistically to amplify each other's abnormal aggregation and the subsequent tau-associated cognitive decline, specifically in the context of AD [Ittner and Götz, 2011]. In the current study, all 9 participants who had CSF analysis were negative for AD biomarkers. The remaining 29 participants showed no typical AD atrophy on MRI or tracer signal retention on PET, so there is no obvious evidence to suspect that the results of our study are due to an underlying AD pathology. Our cohort is one of former contact sport athletes at risk for neurodegeneration, especially CTE, and the pathophysiology behind CTE is mainly defined by abnormal aggregates of hyperphosphorylated tau. The exact pathophysiology behind the toxic function of tau aggregates remains unclear. However, it is hypothesized that abnormal aggregates of hyperphosphorylated tau disrupt normal cellular transport within the axons, leading to synapse loss and ultimately neuronal death, resulting in disrupted neural circuits and eventual cognitive decline [Spillantini and Goedert, 2013]. Previous studies highlight a close relationship between tau pathology, neuronal loss and disease severity in AD and other tauopathies [Williams and Lees, 2009; Iqbal et al.].

Table 3. Difference in mean cortical grey matter SUVRs based on genotype (mean ± standard deviation). One-way ANCOVA, controlled for age; N.S. = not significant; Bonferroni-adjusted p-values significant at p < 0.05. Notes to Table 4: same tests and significance conventions as Tables 1 and 2; data are not included for 2 participants (1 did not recall any concussions, 1 could not recollect the date of the last concussion); data are not included for 1 participant with no reported concussions; one participant completed a reduced battery in place of the full neuropsychological testing.

The lack of association between tau burden and MAPT H1H1 may not be unexpected, given that this diplotype's prevalence is elevated in PSP and CBD, wherein the underlying tau pathology is a 4-repeat isoform tauopathy with straight filaments, whereas CTE is similar to AD, with a mixture of both 3- and 4-repeat tau and paired helical filaments, and so very different [Woerman et al.].

The role of APOE4 in concussion remains unclear. Most previous studies examining the potential effect of APOE4 included TBIs of various severity in diverse populations, making between-study comparisons difficult. An association between APOE4 alleles and concussion has been reported in college athletes [Terrell et al.; Tierney et al.], and there is evidence for an increased risk of bleeding following TBI in APOE4 carriers, which may prolong recovery [Tierney et al.]. A prospective study in college athletes did not, however, find an association between APOE4 and the risk of a first concussion [Kristman et al., 2008]. Amongst army veterans, APOE4 allele carriers showed poorer performance on memory tasks following TBI compared to non-carriers, but no difference in executive function [Crawford et al.].
A meta-analysis showed an association between APOE4 and an increased risk of poor outcome at 6 months post-TBI [Zhou et al.]. However, another study using the same 6-month post-TBI follow-up duration found no relationship between APOE4 and patient prognosis [Chamelian et al.]. Specific to athletes, the presence of APOE4 has been associated with increased symptom reporting following a sport-related concussion [Merritt et al.], and boxers who were APOE4 carriers showed worse neurological outcomes [Jordan et al.]. There does not appear to be an increased risk of suffering a concussion in APOE4 carriers [Terrell et al.; Tierney et al.]. The results of our study are similar to previous research, in that we found no significant association between APOE4 and concussion history or performance on neuropsychological assessments; however, we did find that APOE4 carriers had higher cortical PET tau signal.

There are a number of limitations to the current study. First, the small sample size and the lack of a replication cohort limit the statistical power. As well, the total years of play were not collected for all athletes, missing an opportunity to examine the effect of total years of play on neuroimaging and fluid biomarkers. Next, the participant cohort is highly varied with regard to age, concussion number, and performance on neuropsychological tests. There is also no matched healthy control group. The presence of a reliable control group with no history of contact sports would have provided a PET tau SUVR cut-off that could be used to divide participants into groups with normal and elevated tau burden. Furthermore, making a comparison between the high and low PET tau groups by dropping the middle third of the cohort decreased the total number of participants significantly, reducing the power for that specific analysis. Another limitation is the solely neuropathological nature of the CTE diagnosis, leaving us unable to tell whether any of the participants have underlying CTE-related changes. The results of this study are thus generalizable to former professional and semi-professional sport athletes at high risk of concussions with no evidence of active neurodegenerative changes.

A further limitation of the current study is the lack of information regarding the race of the included participants. Previous studies reported differences in APOE allele frequencies between populations [Eto et al., 1986; Seet et al.; Tang et al.; Rajan et al., 2019], and APOE4 was found to be a determinant of AD risk in whites. Earlier studies reported that African Americans and Hispanics have an increased frequency of AD regardless of their APOE genotype; however, the most recent literature showed that APOE4 has a weak association with AD incidence amongst African Americans and Hispanics, in comparison to white populations [Tang et al.; Blue et al., 2019; Rajabli et al., 2018; Rajan et al., 2019]. With regard to MAPT, the H2 haplotype was reported to be almost exclusively Caucasian in origin [Evans et al.]. Finally, the use of cerebellar grey matter as a PET reference region has been widely studied and established for use in AD, but not in concussion. Cerebellar atrophy has been reported within a concussed cohort [Misquitta et al., 2018], and therefore the cerebellum might not be the ideal reference region in TBI cases. One study examining the [F-18]AV-1451 tracer in veterans with blast neurotrauma used a different reference region
(i.e., the isthmus of the cingulate) for its PET tau analysis [Robinson et al.], rather than the usual cerebellar reference region used in the athletes' PET studies described above [Robinson et al.; Mitsis et al., 2014; Dickstein et al.]. Further research is warranted in this area.

Overall, our results suggest a relationship between APOE4 and tau burden, as measured by [F-18]AV-1451, in the brains of athletes at risk for delayed neurodegeneration and CTE. A marked feature of CTE pathology is the abnormal aggregation of phosphorylated tau protein within the cortex in the form of NFTs. The increased tracer signal in the cortex of APOE4 carriers could signify a neurodegenerative process, and PET tau may be a biomarker for this process, but more research is needed to establish that.

Authors' roles

A.V. acquired the data, analysed the data, interpreted the data and drafted the manuscript for intellectual content. F.T. and C.B. analysed and interpreted the data. A.T., S.A.N., M.K., and R.G. had major roles in data acquisition. C.S. analysed and interpreted the data and revised the manuscript for intellectual content. M.G. and D.M. analysed and interpreted the data. R.W. and D.M. interpreted the data and revised the manuscript for intellectual content. R.B. and B.C. acquired and interpreted the data; R.B. also revised the manuscript for intellectual content. K.D.D., P.R., S.H. and E.G. interpreted the data and revised the manuscript for intellectual content. C.T. had a major role in acquisition of data, interpreted the data and revised the manuscript for intellectual content. M.C.T. had a major role in acquisition of data, interpreted the data, and drafted and revised the manuscript for intellectual content.

Declaration of Competing Interest

The authors report no conflicts of interest.
Composite Nafion-CaTiO3−δ Membranes as Electrolyte Component for PEM Fuel Cells

Manufacturing new electrolytes with high ionic conductivity has been a crucial challenge in the development and large-scale distribution of fuel cell devices. In this work, we present two Nafion composite membranes containing a non-stoichiometric calcium titanate perovskite (CaTiO3−δ) as a filler. These membranes are proposed as a proton exchange electrolyte for Polymer Electrolyte Membrane (PEM) fuel cell devices. More precisely, two different perovskite concentrations of 5 wt% and 10 wt%, with respect to Nafion, are considered. The structural, morphological, and chemical properties of the composite membranes are studied, revealing an inhomogeneous distribution of the filler within the polymer matrix. Direct methanol fuel cell (DMFC) tests, at 110 °C and 2 M methanol concentration, were also performed. It was observed that the membrane containing 5 wt% of the additive allows the highest cell performance in comparison to the other samples, with a maximum power density of about 70 mW cm−2 at 200 mA cm−2. Consequently, the ability of the perovskite structure to support proton carriers is confirmed here, suggesting an interesting strategy to obtain successful materials for electrochemical devices.

Introduction

For sustainable economic growth and environmental protection, energy delivered from renewable sources is indispensable. In this field, fuel cell technologies are considered a promising solution for a future clean energy environment. Among the different types of fuel cells, low-temperature polymer electrolyte membrane fuel cells (PEMFCs), including direct methanol fuel cells (DMFCs), offer several advantages compared to other systems in terms of low emission of pollutants and high energy conversion and efficiency. However, these devices present two major drawbacks, namely their high cost and low durability, that must be solved for large-scale application and commercialization [1,2]. In a typical low-temperature PEMFC design, perfluorosulfonic acid polymers are the state of the art for solid electrolytes [3]. The most used polymer for these devices is Nafion®, manufactured by DuPont, due to beneficial features such as high proton conductivity under fully hydrated conditions, suitable mechanical properties, high chemical and electrochemical stability, low fuel permeability, and electronic insulation [4,5]. In general, the proton conductivity of these membranes is approximately 10−1 S cm−1 at room temperature and under fully hydrated conditions. Consequently, an adequate humidification of the Nafion membrane is necessary to obtain a good proton conductivity. In proton-conducting oxides, by contrast, protonic defects are formed through the hydration of oxygen vacancies and, under oxidizing conditions, through the incorporation of oxygen:

H2O(g) + V_O•• + O_O^x → 2 OH_O•
(1/2) O2 + V_O•• → O_O^x + 2 h•

where V_O•• represents the oxygen vacancy, O_O^x is the oxygen in a regular crystal lattice site, OH_O• is the protonic defect, and h• is an electronic hole. Considering the formation of protonic defects as an amphoteric reaction, the oxide material can simultaneously act as an acid (absorption of a hydroxide ion by an oxygen vacancy) and as a base (protonation of lattice oxygen ions) [17]. The proton migration pathways within solid oxides are mainly characterized by lower activation barriers than those of oxygen ions, because of the protons' smaller mass, lower radius, and absence of an electron cloud. In a crystal lattice, protons are located close to their crystallographic position because of electrostatic attraction, and they can rotate and migrate between adjacent anions through the Grotthuss mechanism [18].
Consequently, in proton-conducting fuel cells, perovskite-type transition-metal oxides can be used as proton-conducting electrolytes, given their high mobility of protonic defects. In a previous work, we used a calcium titanate perovskite (CaTiO3−δ, CTO) as a water-retention and reinforcing additive in low-humidity Nafion membranes, obtaining, for the composite sample with a low concentration of filler, an improved protonic conductivity [19]. With the aim of better understanding the behavior of the additive in the composite Nafion membranes, the present work focuses on new results obtained from measurements of water uptake, methanol crossover, and ion exchange capacity, as well as from small- and wide-angle X-ray scattering (SAXS and WAXS), vibrational spectroscopy (Raman and infrared), and high-resolution field emission scanning electron microscopy (HR-FESEM).

Materials and Methods

A non-stoichiometric calcium titanate (CaTiO3−δ, CTO) perovskite was prepared by a solvothermal procedure, as reported in our previous papers [19,20], using Pluronic F127 both as a structure-directing agent and as a reducing component to obtain oxygen vacancies in the lattice of the perovskite. This procedure results in an orthorhombic CaTiO3−δ perovskite with an average crystallite size of about 145 nm. The specific surface area of the particles, determined by the Brunauer-Emmett-Teller (BET) method, was 6.6 ± 0.5 m2 g−1, and an amount of oxygen vacancies corresponding to δ ≈ 0.025 was obtained [20]. All Nafion membranes were prepared by a solvent-casting procedure [21]. The hydro-alcoholic solvents of a 5 wt.% Nafion ionomer solution (E.W. 1100, Ion Power Inc., München, Germany) were evaporated and replaced with N,N-dimethylacetamide (>99.5%, Sigma Aldrich, St. Louis, MO, USA) at 80 °C. Subsequently, the desired amount of CaTiO3−δ, to obtain weight ratios of 5% and 10%, was added to the mixture, which was then cast into a glass Petri dish and dried at 80 °C overnight. A plain, filler-free Nafion membrane was also prepared and used as an internal benchmark. The samples will be referred to as M5, M10 and N, respectively. The thickness of all samples was measured in the dry state, after removing them from the Petri dish and hot pressing (at 50 atm and 175 °C for 15 min), and resulted in the range of 90-110 µm. All as-formed membranes were finally pre-treated and purified in boiling 3 wt.% hydrogen peroxide (H2O2, 34.5%-36.5%, Sigma Aldrich, St. Louis, MO, USA), H2SO4 (0.5 M) and distilled water.

The composite membranes were evaluated in terms of Ion Exchange Capacity (IEC) and Water Uptake (W.U.), which are important parameters since they provide a direct measure of the number of available protons and of the hydration level, respectively. The IEC was evaluated by a titration method: all dry membranes were immersed in a NaCl aqueous solution, and the exchanged protons were neutralized with a standard solution of NaOH (0.1 M) [22]. The IEC error was estimated as the standard deviation over three different measurements. The total W.U. was evaluated at room temperature by a gravimetric method according to the following equation:

W.U. (%) = 100 × (W_wet − W_dry) / W_dry

where W_wet is the weight of the fully hydrated membranes, obtained by equilibrating the samples in a sealed container in the presence of water for two weeks, and W_dry is the weight of the dry membranes, measured after a night at 80 °C under vacuum. According to the literature [23], the above-mentioned properties can be used to estimate the value of λ, a parameter that quantifies the number of water molecules per sulfonic acid group.
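All three quantities reduce to simple arithmetic; the sketch below encodes the titration and gravimetric formulas together with the common literature relation λ = 10·W.U./(18·IEC), with IEC in meq g−1 (the input numbers are purely illustrative, not those of Table 1):

```python
def ion_exchange_capacity(v_naoh_ml, c_naoh_mol_l, w_dry_g):
    """IEC in meq per gram of dry membrane, from back-titration with NaOH."""
    return v_naoh_ml * c_naoh_mol_l / w_dry_g

def water_uptake(w_wet_g, w_dry_g):
    """Gravimetric water uptake, in percent of the dry weight."""
    return 100.0 * (w_wet_g - w_dry_g) / w_dry_g

def hydration_number(wu_percent, iec_meq_g):
    """lambda: water molecules per sulfonic acid group (M(H2O) = 18 g mol^-1)."""
    return 10.0 * wu_percent / (18.0 * iec_meq_g)

wu = water_uptake(w_wet_g=0.125, w_dry_g=0.100)   # 25 % uptake
print(hydration_number(wu, iec_meq_g=0.9))        # about 15 waters per -SO3H group
```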
Small-angle X-ray scattering combined with wide-angle X-ray scattering (SAXS-WAXS) measurements were used to understand the interactions between the filler and the Nafion matrix after hydration. In this case as well, the humidified membranes were obtained by equilibrating the samples in a closed container in the presence of water for 4 days. These measurements were performed using a Mat:Nordic instrument from SAXSLAB/Xenocs. The instrument was equipped with a micro-focus Cu X-ray source and a Dectris Pilatus 300K R detector. The entire beam path was evacuated to 0.2 mbar before each measurement to minimize air scattering. The membranes were sandwiched between two mica windows in a holder suitable for solid self-standing films. The SAXSGui software was used for processing the data, and the correlation distance between scattering objects was calculated using Bragg's diffraction law d = 2π/q.

Vibrational spectroscopy studies were carried out by Attenuated Total Reflectance-Fourier Transform Infrared (ATR-FTIR) and Raman spectroscopy, to examine the molecular interactions and chemical composition of all the membranes. Raman spectra were collected with an inVia Reflex spectrometer from Renishaw, and the analysis of the spectra was performed using the WIRE 5.0 software. ATR-FTIR spectra were collected with a PerkinElmer 2000 FT-IR spectrometer in attenuated total reflection mode using a ZnSe crystal. The spectral resolution was set to 1 cm−1, recording 64 scans for each sample at ambient temperature. The morphology of the Nafion membranes was evaluated by high-resolution field emission scanning electron microscopy (HR-FESEM), using an Auriga Zeiss instrument at the Interdepartmental Research Center on Nanotechnologies applied to Engineering (CNIS) of Sapienza University of Rome.

Methanol crossover measurements were carried out electrochemically in a typical fuel cell configuration [24]. Membrane-electrode assemblies (MEAs) were obtained by hot-pressing the Pt-based electrodes onto the prepared composite or filler-free membranes. Linear sweep voltammetry (LSV) was performed in the voltage range of 0-0.9 V with a scan rate of 2 mV s−1. A 2 M MeOH solution (3 mL min−1) was fed to one side of the cell, used as the counter/reference electrode, while N2 (100 mL min−1) was supplied to the other compartment (hence operating as the working electrode). Methanol crossing the membrane is oxidized at the working electrode, generating a positive current, which reaches a plateau when all methanol is converted to CO2 under steady-state conditions [25,26]. The same MEAs were used for the direct methanol fuel cell tests. The anode was made by mixing a 60% Pt-Ru/C catalyst (Alfa Aesar) and a 5% Nafion® solution (Ion Power) as the ionomer (33 wt.% with respect to the catalyst amount); for the cathodic catalytic layer, a mixture of 20% Pt/C (E-TEK) and Nafion® ionomer in a 67:33 ratio, as for the anode, was used. The layers were deposited by a doctor-blade technique onto the gas-diffusion backing layers, with a Pt loading of 1.5 mg cm−2 at the anode and 0.5 mg cm−2 at the cathode. The MEAs were tested in a 5 cm2 single-cell fixture (Fuel Cell Tech., Inc.).
Results and Discussion

The ion exchange capacity of the Nafion sample approaches its theoretical value of 9 × 10−4 eq g−1, considering that the Nafion polymer used for this research has an equivalent weight of 1100 g eq−1. None of the composite membranes shows a dramatic decrease of the IEC values compared to Nafion. Typically, composite systems show a significant decrease of their IEC due to an increase of density in the membrane caused by the presence of the inorganic particles, which can also leave fewer exchangeable protons available. In Table 1, the measured values of IEC, W.U. and λ are reported, revealing comparable properties for all samples.

Figure 1 shows the scattering intensity I(q) obtained by SAXS and WAXS measurements for all the membranes under investigation, in both dry and humidified conditions. In Figure 1a, the scattering profile of the neat, dry calcium titanate perovskite is shown for comparative purposes (green trace) and to justify the slope observed at lower q values for the composite Nafion membranes. In these plots, two main peaks emerge, which give information on the shape and size of the scattering objects in the Nafion membranes. The first peak, also called the matrix knee, corresponds to the fluorocarbon polymer crystallites randomly distributed in the amorphous polymer matrix. Here, it is observed at q values close to 0.06 Å−1, and its intensity depends on the polymer's degree of crystallinity [27]. The second peak, also known as the ionomer peak, is observed here in the q range 0.2-0.3 Å−1 and arises from the local ordering of the ionic domains within the polymer. From the position and intensity of the ionomer peak it is possible to obtain information about the hydration degree of the Nafion sample, since it is related to the periodicity of the water channels within the membrane. As a note, the Nafion membrane's structure can be described as rod-like ionic domains that expand radially upon hydration; this model is considered a more appropriate description than Gierke's clustering model [28], which considers spherical hydrated ionic clusters connected by narrower channels only 1 nm wide.

In order to evaluate the hydration state of the Nafion membranes, the correlation distance d was calculated from Bragg's diffraction law d = 2π/q, where q is the center of the ionomer scattering peak. In Table 2, the values of d and q are reported for all samples in both investigated conditions. As can be seen from Table 2, all Nafion membranes (undoped and doped) in the dry state display scattering peaks at equivalent positions and hence comparable d values. However, when the membranes are subjected to hydration, the position of the ionomer peak shifts systematically to lower q values, which is particularly evident for the two composite membranes. These values reflect an increased size of the ionic domains as a result of the absorbed water. Among the investigated samples, M10 shows the highest d value, indicating a higher degree of hydration, in contrast with the W.U. values obtained gravimetrically. Anyhow, our previous results based on calorimetric measurements and dielectric spectroscopy [19] revealed that the M5 sample displayed the highest water affinity and conductivity, suggesting that an intermediate filler concentration (i.e., 5 wt.%) could be the best compromise. It can be assumed that the presence of oxygen vacancies in the perovskite structure guarantees the water trapping, but a high dose of CTO (i.e., 10 wt.%) in the Nafion matrix can cause a lack of the relative interactions between polymer, additive and water needed to ensure the mobility of the protons. It is worth noting from the trends in Figure 1 that the matrix knee peak is still visible in the M5 sample, whereas it almost disappears in M10.
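Converting an ionomer-peak position into the correlation distance is a one-line application of Bragg's law d = 2π/q; the peak positions below are illustrative values within the 0.2-0.3 Å−1 range mentioned above (the measured positions are those of Table 2):

```python
import math

def bragg_spacing_nm(q_inv_angstrom):
    """Correlation distance d = 2*pi/q, converted from Angstrom to nm."""
    return 2.0 * math.pi / q_inv_angstrom / 10.0

for label, q in {"dry (illustrative)": 0.28, "hydrated (illustrative)": 0.22}.items():
    print(f"{label}: d = {bragg_spacing_nm(q):.2f} nm")
```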
Figure 2 shows the FTIR spectra of a Nafion membrane in the dry state and of the neat calcium titanate perovskite. In the FTIR spectra of Figure 2, the following vibrational modes can be distinguished: the hydrophobic fluorocarbon chains of Nafion (1300-1000 cm−1) and the perfluoroether side chains of Nafion (1000-940 cm−1), as well as the Ti-O stretching vibration and the Ti-O-Ti bridging stretching modes of the inorganic component (430-560 cm−1). These spectral features are in agreement with those observed in the IR spectra of CaTiO3 already reported by other authors [29]. In the low-wavenumber range, the peaks arising from the inorganic additive overlap with those of Nafion. The assignment of the main peaks of Nafion is given in Table 3 [30]. The νs(SO3−) mode at 1059 cm−1 was chosen for the intensity normalization of all IR spectra. Moreover, the spectrum of the Nafion membrane exhibits broad features just above 1600 cm−1 and at about 3500 cm−1 that are related to the bending and stretching modes of water still present despite the drying treatment at 80 °C prior to measurement. Interestingly, for the composite membranes, the IR spectra show some differences depending on the side of the membrane analyzed. As shown in Figure 3 for the case of sample M5 (the composite membrane M10 displays the same behavior and is not reported here), strong differences are observed when analyzing the two opposite sides of the membrane: the peaks assigned to Nafion appear more intense on one side (here called side A), whereas the peaks related to the perovskite are more pronounced on the opposite side (here called side B). As already mentioned in the literature, during the solvent-casting process the filler can develop a concentration profile [31], making the Nafion membrane not completely uniform in terms of dispersion of the additive, especially in the region closest to the surface.

Raman spectra were collected over selected areas with step increases of 1 µm. A color-coded image could then be obtained by mapping the intensity of selected peaks, in this case the peak at 732 cm−1, representative of the presence of Nafion, and the peak at 250 cm−1, representative of the presence of CaTiO3. As evident from Figure 4, in the composite membranes some micrometer-sized CaTiO3 aggregates tend to form in the bulk of the membrane, while a distinct CaTiO3 layer is formed close to one of the surfaces. The Raman spectrum of the M5 sample, obtained as an average of the spectra recorded over the selected area, is shown in Figure 5; it was normalized with respect to the band peaking at 732 cm−1, which corresponds to the ν(CF2) mode of Nafion, as reported in Table 4. As discussed for the IR spectra, in the Raman spectra as well all the peaks due to the calcium titanate perovskite overlap with those of Nafion, except for a weak one at ca. 250 cm−1, assigned to the O-Ti-O bending mode [32]. The major characteristic vibrational bands of the polymer are assigned as given in Table 4 [33].
However, some important differences between the plain Nafion and the composite M5 membrane have been noticed, namely the absence in the composite membrane of the bands at about 398 and 410 cm−1 related to the CF2 stretching vibration (marked with a green circle), which is the signature of an important phase rearrangement induced by the calcium titanate component incorporated into the Nafion membrane.

The functionality of the membranes proposed here was tested with respect to direct methanol fuel cell (DMFC) applications. In DMFCs, methanol crossover is a critical issue, being responsible for cathode catalyst poisoning and causing about a 30% performance reduction in terms of fuel cell efficiency [34]. Nafion possesses high methanol permeability because of its hydrophilic channels, but hosting suitable additives within these channels would significantly reduce the crossover. In accordance with the literature, in the composite approach the extent of methanol crossover largely depends on the distribution of the fillers and on their effective interaction with the polymer matrix [35]. A uniform distribution of fillers in Nafion membranes reduces the size of the channels available for methanol passage, whereas particle agglomeration has a negative impact on methanol crossover. Figure 6 shows the methanol crossover behavior of the two composite membranes, compared with a filler-free Nafion membrane of similar thickness, over a wide range of temperatures (from 30 to 90 °C), feeding a 2 M methanol solution to one side of the cell used as the counter/reference electrode. Methanol crossing the membrane is oxidized at the other (working) electrode, generating a positive current, which reaches a plateau when all methanol is converted to CO2. Unfortunately, as observed in Figure 6, methanol crossover slightly increases after modification of the Nafion membrane with CaTiO3−δ particles. The value of the current density represents the amount of methanol passing through the membrane; it increases with temperature, due to the higher mobility of methanol. At 60 °C, the value of the so-called crossover current density for all membranes is in the range of 130-160 mA cm−2, lower than that of the state-of-the-art Nafion 115 (125 µm in thickness), which showed a 195 mA cm−2 crossover current in a previous work [36]. At 90 °C a further increase of the methanol crossover has been reported for the Nafion 115 membrane (380 mA cm−2) [37], whereas the filler-free and CTO composite membranes present lower values, in the range of 180-220 mA cm−2, with the filler-free membrane having the lowest methanol crossover.
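Under the steady-state assumption stated above (all permeated methanol fully oxidized to CO2, i.e. six electrons per molecule), the limiting crossover current density converts to a methanol flux through Faraday's law, J = i/(6F). The sketch below applies this conversion; the pairing of specific current values with specific membranes is only meant to illustrate the 180-220 mA cm−2 range at 90 °C:

```python
F = 96485.0   # Faraday constant, C mol^-1
N_E = 6       # electrons per methanol molecule oxidized to CO2

def methanol_flux(i_lim_mA_cm2):
    """Methanol crossover flux (mol s^-1 cm^-2) from the limiting current density."""
    return i_lim_mA_cm2 * 1e-3 / (N_E * F)

for label, i_lim in {"filler-free N": 180, "M5": 200, "M10": 220}.items():
    print(f"{label}: {methanol_flux(i_lim):.2e} mol s^-1 cm^-2")
```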
The composite membranes were designed to allow operation of a DMFC (or a PEFC) at temperatures higher than 90-100 °C [38,39], reducing the drawback of membrane dehydration at those temperatures, with its consequent loss of conductivity. The influence of the CTO filler on the electrochemical behavior of the membrane in a DMFC at 110 °C has therefore been evaluated, and the results are reported in Figure 7. As observed, the composite membrane with the 5% CTO additive provided the highest fuel cell performance among the different membranes, confirming that at temperatures higher than 100 °C the filler assists the proton conduction mechanism, in particular in the presence of a lower water content, and also enhances the mechanical properties and the stability of the Nafion membrane, as reported in a previous paper [19]. Several studies confirm that good mechanical properties are crucial to obtain better fuel cell performance, especially at high temperature [40,41]. Consequently, the role of the calcium titanate perovskite as a water-retention and reinforcing additive has been confirmed. The presence of the oxygen vacancies in the structure of CTO can play a key role, providing active sites for the absorption of water. Furthermore, a distortion of the crystal lattice can influence the energetic properties of the oxygen ions in terms of different binding energies of the protons [14], with a consequent increase of the proton mobility and conductivity.

Unfortunately, a large amount of CTO (10%) in the polymer is not beneficial, probably due to the large presence of agglomerates and thus of defects in the membrane morphology (presence of micro-holes), which cause an increase of the methanol crossover and a decrease of the performance. This can be better understood from the SEM images (Figure 8), revealing a surface morphology of the composite Nafion membranes in which the additive particles are not uniformly distributed, giving rise to two different sides, as also demonstrated by Raman spectroscopy. In particular, comparing the two composite membranes on the perovskite-rich side, it is remarkable that in M5 the perovskite mixes uniformly with Nafion on the surface of the membrane, whereas in the M10 sample the presence of the perovskite is so pronounced as to completely hide the Nafion component. At this point, the existence of a critical filler concentration that influences the morphology of the composite membrane is clear. Furthermore, as reported in previous studies, when excess additive begins to form agglomerates in the composite membrane, the hydrophobic polymer backbone rearranges around the hydrophilic ionic clusters, opening pathways for methanol permeation and increasing the overall permeability [42].

In our case, the poor performance in terms of methanol crossover can be further explained by considering another characteristic aspect of the calcium titanate additive, i.e., the presence of oxygen vacancies in the lattice of the perovskite. Oxygen defects have been considered as a strategy to increase the number of active sites able to adsorb oxygen-containing species, such as methanol molecules. Yang et al. [43] reported a study on an ultra-thin nickel oxide rich in oxygen vacancies, demonstrating that the presence of these oxygen defects can enhance the methanol oxidation reaction (MOR) performance. Furthermore, as reported in our previous work [20], CTO was shown to improve methanol oxidation when used as an additive to a Pt/C catalyst, confirming a promoting effect of the perovskite for this reaction. Consequently, the capability of the oxygen vacancies to interact with methanol molecules can support the methanol permeation through the Nafion membranes, justifying the results obtained. Overall, in view of DMFC applications, the non-uniform additive distribution can be overcome by reducing the inorganic particle size and/or by properly modifying the membrane preparation to obtain thinner and more uniform CTO/Nafion composites (e.g., by moving from a solvent-casting procedure to spray coating or thin-film deposition methods). Alternatively, and as an interesting original approach, the additive-rich side can be conveniently placed at the cathode interface of a DMFC to increase the reactivity towards the oxygen reduction reaction (ORR).
Conclusions

In the present work, two composite membranes based on a Nafion polymer matrix incorporating a CaTiO3−δ additive have been proposed and characterized. A detailed analysis in terms of structure and morphology was performed, and the applicability in DMFCs was also tested. An increase of the methanol crossover was observed in the composite membranes, most likely due both to the non-uniform distribution of the filler within the polymer and to the presence of oxygen vacancies in the lattice of the perovskite, which can promote the methanol permeation through the Nafion membrane. From both ATR and Raman vibrational spectroscopy, as well as from the SEM images, two sides with different morphology and filler concentration have been identified in the composite systems. The issues of inhomogeneous dispersion and non-optimized filler-to-polymer interactions appear particularly critical in this work and are believed to prevent the beneficial, functional effect of the oxygen vacancies in the perovskite structure. Moreover, an excessively high filler concentration could obstruct the ionic channels and impede ionic motion, affecting the performance of the composite membranes during fuel cell operation. Nevertheless, the reported results indicate that composite membranes, obtained by adding a suitable concentration of non-stoichiometric calcium titanate to a Nafion matrix, display interesting properties that ought to be considered for fuel cell applications. Indeed, improved performance, in terms of delivered current and power, was observed when using the 5 wt% CTO-added Nafion membrane in a DMFC operating at 110 °C.
Quantum generic Toda system

The Toda chains occupy a particular place in the theory of integrable systems: in contrast with the linear group structure underlying the Gaudin model, this system is related to the corresponding Borel group and, mediately, to the geometry of flag varieties. The main goal of this paper is to reconstruct a "spectral curve" in the wider context of the generic Toda system. This appears to be an efficient way to find its quantization, which is obtained here by the technique of the quantum characteristic polynomial for the Gaudin model and an appropriate AKS reduction. We also discuss some relations of this result with the recent considerations of the Drinfeld Zastava space, the monopole space and the corresponding Borel Yangian symmetries.

Introduction

The subject of this work is a very particular example of an integrable system: the generic Toda system related to the A_n root system. The method used here is based on the concept of the spectral curve, on both the classical and the quantum level. The method of the spectral curve, and more generally the algebraic-geometric methods in integrable systems, provide an intriguingly effective and universal way of describing, solving and quantizing dynamical systems. This work was challenged by the initial construction of the commutative family [1], which is far from being the space of spectral invariants of some evolving linear operator.

Let us recall the spectral curve construction in the open and periodic Toda chains, due to [5]. The open Toda chain is defined by the Hamiltonian function

H = (1/2) Σ_{k=1}^{n} p_k^2 + Σ_{k=1}^{n−1} e^{q_k − q_{k+1}}

and the canonical Poisson brackets on the variables p_k, q_l. It has a Lax representation with the Lax operator (written here in a standard normalization covering both the open and the periodic case)

L(w) = Σ_k v_k E_{kk} + Σ_{k=1}^{n−1} c_k (E_{k,k+1} + E_{k+1,k}) + c_n (w E_{n,1} + w^{−1} E_{1,n}),

where c_k = e^{(q_k − q_{k+1})/2}, v_k = −p_k, and the open chain corresponds to the degeneration c_n → 0. Let us remark that this Lax representation is not unique for the open chain, but it unifies the spectral curve technique in the open and periodic cases. The commutative family is defined by the coefficients of the characteristic polynomial det(L(w) − λ) = 0, which in turn defines a rational curve. This curve can be interpreted as a limit of a hyperelliptic curve in the periodic case.

In fact the open chain is a limit of a system in a much wider setup: the generic Toda system. We just outline here the main strategy. We start by introducing a generating function for the classical integrals of the generic Toda system. This function appears as a limit of the classical characteristic polynomial for the Gaudin model with a particular choice of magnetic term. Then, remarking that this family is invariant with respect to the Borel group action, we realize the AKS reduction with respect to the decomposition gl_n = b ⊕ so_n. This idea generalizes to the quantum level. We use the same elements: we consider the quantum Gaudin model with a particular magnetic term, consider a certain limit of it, demonstrate the invariance of the resulting commutative family with respect to the Borel group action, and realize the quantum AKS reduction.

2 Spectral curve for the classical system

Definition

The generic Toda system for the Lie algebra gl_n is obtained in terms of the so-called chopping procedure. Let us consider a symmetric matrix A whose elements are generators of the Borel subalgebra, A_{ij} = A_{ji} = e_{ij} for i ≥ j, where E_{ij} are the generators of End(C^n) and e_{ij}, for i ≥ j, are the generators of the Lie algebra b. The matrix coefficients are interpreted as functions on the dual space b* of the Lie algebra, which is a Poisson space with the Kirillov-Kostant Poisson bracket. Let us also define the partial matrices A_k(λ), obtained by deleting the k rightmost columns and the k upper rows of the matrix A − λ·Id, and their determinants ∆_k(λ) = det A_k(λ). By the result of [1], the complete set of roots of all these polynomials constitutes a commutative family. An alternative way to define this family is with the help of ratios of the coefficients of ∆_k(λ). One can use the fact that the leading coefficient in λ of ∆_k(λ) is ∆_{n−k}(λ) = ∆_{n−k}, which does not depend on λ. Hence one can introduce a family of characteristic polynomials P_k(λ) = ∆_k(λ)/∆_{n−k}, k = 0, . . . , n/2.
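A small symbolic check of the chopping construction for n = 3 (our illustration, written with sympy and treating the generators e_ij as commuting symbols, which suffices at the level of S(b)); it verifies that the leading coefficient in λ of ∆_1(λ) is ∆_2:

```python
import sympy as sp

n = 3
lam = sp.symbols("lamda")
# Generators e_ij (i >= j) of the lower Borel subalgebra, as commuting symbols
e = {(i, j): sp.symbols(f"e{i}{j}") for i in range(1, n + 1) for j in range(1, i + 1)}
# Symmetric matrix A with A_ij = A_ji = e_ij for i >= j
A = sp.Matrix(n, n, lambda i, j: e[(max(i, j) + 1, min(i, j) + 1)])

def Delta(k):
    """det of (A - lam*Id) with k upper rows and k rightmost columns deleted."""
    M = A - lam * sp.eye(n)
    return sp.expand(M[k:, : n - k].det())

D1, D2 = Delta(1), Delta(2)         # deg(D1) = n - 2k = 1; D2 = e31 is a constant
print(sp.Poly(D1, lam).LC() == D2)  # True: leading coefficient of Delta_1 is Delta_2
```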
Let us also define the partial matrices $A_k(\lambda)$, obtained by deleting the $k$ rightmost columns and the $k$ top rows of the matrix $A - \lambda\,\mathrm{Id}$. By the result of [1], the complete set of roots of all the polynomials $\Delta_k(\lambda) = \det A_k(\lambda)$ constitutes a commutative family. An alternative way to define this family is via ratios of the coefficients of $\Delta_k(\lambda)$. One can use the fact that the leading coefficient in $\lambda$ of $\Delta_k(\lambda)$ is the constant $\Delta_{n-k}$. Hence one can introduce a family of characteristic polynomials $P_k(\lambda) = \Delta_k(\lambda)/\Delta_{n-k}$, $k = 0, \ldots, n/2$.

Generating function

Let us consider the following matrix $A$ corresponding to the complete Lie algebra $\mathfrak{gl}_n$. One can arrange the coefficients of the minors into a generating series, where the $I_k(z, \lambda, \varepsilon)$ are homogeneous in $(z^{-1}, \lambda)$ of degree $n - k$.

Remark 1. This construction demonstrates, for example, that the minors commute with respect to the Kirillov-Kostant bracket on $S(\mathfrak{gl}_n)$. Indeed, this algebra is a limiting case of the Poisson-commutative algebra obtained by the argument-shift method, or equivalently by considering the corresponding Gaudin model.

AKS scheme

Let us recall one of the central concepts of integrable systems theory, the Adler-Kostant-Symes scheme. We use the following variant: let $\mathfrak{g}$ be a Lie algebra represented as the direct sum of two subalgebras, $\mathfrak{g} = \mathfrak{g}_+ \oplus \mathfrak{g}_-$. The symmetric algebra $S(\mathfrak{g})$ is always considered as an algebra of functions on $\mathfrak{g}^*$. There is a natural projection map $i$ related to the decomposition of the symmetric algebra. Let us remark that $\mathfrak{g}_- S(\mathfrak{g})$ is a Lie subalgebra. The map $i$ can be interpreted as the restriction to the space $\mathrm{Ann}(\mathfrak{g}_-) \subset \mathfrak{g}^*$. Although this map is not in general Poisson, it preserves, in a sense, an integrability property.

Lemma 1. Let $f, h \in S(\mathfrak{g})$ be invariant with respect to the $\mathfrak{g}_+$ Lie algebra action and commute with respect to the Kirillov-Kostant bracket. Then their images $i(f), i(h)$ commute with respect to the bracket in $S(\mathfrak{g}_+)$.

Proof. Let us consider $f, h \in S(\mathfrak{g})$ decomposed subject to (5). Both sides take values in different direct summands of (5) and hence vanish.

Remark 2. In this section we need a rational generalization of this statement, i.e. the case where the Poisson algebra is the field of rational functions on the dual space to the Lie algebra. It is a straightforward generalization and we omit it here; in any case it follows from the corresponding statement on the quantum level.

Invariance

We will show that the ratios $\Delta_k(\lambda)/\Delta_{n-k}$ are invariant with respect to the Borel subgroup of lower-triangular matrices $B \subset SL(n)$. Let us first show that the action of the group on the functions $e_{ij}$ can be expressed in terms of the action on the Lax operator $A$. The action of the group on the coefficients of the characteristic polynomial is expressed as follows. Let us consider an element of the Borel group $g = \exp(te_{j,i}) = 1 + te_{j,i}$, $j > i$. Its action on the matrix $\Omega_\varepsilon$ is expressed as follows. This matrix satisfies the property that the lowest term in $\varepsilon$ in each row and in each column lies on the antidiagonal. This shows, by the way, that the lowest term in $\varepsilon$ of the characteristic polynomial (6) is the same as in the non-deformed one.

Let us now show that the Cartan subgroup acts by a character. Let us consider an element of $B$, $g = \exp(te_{i,i})$, and its action. This observation allows us to conclude that the antidiagonal terms of $\Omega_\varepsilon$ are multiplied by scalars, and this affects the asymptotics in the following manner, where $\chi_k(g)$ is the corresponding character.
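Since the displayed formulas for the partial determinants did not survive extraction, the following sympy sketch (our reading of the verbal description; the index and sign conventions are assumptions) makes the chopping procedure concrete: $\Delta_k(\lambda)$ is the determinant of $A - \lambda\,\mathrm{Id}$ with the $k$ top rows and the $k$ rightmost columns deleted, a polynomial of degree $n - 2k$ whose leading coefficient is constant in $\lambda$.

import sympy as sp

n = 4
lam = sp.Symbol('lambda')
# Symmetric matrix of Borel generators: entry (i, j) is e_{ij} for i >= j,
# mirrored above the diagonal.
e = {(i, j): sp.Symbol(f'e{i}{j}') for i in range(n) for j in range(i + 1)}
A = sp.Matrix(n, n, lambda i, j: e[(i, j)] if i >= j else e[(j, i)])
M = A - lam * sp.eye(n)

def delta(k):
    """Delta_k(lambda): drop the k top rows and the k rightmost columns."""
    return sp.expand(M.extract(list(range(k, n)), list(range(n - k))).det())

for k in range(n // 2 + 1):
    d = sp.Poly(delta(k), lam)
    # Degree n - 2k; the leading coefficient is a lambda-free minor
    # (+/- Delta_{n-k}), which normalizes the ratio P_k = Delta_k / Delta_{n-k}.
    print(k, d.degree(), d.LC())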
Hence we have demonstrated the following.

Remark 3. In fact there is a wider invariance in this context. One can consider a series of parabolic subalgebras $\mathfrak{b} \subset \mathfrak{p}_1 \subset \ldots \subset \mathfrak{p}_n = \mathfrak{sl}_n$ such that $\mathfrak{p}_k$ is generated by $\mathfrak{b}$ and the positive generators corresponding to the roots $\alpha_k, \ldots, \alpha_{n-k-1}$. Let us also consider the corresponding series of parabolic groups $B \subset P_1 \subset \ldots \subset P_n = SL_n$.

Quantization

The quantum model is constructed by considering a special limit of the Gaudin commutative algebra related to the 3-point case, also known as the argument-shift construction. We also demonstrate that this subalgebra is invariant with respect to the $B$-action on the universal enveloping algebra $U(\mathfrak{sl}_n)$ and provide a quantum analog of the AKS construction, which produces a commutative algebra in $U(\mathfrak{b})$.

Noncommutative determinant

Let us consider a matrix $B = \sum_{ij} E_{ij} \otimes B_{ij}$ whose entries are elements of some associative algebra, $B_{ij} \in R$. We will use the following definition for the noncommutative determinant in this case. There is an equivalent definition: introducing the operator $A_n$ of antisymmetrization in $(\mathbb{C}^n)^{\otimes n}$, the definition above is equivalent to
$$\det(B) = \mathrm{Tr}_{1,\ldots,n}\, A_n B_1 \cdots B_n,$$
where $B_k$ denotes the operator in $\mathrm{End}(\mathbb{C}^n)^{\otimes n} \otimes R$ given by $B$ acting in the $k$-th tensor factor, and the trace is taken over $\mathrm{End}(\mathbb{C}^n)^{\otimes n}$.

Quantum spectral curve

We consider the same matrix $A$ as in the classical case but interpret its elements as generators of $U(\mathfrak{gl}_n)$. Let us consider the generating series whose coefficients $QI_k(z, \partial_z, \varepsilon)$ are homogeneous in $(z^{-1}, \partial_z)$. The commutative algebra in $U(\mathfrak{gl}_n)$ is generated by the coefficients of these series. These operators are the highest terms in the $\varepsilon$-expansion of the corresponding Gaudin Hamiltonians and hence commute. The general statement for the Gaudin model is proved in [2]; the case with a magnetic field is analyzed in [8].

Theorem 2. The set $\{QI_{k,i}\} \subset U(\mathfrak{gl}_n)$ generates a commutative algebra $H_q$ which quantizes the Poisson-commutative subalgebra in $S(\mathfrak{gl}_n)$ generated by the $I_{k,i}$.

Quantum invariance

The subject of this part is the commutative subalgebra in $U(\mathfrak{b})$ quantizing the subalgebra of Hamiltonians of the generic Toda chain. We proceed by the same strategy as in the classical case: we will find invariants with respect to the action of the Borel subgroup $B$ on some localization of $U(\mathfrak{b})$. Let us define the decomposition problem in the quantum case. We always consider the decomposition of the Lie algebra which transforms into the following one on the level of universal enveloping algebras, given by choosing the normal ordering. Let us denote by $a_+$ the projection of $a \in U(\mathfrak{sl}_n)$ to the second summand.

Let us demonstrate the specific invariance property of the quantum commutative family. The action of a group element $g \in B$ on $QI_{k,i}$ can be realized in terms of the action on the quantum Lax operator, which is the same as in the classical case; hence the question is reduced to the properties of the matrix $\Omega_\varepsilon$. To prove (9) we need the following lemma.

Then
$$\det(gLg^{-1}) = \mathrm{Tr}_{1,\ldots,n}\, A_n\, g_1 L_1 g_1^{-1} g_2 L_2 g_2^{-1} \cdots g_n L_n g_n^{-1} = \mathrm{Tr}_{1,\ldots,n}\, A_n\, g_1 \cdots g_n L_1 \cdots L_n g_1^{-1} \cdots g_n^{-1} = \mathrm{Tr}_{1,\ldots,n}\, g_1^{-1} \cdots g_n^{-1} A_n\, g_1 \cdots g_n L_1 \cdots L_n = \mathrm{Tr}_{1,\ldots,n}\, A_n L_1 \cdots L_n = \det(L). \quad (10)$$

Here we have used the fact that $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$ for matrices with commuting entries, $[A_{ij}, B_{kl}] = 0$, and the fact that the action of the symmetric group algebra commutes with the diagonal linear group action on the tensor product of vector representations.
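The trace formula for the determinant used in (10) can be checked numerically in the commuting case. The following numpy sketch (ours, for illustration only) builds the antisymmetrizer $A_n$ on $(\mathbb{C}^n)^{\otimes n}$ and verifies that $\mathrm{Tr}_{1,\ldots,n} A_n L_1 \cdots L_n = \det(L)$ for an ordinary numerical matrix $L$.

import itertools
import math
import numpy as np

n = 3
rng = np.random.default_rng(0)
L = rng.standard_normal((n, n))
dim = n ** n   # dimension of (C^n)^{tensor n}

def perm_operator(sigma):
    """Operator permuting the n tensor factors of (C^n)^{tensor n}."""
    P = np.zeros((dim, dim))
    for idx in itertools.product(range(n), repeat=n):
        src = np.ravel_multi_index(idx, (n,) * n)
        dst = np.ravel_multi_index(tuple(idx[sigma[k]] for k in range(n)), (n,) * n)
        P[dst, src] = 1.0
    return P

def sign(sigma):
    """Sign of a permutation via its cycle decomposition."""
    s, seen = 1, set()
    for i in range(len(sigma)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = sigma[j]
            length += 1
        s *= (-1) ** (length - 1)
    return s

A_n = sum(sign(s) * perm_operator(s)
          for s in itertools.permutations(range(n))) / math.factorial(n)

def embed(M, k):
    """M acting in the k-th tensor factor, identity elsewhere."""
    out = np.array([[1.0]])
    for slot in range(n):
        out = np.kron(out, M if slot == k else np.eye(n))
    return out

prod = A_n
for k in range(n):
    prod = prod @ embed(L, k)

print(np.trace(prod), np.linalg.det(L))  # the two numbers coincide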
Quantum AKS lemma

We have demonstrated that the group $B$ acts on $\Delta_k(z, \partial_z)$ by a character $\chi_k(b)$, where $b \in B$. Let us denote by the same letter the corresponding character of the Lie algebra, $\chi_k(X)$ for $X \in \mathfrak{b}$. Let us also introduce a notation $\eta_a(m)$ for $a = \Delta_{k,i}$ and $m \in U(\mathfrak{b})$ such that $ma = a\,\eta_k(m)$, for $m$ a monomial $m = b\cdots$

Lemma 4. Let us consider decompositions of the type (8) of two elements of our commutative algebra. Then

Remark 5. This is an analog of the AKS lemma.

Proof. Let us use the commutativity condition. The positive part of (12) equals

We are interested in the localization of $U(\mathfrak{b})$ at the multiplicative set $S$ generated by $\{QI_{k,n-k}\}$, the set of highest terms of all partial quantum characteristic polynomials.

Lemma 5. $S$ is a right Ore set. Let us recall the Ore requirements: $S \subset A$ is a right Ore set if
1. for all $s \in S$ and $a \in A$ there exist $s' \in S$ and $a' \in A$ such that $sa' = as'$;
2. for all $a_1, a_2 \in A$ and all $s \in S$: $(sa_1 = sa_2) \Rightarrow (\exists s' \in S : a_1s' = a_2s')$.
The second condition is trivial because the algebra $U(\mathfrak{sl}_n)$ has no zero divisors. The first condition is fulfilled due to (11).

Theorem 3. Consider the localization $\mathrm{loc}_S U(\mathfrak{b})$. Then the ratios $(QI_{k,i})_+/QI_{k,n-k}$ generate a commutative subalgebra, which is a quantization of the classical generic Toda subalgebra.

Proof. Let us use the notations $a = QI_{k,i}$, $b = QI_{m,j}$, $c = QI_{k,n-k}$, $d = QI_{m,n-m}$. This is zero due to Lemma 4 and the commutativity of $d$ and $c$ in both algebras.

The space $L_d$ has a natural Poisson bracket given by the $R$-matrix structure, which allows one to perform an AKS reduction analogous to the generic Toda case. As a result one obtains an integrable system on the space $M_d$, due to the invariance of the constructed integrals with respect to the Borel group action. We suppose that the quantization technique of [3] provides the same quantization of the generic Toda chain as the one obtained by our approach.
2010-12-15T11:57:56.000Z
2010-12-15T00:00:00.000
{ "year": 2010, "sha1": "7b24b67f4da6fe156924e77f426319741952e1a5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7b24b67f4da6fe156924e77f426319741952e1a5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
225165450
pes2o/s2orc
v3-fos-license
Retrieval of Aerosol Fine-mode Fraction over China from Satellite Multiangle Polarized Observations: Validation and Application

The aerosol fine-mode fraction (FMF) is an important optical parameter of aerosols, and it is difficult to retrieve accurately with traditional satellite remote sensing methods. In this study, FMF retrieval was carried out based on the multiangle polarization data of Polarization and Anisotropy of Reflectances for Atmospheric Science coupled with Observations from Lidar (PARASOL), overcoming the shortcomings of the FMF retrieval algorithm in our previous research. The retrieval was carried out over China and compared with AErosol RObotic NETwork (AERONET) ground-based observations, Moderate Resolution Imaging Spectroradiometer (MODIS) FMF products, and Generalized Retrieval of Aerosol and Surface Properties (GRASP) FMF results. In addition, the FMF retrieval algorithm was applied to produce a new FMF dataset, and annual and quarterly average FMF results from 2006 to 2013 were obtained for all of China. The results show that the FMF retrievals of this study are comparable with the AERONET ground-based observations in China, with a correlation coefficient (r), mean absolute error (MAE), root mean square error (RMSE), and proportion of results that fall within the expected error (Within EE) of 0.770, 0.143, 0.170, and 60.96%, respectively. Compared with the MODIS FMF products, the FMF results of this study are closer to the AERONET ground-based observations. Compared with the FMF results of GRASP, the FMF results of this study are closer to the spatial variation in the ratio of PM2.5 to PM10 near the ground. The analysis of the annual and seasonal average FMF of China from 2006 to 2013 shows that the high-FMF area in China is mainly confined to the region east of the "Hu Line", with the highest-FMF year being 2013 and the highest-FMF season being winter.

observations (Zhang et al., 2016), mainly based on multiangle scalar observations to obtain the total aerosol optical depth (AODt) and multiangle polarization observations to obtain the fine-mode AOD (AODf); the ratio of the two is the FMF. Compared with the existing MODIS FMF products, the accuracy of the FMF results obtained by this method is significantly improved, which shows the feasibility of the method. However, some problems still need to be solved if this method is to be applied over large areas. For example, the empirical parameters of the surface reflectance estimation during the scalar retrieval vary greatly with region, so high-precision AODt retrieval results can only be obtained in specific regions. In the polarization retrieval, there is a problem of low retrieved values under high aerosol loading (Chen et al., 2015; Zhang et al., 2018). In response to these problems, we have carried out follow-up research, made improvements on the above problems, and achieved more accurate AODt and AODf over a large area (Zhang et al., 2017; Zhang et al., 2018). In theory, it is then possible to achieve the goal of retrieving FMF over a large area. Although Yan et al. achieved high-precision FMF retrieval based on the LUT-SDA method (Yan et al., …)

For the retrieval of AODt, we introduced the empirical orthogonal function (EOF) to estimate the surface reflection contribution under multiangle observations, in order to overcome the regional limitation of the semiempirical surface parameters of the original method.
Subsequently, this is combined with the retrieval lookup table and substituted into the forward model for the simulation calculation, and finally AODt is obtained through the cost function. The correlation coefficient (r) and root mean square error (RMSE) between the obtained AODt and the AERONET ground-based observations are 0.891 and 0.097, respectively. For more details about the EOF method, please refer to our 2017 study (Zhang et al., 2017).

For the retrieval of AODf, our research and that of other scholars have shown that the AODf results obtained with the official LOA algorithm have a certain deviation from ground-based observations. To improve the retrieval accuracy of AODf, we proposed the Grouped Residual Error Sorting (GRES) method in 2018 to solve the problem of an inaccurate evaluation function caused by error accumulation under multiangle observation. Based on this method, combined with a bidirectional polarized surface reflectance (BPDF) model to estimate the polarized surface reflectance (Nadal and Bréon, 1999), we obtained higher-precision AODf results in eastern China; the r and RMSE between the results and the AERONET ground-based observations are 0.931 and 0.042, respectively. More method details can be found in our research published in 2018 (Zhang et al., 2018).

Based on the new retrieval method, we have obtained higher-precision AODt and AODf retrieval results on a large spatial scale, which also makes it possible to obtain accurate FMF results on a large spatial scale. Next, we will obtain the FMF based on the AODt and AODf retrieved by the new method, validate the FMF retrieval results against the AERONET ground-based observations, and further obtain the temporal and spatial distribution of FMF over terrestrial China. Note that since the EOFs used during the AODt retrieval need to be constructed from the observations of a POLDER 3×3 window, the resolution of the final FMF retrieval result is also the size of a POLDER 3×3 window (approximately 18 km).

AERONET data

At present, the aerosol ground-based products of AERONET have been developed to version V3, and the version V2 data are no longer available for download. Among these products, two can be used to validate satellite FMF retrievals: one is the FMF product based on the spectral deconvolution algorithm (SDA) (O'Neill et al., 2001a; O'Neill et al., 2001b; O'Neill et al., 2003), and the other is based on the size distribution (SD) retrieval product (Dubovik and King, 2000). Generally, SDA products provide more FMF ground-based results. At present, most sites in China provide SDA products with level 2.0 data quality; therefore, SDA products are the first choice for FMF comparison in this study. However, it is worth pointing out that the Beijing site lacks the SDA product with level 2.0 data quality, so we used the SD product instead. Finally, this study selected the level 2.0 products of 16 AERONET sites in China during 2006-2013 (the POLDER on-orbit period) to validate the FMF retrieval results of this study. The specific spatial locations of the AERONET sites are shown in Figure 2, and the specific site information is given in Table 1. Note, however, that not all AERONET sites have long-term observational data; the sites with long-term observations are Beijing, Xianghe, Taihu, and Hong_Kong_PolyU. The FMF retrieved in this study is the FMF at 550 nm.
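As a hedged illustration of the EOF idea described above (the actual surface-estimation scheme of Zhang et al. (2017) additionally involves a retrieval lookup table and a cost function, neither of which is reproduced here), the following numpy sketch extracts leading orthogonal angular modes from a stack of multiangle reflectance samples by singular value decomposition; all names and dimensions below are placeholders of ours, not values from the paper.

import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_angles = 500, 14            # placeholder number of viewing directions
R = rng.random((n_pixels, n_angles))    # stand-in for multiangle reflectance samples

anom = R - R.mean(axis=0)               # remove the mean angular signature
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

k = 3                                   # keep the leading k EOFs
eofs = Vt[:k]                           # orthogonal angular patterns
scores = anom @ eofs.T                  # per-pixel amplitudes
R_smooth = R.mean(axis=0) + scores @ eofs   # truncated (smoothed) reconstruction

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by {k} EOFs: {explained:.2%}")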
Neither the SDA product nor the SD product directly provides the FMF at 550 nm; therefore, the AERONET FMF needs to be wavelength converted. The SDA products include AODt and AODf at 500 nm and the corresponding Angström exponents (AE), so the FMF of the SDA products can be converted as
$$\mathrm{FMF}_{550} = \frac{\tau_{f,500}\,(550/500)^{-\alpha_f}}{\tau_{500}\,(550/500)^{-\alpha}}, \quad (1)$$
where $\mathrm{FMF}_{550}$ is the FMF of the SDA product at 550 nm after conversion, $\tau_{f,500}$ is the AODf at 500 nm, $\tau_{500}$ is the AODt at 500 nm, $\alpha_f$ is the fine-mode AE, and $\alpha$ is the combined coarse- and fine-mode AE. The SD products provide AODt and AODf at 440 nm and 675 nm, respectively.

Validation method

In this study, the average of the ground-based observations within ±30 min of the satellite overpass was used for comparison with the satellite retrieval results. The satellite retrieval result used for comparison is the effective retrieval result. In the metric definitions, Cov() represents the covariance, D() represents the variance, the two arguments are the FMF retrieval values and the AERONET FMF values, and n is the number of validation points.

We counted the FMF validation results for different surface types; the specific information is given in Table 2. The r, MAE, and RMSE over all sites in this study are 0.770, 0.143, and 0.170, respectively, and Within EE is 60.96%, again indicating that the FMF satellite retrievals of this study are comparable with the ground-based observations. The validation results cover six surface types: urban, barren, grasslands, wetlands, croplands, and forests. Overall, since the validation data of the barren type mainly come from the QOMS_CAS site, the validation results for this surface type are poor. Although the r for the other five surface types varies somewhat, ranging from 0.508 (barren) to 0.831 (forests), in terms of the three indicators MAE, RMSE, and Within EE the differences among the five surface types are relatively small; Within EE in particular is concentrated at approximately 60%, similar to the site-by-site results. The errors of the FMF retrievals of this study are thus relatively stable across these five surface types.

We further counted the error distribution of the FMF retrieval results; the statistics are shown in Figure 4. The figure shows that the FMF error of this research is mainly distributed between -0.3 and 0.3; this part of the data accounts for approximately 86%, but the part below the AERONET ground-based FMF observation value accounts for approximately 75%, indicating that the retrievals of this study are lower than the ground-based observations. The specific reason can be traced to the FMF retrieval method of this study: the FMF is obtained from the ratio of AODf to AODt, and the retrieval accuracy of these two parameters directly determines the retrieval accuracy of the FMF. Therefore, we compared the retrieved AODs with those of the ground-based data in 2013; the statistical results are shown in Table 3.
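A minimal sketch of the Eq. (1) conversion as we read it from the garbled symbols (the function and variable names below are ours): both AODs are moved from 500 nm to 550 nm with the Angström power law, and the converted FMF is their ratio.

def aod_at(tau_ref, wl_ref, wl, alpha):
    """Angstrom power law: tau(wl) = tau(wl_ref) * (wl / wl_ref) ** (-alpha)."""
    return tau_ref * (wl / wl_ref) ** (-alpha)

def fmf_550(tau_f_500, tau_t_500, alpha_fine, alpha_total):
    """Eq. (1): convert fine-mode and total AOD to 550 nm, then take the ratio."""
    tau_f = aod_at(tau_f_500, 500.0, 550.0, alpha_fine)
    tau_t = aod_at(tau_t_500, 500.0, 550.0, alpha_total)
    return tau_f / tau_t

# Illustrative values only, not data from the study:
print(fmf_550(tau_f_500=0.30, tau_t_500=0.45, alpha_fine=1.8, alpha_total=1.2))

For the SD products, which provide AODt and AODf at 440 nm and 675 nm, an analogous power-law interpolation to 550 nm would presumably be used, though the exact form in the source is not preserved here.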
Table 3 shows that the mean errors between the retrieved AODf and AODt and the ground-based results are -0.039 and 0.043, respectively, indicating that the AODf retrieval has a negative offset and the AODt retrieval a positive offset; that is, the numerator is small and the denominator is large, eventually leading to a small FMF.

somewhat similar to this study, while the Yangtze River Delta is a low-value area.

Comparison with GRASP products

In our previous research, the accuracy of the FMF calculated from the GRASP product was validated (Wei et al., 2020). The comparison with 8 SONET (Sun-sky radiometer Observation NETwork) sites showed that the r between the GRASP FMF and ground-based observations is 0.77 and Within EE is 62.35%, similar to the results of this study in Section 3.1. However, by comparing the spatial distribution results of the two, we found some differences. We processed the latest V2.06 version of the GRASP aerosol products. Figure 8 shows the annual averaged FMF spatial distribution of GRASP in 2013 (also normalized). Compared with Figure 6, certain differences can be seen: the relatively high-value area of the GRASP results lies mainly in southern China. We subtracted the results of this study from the average GRASP FMF results to obtain the numerical difference between the two, shown in Figure 9. The figure shows that the difference between the two in the North China Plain and the southern Xinjiang region is relatively small. The largest differences are mainly concentrated in southern and northeastern China and the Qinghai-Tibet Plateau; the GRASP results in these areas are greater than ours, and for a small number of pixels the difference can exceed 0.3. However, these areas lacked publicly available sunphotometer observations in 2013 and before. PARASOL ended its exploration mission in October 2013, so subsequent time periods cannot be compared, and it is therefore difficult to use direct comparison with ground-based observations to establish the correctness of either spatial distribution.

In this study, ground PM2.5 and PM10 in situ results were compared with the ground-based FMF results, in the expectation that the ratio of PM2.5 to PM10 can be used to assess the correctness of the spatial distribution trends of this study and of the GRASP FMF results. We selected the 2015 Beijing Olympic Sports Center monitoring site (116.407°E, 40.003°N, straight-line distance of less than 4 km), which was closest to the AERONET Beijing site, and compared the hourly averaged ratio of PM2.5 to PM10 with the FMF results. The definitions of the two are quite different: the ratio of PM2.5 to PM10 is a parameter of particulate matter near the ground, while the FMF is a parameter of the atmospheric column of aerosols. Nevertheless, the comparison (Figure 10) shows a correlation between the ratio of PM2.5 to PM10 and the FMF, with r = 0.709. This may be because aerosols are mainly distributed near the ground, and PM2.5 and PM10 can represent the different particle modes, so that the actual difference between the two parameters is small. Since the ratio of PM2.5 to PM10 is comparable to the ground-based FMF results, given more in situ data it can indirectly verify the spatial distribution trends of this study and the GRASP results.
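The validation statistics quoted throughout this section can be reproduced with a few lines of numpy. This is our own implementation; in particular, the expected-error envelope for Within EE is assumed here to be a fixed ±0.2 band, which may differ from the authors' definition.

import numpy as np

def validate(fmf_retrieved, fmf_aeronet, ee=0.2):
    """Return r, MAE, RMSE and the Within-EE percentage for paired samples."""
    x = np.asarray(fmf_retrieved, dtype=float)
    y = np.asarray(fmf_aeronet, dtype=float)
    r = np.cov(x, y)[0, 1] / np.sqrt(x.var(ddof=1) * y.var(ddof=1))
    mae = np.abs(x - y).mean()
    rmse = np.sqrt(((x - y) ** 2).mean())
    within_ee = np.mean(np.abs(x - y) <= ee) * 100.0
    return r, mae, rmse, within_ee

# Toy numbers for illustration only:
r, mae, rmse, wee = validate([0.55, 0.70, 0.62], [0.60, 0.75, 0.58])
print(f"r={r:.3f} MAE={mae:.3f} RMSE={rmse:.3f} WithinEE={wee:.1f}%")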
Due to the lack of in situ particulate matter data for China in 2013, this study can only rely on the 2013 air quality data for key environmental protection cities in the China Statistical Yearbook (http://www.stats.gov.cn/tjsj/ndsj/); the annual average air quality values are used for a limited analysis. We extracted the FMF retrieval results and GRASP results for the corresponding 47 cities in the statistical yearbook and calculated the annual average FMF of each city for comparison with the ratio of the annual average PM2.5 to PM10 of each city. The spatial distribution of the administrative regions of these 47 cities is shown in Figure 11; these cities cover most of China's provinces and have a wider spatial distribution than the AERONET sites in Figure 2. The comparison in Figure 12 shows that, although the annual average FMF results of this study in each city are lower than the annual average ratio of PM2.5 to PM10, the trend of the FMF results of this study follows it better than the GRASP FMF does: the r between the FMF of this study and the ratio of PM2.5 to PM10 is 0.778, while for GRASP it is 0.472, which provides evidence for the correctness of the spatial distribution of the FMF results of this study. The low FMF results in this study are related to the calculation methods of the annual average PM2.5 and PM10 values in each city. Generally, most of the in situ monitoring sites for particulate matter in each city are located in urban areas, and few sites are in rural areas (for example, 9 of the 12 state-controlled sites in Beijing are in urban areas). When calculating the average FMF of a city, one pixel may contain the results of multiple monitoring stations, which makes accurate spatial matching difficult. To facilitate data processing, all pixels within the urban administrative boundary are used directly to calculate the average value, and the FMF of the many rural pixels is generally lower than that of urban ones, which ultimately leads to a lower average FMF.

Based on the validation and comparison results in Sections 3.1 to 3.3, this research has obtained FMF satellite retrieval results with good accuracy in China, which demonstrates the reliability and stability of the retrieval method. Compared with the MODIS FMF products, the r, MAE, RMSE, and Within EE of the results of this study are all better than those of MODIS. Compared with the GRASP FMF, the results of this study are closer to the ratio of PM2.5 to PM10 in terms of the spatial distribution over the entire region of China. These results illustrate the effectiveness and advantages of the FMF retrieval method used in this study. Compared with our original FMF retrieval method, which could only be used at the urban scale, this research has achieved FMF retrieval over a large area. Therefore, we will carry out the practical application of FMF satellite remote sensing retrieval based on the new method.

Spring is from March to May, summer from June to August, autumn from September to November, and winter from December to February.
As seen from the figure, for the area east of the "Hu Line" the overall FMF reaches its highest values in winter, mainly concentrated in the range 0.7-0.8. The FMF of southern China still has relatively high values in spring, around 0.6 overall, while the North China Plain is lower, generally between 0.4 and 0.5. In summer the North China Plain is similar to spring, but there is a significant decline in southern China, where values are generally between 0.3 and 0.5; in autumn the overall values begin to rise again, to approximately 0.6. The Sichuan-Chongqing economic zone maintains relatively high values in all four seasons, and in winter the values in some areas approach 0.8; the three northeastern provinces also have high values in winter, between 0.4 and 0.7 overall. For the area west of the "Hu Line", northern Xinjiang is higher in autumn and winter and can reach 0.7 in some areas in winter, and southern Xinjiang also shows a significant increase in winter, with some high values close to 0.6; the Qinghai-Tibet Plateau maintains low values in all seasons, mainly concentrated between 0.1 and 0.3.

Summary

In this study, the multiangle polarization data of PARASOL were used to perform FMF retrieval, and the retrieval results were compared with AERONET ground-based observations, MODIS results, and GRASP results. Based on the FMF retrieval method, retrievals for air pollution cases in China were carried out, and the temporal and spatial distribution of FMF in China from 2006 to 2013 was obtained. The conclusions of this research are as follows:

(1) There is good agreement between the FMF results obtained in this study and the AERONET ground-based observations. The overall r, MAE, RMSE, and Within EE between the two are 0.770, 0.143, 0.170, and 60.96%, respectively.

(2) The FMF results obtained in this study performed better than the MODIS FMF products. The r, MAE, RMSE, and Within EE between the FMF results and the ground-based observations are 0.812 versus 0.302, 0.072 versus 0.512, 0.102 versus 0.574, and 79.72% versus 12.59%, respectively.

(3) Compared with the GRASP FMF, the FMF results obtained in this study are closer to the ratio of PM2.5 to PM10 in terms of the spatial distribution trend. Compared with the annual average ratio of PM2.5 to PM10 in 47 Chinese cities in 2013, the r of this study is 0.778, while that of GRASP is 0.472.

The FMF retrieval method of this study is significant for the development of aerosol polarization satellite remote sensing algorithms, and the FMF results obtained over China also have good practical value for applied research in the field of atmospheric environments. China has launched the Gaofen-5 (GF-5) satellite equipped with a new multiangle polarization sensor. With the release of GF-5 satellite data in the future, the results of this study can provide algorithmic support for the application of its multiangle polarization sensor in atmospheric environmental monitoring and are expected to produce subsequent FMF datasets. However, there are some shortcomings in this research. For example, the retrieval of FMF still depends on the accuracy of the two parameters AODf and AODt.
Although higher-precision results for AODf and AODt were obtained in our previous research, the FMF error is tied to the errors of these two retrieval parameters, and error propagation ultimately amplifies the retrieval error of the FMF. Compared with the individual retrieval of AODf and AODt, the retrieval of FMF therefore remains difficult, and it is still necessary to further improve the retrieval accuracy of AODf and AODt in the future. In addition, due to the limitations of the validation data, we are temporarily unable to further establish the correctness of the spatial distribution trends of the FMF of this study and of GRASP, and only the ratio of PM2.5 to PM10 was used for indirect comparison. In the future, we can try to perform FMF retrieval in other regions of the world with many ground-based observations to further compare the two sets of results.
2020-10-28T19:10:52.216Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "90cb8c1551a00da477f301f99a26f540b949bf4c", "oa_license": "CCBY", "oa_url": "https://amt.copernicus.org/articles/14/1655/2021/amt-14-1655-2021.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "75a02e91cc0d403eebbaced964fec1a6e3618fcb", "s2fieldsofstudy": [ "Environmental Science", "Physics" ], "extfieldsofstudy": [] }
38448104
pes2o/s2orc
v3-fos-license
Superconductivity-Induced Transfer of In-Plane Spectral Weight in Bi2Sr2CaCu2O8: Resolving a Controversy

We present a detailed analysis of the superconductivity-induced redistribution of optical spectral weight in Bi2Sr2CaCu2O8 near optimal doping. It confirms the previous conclusion by Molegraaf et al. (Science 295, 2239 (2002)) that the integrated low-frequency spectral weight shows an extra increase below Tc. Since the region where the change of the integrated spectral weight is not compensated extends well above 2.5 eV, this increase is caused by a transfer of spectral weight from the interband to the intraband region and only partially by the narrowing of the intraband peak. We show that the opposite assertion by Boris et al. (Science 304, 708 (2004)) regarding this compound is unlikely to be the consequence of any obvious discrepancies between the actual experimental data.

The following reasons can potentially cause the opposite conclusions by different teams: (i) nonidentical compounds and doping levels were used, (ii) essentially different experimental results were obtained, or (iii) the analysis and interpretation of similar experimental data seriously deviate, most likely in the way the SC-induced change of the low-frequency spectral weight is determined:
$$W(\Omega_c, T) = \rho_s(T) + \int_{0^+}^{\Omega_c} \sigma_1(\omega, T)\, d\omega,$$
where $\rho_s(T)$ is the spectral weight of the condensate, $\sigma_1(\omega, T)$ is the real part of the optical conductivity, and the cutoff energy $\Omega_c$ represents the scale of scattering of the intraband (Drude) excitations.

Because of reason (i), we restrict the present discussion to the case of Bi2212 near optimal doping, which was studied by all the mentioned teams [1,2,3]. We should keep in mind that, even with this restriction, the Tc's and stoichiometries of the samples are still not identical. Regarding point (ii), we found that the experimental graphs presented in Ref. 3

The limited space of Ref. 1 did not allow explaining in depth the full analysis done. In this paper we present a more detailed and rigorous analysis of the experimental data published in Ref. 1 and arrive at additional arguments supporting the original conclusions. In order to numerically decouple the superconductivity-induced changes of the optical properties from the temperature dependences already present in the normal state (such as a gradual narrowing of the Drude peak), we apply the slope-difference analysis developed in Ref. 11, which is closely related to the well-known temperature-modulation technique [31,32]. It appears that any realistic model which satisfactorily fits the total set of experimental data (reflectivity below 0.75 eV and ellipsometrically obtained real and imaginary parts of the dielectric function at higher energies) gives an increase of $W(\Omega_c, T)$ below Tc.
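As a minimal numerical illustration (ours) of the definition of $W(\Omega_c, T)$ above, with an arbitrary Drude-like toy conductivity standing in for the measured data:

import numpy as np

def spectral_weight(omega, sigma1, omega_c, rho_s=0.0):
    """W(Omega_c) = rho_s + integral of sigma_1 from 0+ to Omega_c (trapezoid rule)."""
    m = omega <= omega_c
    return rho_s + np.trapz(sigma1[m], omega[m])

omega = np.linspace(1e-3, 3.0, 3000)      # toy frequency grid (arbitrary units)
gamma = 0.14                              # toy Drude width
sigma1 = gamma / (omega**2 + gamma**2)    # toy Drude peak (arbitrary units)
print(spectral_weight(omega, sigma1, omega_c=1.0))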
Moreover, all attempts to do the same fitting with an artificial constraint that the SC-induced change of $W(\Omega_c, T)$ is negative (as is claimed in Ref. 3), or even zero, failed to fit the data, despite using flexible multi-oscillator models [33]. Importantly, the ellipsometrically measured real and imaginary parts of the dielectric function, $\epsilon_1(\omega)$ and $\epsilon_2(\omega)$, at high frequencies provide the most stringent limits on the possible change of the low-frequency spectral weight $W(\Omega_c, T)$, due to the KK relations. Finally, we discuss whether the SC-induced increase of $W(\Omega_c, T)$ is caused by the extra narrowing of the Drude peak below Tc or by the removal of spectral weight from the interband region. Indeed, we find an extra narrowing of the Drude peak, in agreement with Ref. 3. However, this narrowing is too small to explain the increase of the low-frequency spectral weight, which suggests that there is a sizeable spectral weight transfer from the range of the interband transitions.

II. EXPERIMENTAL CONSISTENCY OF THE REPORTED RESULTS

First of all, we need to check whether there are any qualitative discrepancies between the results published in Ref. 3 and our data which could immediately lead to the opposite conclusions. In Fig. 1 we reproduce the difference curves $\Delta\epsilon_1(\omega) = \epsilon_1(\omega, 100\,\mathrm{K}) - \epsilon_1(\omega, 20\,\mathrm{K})$ and $\Delta\sigma_1(\omega) = \sigma_1(\omega, 100\,\mathrm{K}) - \sigma_1(\omega, 20\,\mathrm{K})$ for Bi2212 close to optimal doping from the ellipsometric measurements of Ref. 3 (Fig. S5), together with the same curves obtained by the KK transformation of the reflectivity spectra of Ref. 1, re-plotted in the same fashion. Although for the purpose of comparing spectra in this range we have to use the KK-transformed quantities, the main analysis, presented in Sections IV and V of this paper, is based on the directly measured optical quantities. We also point out right away that these difference curves do not eliminate the normal-state temperature trends unrelated to the SC phase transition; in Sections IV and V we present what we believe to be the correct analysis, which takes the normal-state temperature dependence of the optical properties into account.

One can state that there is a qualitative agreement between the $\Delta\epsilon_1(\omega)$ and $\Delta\sigma_1(\omega)$ curves of the two groups, especially considering that these difference curves are on a 10-fold amplified scale compared to the spectra themselves (Fig. 1). Clearly, no model-independent argument, without specific calculations, can give opposite signs of the change of the spectral weight when applied to these two data sets. Therefore, the different conclusions of the different teams are unlikely to be the consequence of any obvious discrepancies between the actual experimental data. An important remark is that the scale and the data scatter in Fig. 1 do not allow one to distinguish the change of $\epsilon_1(\omega)$ from zero for frequencies between 0.2 and 0.5 eV. As a result, the authors of Ref. 3 base their argument on $\Delta\epsilon_1(\omega)$ being zero in this range. In contrast, we show below that it is small but not zero, including in the range above 0.5 eV. In fact, this observation will be extremely important for determining the sign of the spectral weight change.

III. ON THE ROLE OF THE KRAMERS-KRONIG RELATIONS

Since the optical conductivity $\sigma_1(\omega)$ is not directly measured down to zero frequency, the calculation of $W(\Omega_c, T)$ requires, in principle, an extrapolation of the data using certain data modelling.
It turns out, however, that the use of the Kramers-Kronig relation between $\epsilon_1(\omega)$ and $\sigma_1(\omega)$ helps to avoid unnecessary model assumptions and drastically decreases the resulting error bars. The authors of Ref. 3 propose a model-independent argument in order to show that there is an overall decrease of the intraband spectral weight below Tc. The argument is based on two of their experimental observations [34]: (i) $\Delta\sigma_1(\omega) > 0$ between $\omega_1 = 0.15$ eV and $\omega_2 = 1.5$ eV; (ii) $\Delta\epsilon_1(\omega) = 0$ in the same energy region. We reproduce part of their statement here: "The SW loss between 0.15 eV and 1.5 eV then needs to be balanced by a corresponding SW gain below 0.15 eV and above 1.5 eV. In other words, there is necessarily a corresponding SW gain in the interband energy range above 1.5 eV caused by a decrease of the total intraband SW."

We believe that this statement is incorrect. The justification given by the authors of Ref. 3 considers a weaker set of conditions: $\Delta\sigma_1(\omega) > 0$ for $\omega_1 < \omega < \omega_2$, and $\Delta\epsilon_1(\omega_0) = 0$ at one particular frequency $\omega_0$ between $\omega_1$ and $\omega_2$. We rewrite Equation (S3) of Ref. 3 in the form $C - A = B$. Now we note that, even though $\Delta\sigma_1$ in the integral $B$ is positive, the value of $B$ can have either sign. It means that there is formally no limitation on the sign of $C - A$, nor on the signs of $A$ and $C$ separately. Appealing to the general f-sum rule, $\int_0^\infty \Delta\sigma_1(x)\,dx = 0$, does not remove the uncertainty, because the change of the low-frequency spectral weight can, in principle, be compensated at arbitrarily high frequencies, which give an arbitrarily small contribution to the KK integral.

The stronger condition that $\Delta\epsilon_1(\omega)$ is exactly zero for $\omega_1 < \omega < \omega_2$ is physically pathological, and it appears to be more difficult to address mathematically. It was not explained in Ref. 3 how this condition leads to the cited statement. Furthermore, we observe that $\Delta\epsilon_1(\omega)$ is actually not zero in this range. We thus disagree with the model-independent argument in favor of the SC-induced decrease of the charge-carrier spectral weight claimed in Ref. 3. On the other hand, we absolutely agree with Ref. 3 that the KK relations between $\epsilon_1(\omega)$ and $\sigma_1(\omega)$ must remain an important ingredient of the data analysis [35]. However, in our opinion, trustworthy conclusions can only be drawn from a thorough numerical treatment of the full set of available optical data. Importantly, due to the KK relations, the behavior of $\epsilon_1(\omega, T)$ and $\epsilon_2(\omega, T)$ at high frequencies appears to be the most sensitive indicator of the spectral weight transfer. We present such an analysis in the following sections.

In Figs. 2 and 3 we display the temperature dependence of the reflectivity and of $\epsilon_{1,2}(\omega, T)$, respectively, for selected photon energies. The complex dielectric constant changes as a function of temperature in the entire range from 0 to 300 K. For frequencies larger than 0.25 eV the variation as a function of temperature is essentially proportional to $T^2$. The same temperature dependence has been observed in other cuprate superconductors, for example La2-xSrxCuO4 [36], and has been explained quantitatively using the Hubbard model [37]. For frequencies below 0.1 eV the optical conductivity has a very large intraband contribution, with a linearly increasing dissipation, which causes a non-monotonous temperature dependence of $\epsilon_1$ and $\epsilon_2$. In particular, it was found [38] that $1/(T\sigma_1(\omega))$ equals a constant plus a term proportional to $T^{-2}$.
The gradual decrease of the high-frequency conductivity upon cooling is also expected due to the reduction of the electron-phonon scattering [39]. The onset of superconductivity is marked by clear kinks (slope changes) at Tc of the measured optical quantities (Figs. 2, 3). An exciting feature of the high-Tc cuprates is that the kinks are seen not only in the region of the SC gap but also at much higher photon energies (at least up to 2.5 eV in Ref. 1). It means that the formation of the SC long-range order causes a redistribution of the spectral weight across a very large spectral range, a fact upon which several groups agree [1,2,3,32,40].

An essential aspect of the data analysis is the way the superconductivity-induced changes of the optical constants are separated from the temperature-dependent trends observed above Tc. A similar problem is faced in specific-heat experiments [41,42], where the superconductivity-related structures are superimposed on a strongly temperature-dependent background. Let us introduce a slope-difference operator $\Delta_s$, which measures the slope change (kink) at Tc [11]:
$$\Delta_s f \equiv \left.\frac{\partial f}{\partial T}\right|_{T \to T_c^+} - \left.\frac{\partial f}{\partial T}\right|_{T \to T_c^-}, \quad (4)$$
where $f$ stands for any optical quantity. It properly quantifies the effect of the SC transition, since the normal-state trends cancel out [43]. Since $\Delta_s$ is linear, the slope-difference KK relation is also valid:
$$\Delta_s\epsilon_1(\omega) = 8\,\mathcal{P}\!\int_0^\infty \frac{\Delta_s\sigma_1(x)}{x^2 - \omega^2}\,dx. \quad (5)$$

In order to detect and measure a kink as a function of temperature, and to establish whether or not the kink occurs at Tc, the spectra must be measured with a fine temperature resolution. A resolution of 2 K or better was used in the entire range from 10 to 300 K, which enables us to perform a reliable slope-difference analysis. The data points in Figs. 4a and 4b show $\Delta_sR(\omega)$, $\Delta_s\epsilon_1(\omega)$, and $\Delta_s\epsilon_2(\omega)$, obtained from the directly measured temperature-dependent curves shown in Figs. 2 and 3. The details of the corresponding numerical procedure and the determination of the error bars are described in Appendix A1.

One can see that $\Delta_s\epsilon_1(\omega)$ is negative and its absolute value strongly decreases as a function of frequency. In the same region, $\Delta_s\epsilon_2(\omega)$ is almost zero (within our error bars). Intuitively, this already suggests that the low-frequency integrated spectral weight $W(\Omega_c, T)$ is likely to increase in the SC state. Indeed, in the simplest scenario, when the extra spectral weight is added at zero frequency, one has $\Delta_s\epsilon_1(\omega) = -8\Delta_sW(0^+)\,\omega^{-2}$. This formula becomes approximate if changes also take place at finite frequencies. However, the approximation is good if the most significant changes of $\Delta_s\sigma_1$ occur only below $\omega_1 \ll \omega$ and above $\omega_2 \gg \omega$. This follows from the exact expansion [44] of Eq. (5), valid for $\omega_1 < \omega < \omega_2$. For $\omega_1 = 0.8$ eV and $\omega_2 = 2.5$ eV we can calculate $\Delta_s\bar\epsilon_1(\omega)$ directly from the measured $\Delta_s\sigma_1(\omega)$. It turns out that $|\Delta_s\bar\epsilon_1(\omega)| < 10^{-4}\ \mathrm{K}^{-1}$, while the average value is indistinguishable from zero. In this situation we can neglect $\Delta_s\bar\epsilon_1(\omega)$ compared to the contributions from the low and high frequencies. Thus, to leading order,
$$\Delta_s\epsilon_1(\omega) \approx -\frac{8\,\Delta_sW(\Omega_c)}{\omega^2} + \Delta_s\epsilon_\infty, \quad (7)$$
where $\epsilon_\infty = 8\int_{\omega_2}^\infty \sigma_1(\omega)\,\omega^{-2}\,d\omega$ is the integrated oscillator strength of all optical excitations above $\omega_2$. The best fit of $\Delta_s\epsilon_1(\omega)$ in the spectral region (0.8-2.5 eV) with Eq. (7) is shown by the green dashed line in Fig. 4b. It gives $\Delta_sW(\Omega_c) \approx +360\ \Omega^{-1}\mathrm{cm}^{-2}\,\mathrm{K}^{-1}$ and $\Delta_s\epsilon_\infty \approx -10^{-4}\ \mathrm{K}^{-1}$. Although these absolute values are very approximate, the signs of both parameters indicate that the spectral weight is taken from the region above 2.5 eV and added to the region below 0.8 eV.
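The leading-order fit with Eq. (7) amounts to linear least squares in the two parameters $\Delta_sW(\Omega_c)$ and $\Delta_s\epsilon_\infty$. A sketch of ours with synthetic numbers (the magnitudes below are illustrative only, not the experimental values):

import numpy as np

def fit_eq7(omega, d_eps1):
    """Fit d_eps1 = -8*dW/omega^2 + d_eps_inf; return (dW, d_eps_inf)."""
    X = np.column_stack([-8.0 / omega**2, np.ones_like(omega)])
    coef, *_ = np.linalg.lstsq(X, d_eps1, rcond=None)
    return coef[0], coef[1]

rng = np.random.default_rng(0)
omega = np.linspace(0.8, 2.5, 120)          # eV
true_dW, true_eps_inf = 3.0e-7, -1.0e-4     # synthetic parameters
d_eps1 = (-8.0 * true_dW / omega**2 + true_eps_inf
          + 1e-6 * rng.standard_normal(omega.size))
print(fit_eq7(omega, d_eps1))               # recovers the inputs within noise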
V. SLOPE-DIFFERENCE SPECTRAL ANALYSIS

Now we present the full data analysis without using the approximation (7). The technique we present here is a modification of temperature-modulation spectroscopy [31,32,45]. Because of the KK relation (5), we can model the slope-difference dielectric function $\Delta_s\epsilon(\omega)$ with the following dispersion formula:
$$\Delta_s\epsilon(\omega) = \Delta_s\epsilon_\infty + \sum_{i=1}^{N} \frac{A_i}{\omega_i^2 - \omega^2 - i\gamma_i\omega}. \quad (8)$$
$\Delta_s\epsilon_\infty$ is responsible for the high-frequency electronic excitations, while each Lorentzian term represents either an addition or a removal of spectral weight, depending on the sign of $A_i$. We emphasize that this model is just a parametrization. The number of oscillators $N$ has to be chosen so as to obtain a good fit of the experimental data; the physical meaning of some oscillators, taken alone, may not be well defined. However, the essential feature of the functional form (8) is that it preserves [46] the KK relation (5). We include the infrared $\Delta_sR(\omega)$ in the fitting procedure by making use of the following relation [11,45]:
$$\Delta_sR(\omega) = \frac{\partial R}{\partial \epsilon_1}(\omega, T_c)\,\Delta_s\epsilon_1(\omega) + \frac{\partial R}{\partial \epsilon_2}(\omega, T_c)\,\Delta_s\epsilon_2(\omega).$$
The method which we use to determine the 'sensitivity functions' $(\partial R/\partial\epsilon_{1,2})(\omega, T_c)$ from the experimental data is described in Appendix A2.

The green solid line in Figs. 4a and 4b denotes the best fit [47] of $\Delta_s\epsilon_1(\omega)$ and $\Delta_s\epsilon_2(\omega)$ and, simultaneously, of $\Delta_sR(\omega)$. One can see that all essential spectral details are well reproduced. The corresponding parameter values are collected in Table I. The first term in (8) combines the condensate and the narrow ($\gamma < 100$ cm$^{-1}$) quasi-particle peak, while the remaining oscillators mimic the redistribution of spectral weight at finite frequencies. The slope-difference integrated spectral weight for the model (8) is presented as a green curve in Fig. 4c. It gives $\Delta_sW(\Omega_c) \approx +770\ \Omega^{-1}\mathrm{cm}^{-2}\,\mathrm{K}^{-1}$, which is about two times larger than the rough estimate in Section IV.

In order to test how robust this result is, we performed two more fits of the same data with an extra imposed constraint, namely that $\Delta_sW(\Omega_c)$ be zero or negative, respectively. The resulting 'best-fitting' curves are shown in Figs. 4a, 4b, and 4c in blue and red, respectively. One can clearly see that the models with $\Delta_sW(\Omega_c) \le 0$ fail to reproduce the experimental spectra, most spectacularly the high-frequency spectrum of $\Delta_s\epsilon_1(\omega)$. This is not surprising, since the imposed constraint changes the sign of the leading term $\sim\omega^{-2}$ in formula (7). To exclude the possibility that the failure to obtain a good fit with the mentioned constraint is a spurious result caused by the limited number of Lorentz oscillators used for fitting the data, we have also used the KK-constrained variational dielectric model [33], which is, in simple terms, a collection of a large number of adjustable oscillators uniformly distributed over the whole spectral range, including the region above 2.5 eV. We are thus confident that the dispersion model is able to reproduce all significant spectral features of the true function $\Delta_s\epsilon_1(\omega)$. However, these efforts did not improve the quality of the fit. From this analysis we conclude that our experimental data unequivocally reveal the superconductivity-induced increase of $W(\Omega_c, T)$ in optimally doped Bi2212, confirming the statements given in Refs. 1 and 2.

An alternative approach is to fit the total set of spectra at every temperature [1,48] using a certain KK-consistent model with temperature-dependent parameters. There is a variety of possibilities to parameterize the dielectric function.
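The parametrization (8), in the Lorentzian form in which we have reconstructed it above, is easy to evaluate numerically; the slope-difference conductivity then follows from $\Delta_s\sigma_1(\omega) = \omega\,\mathrm{Im}\,\Delta_s\epsilon(\omega)/4\pi$, and the running spectral weight by integration. A sketch of ours with arbitrary oscillator parameters:

import numpy as np

def d_eps(omega, d_eps_inf, oscillators):
    """Sum of Lorentzians: oscillators is a list of (A_i, omega_i, gamma_i)."""
    val = np.full(omega.shape, d_eps_inf, dtype=complex)
    for A, w0, g in oscillators:
        val += A / (w0**2 - omega**2 - 1j * g * omega)
    return val

omega = np.linspace(1e-3, 3.0, 3000)
oscillators = [(+2e-2, 1e-3, 1e-2),   # near-zero-frequency term: added weight (A > 0)
               (-1e-2, 0.9, 0.6)]     # removal of weight at finite frequency (A < 0)
eps = d_eps(omega, d_eps_inf=-1e-4, oscillators=oscillators)
d_sigma1 = omega * eps.imag / (4.0 * np.pi)
# Running integrated spectral weight Delta_s W(omega) by the trapezoid rule:
d_W = np.concatenate(([0.0],
                      np.cumsum(0.5 * (d_sigma1[1:] + d_sigma1[:-1]) * np.diff(omega))))
print(d_W[-1])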
However, it was shown by one of us [48] that every model which satisfactorily reproduces not only the spectral features but also the temperature dependence of the directly measured $R(\omega)$ and $\epsilon(\omega)$ gives a net increase of the low-frequency spectral weight below Tc.

VI. DISCUSSION

A. Absolute change of the spectral weight

Having found that the integrated spectral weight $W(\Omega_c, T)$ exhibits an extra increase below Tc, we want to evaluate its absolute superconductivity-induced change, continued into the low-temperature region [34]:
$$\Delta W(\Omega_c, T) = W(\Omega_c, T) - W_n(\Omega_c, T),$$
where $W_n(\Omega_c, T)$ is the 'correct' extrapolation of the normal-state curve below Tc. In principle, $W_n(\Omega_c, T)$ can be measured when the superconducting order parameter is suppressed by an extremely high magnetic field (of the order of a hundred teslas). Unfortunately, such large fields are currently prohibitive for accurate optical experiments. However, an order-of-magnitude estimate, and simultaneously an upper limit, of $\Delta W(\Omega_c, T)$ at zero temperature can be obtained as $\Delta W(\Omega_c, 0) \approx T_c\,\Delta_sW(\Omega_c)$. This gives $\Delta W(\Omega_c, 0\,\mathrm{K}) \sim 7\cdot10^4\ \Omega^{-1}\mathrm{cm}^{-2}$, which is about 1% of the total low-frequency spectral weight $W(\Omega_c, 0\,\mathrm{K}) \approx 7\cdot10^6\ \Omega^{-1}\mathrm{cm}^{-2}$. Since the temperature dependence of $\Delta W(\Omega_c, T)$ is expected to saturate somewhat below Tc, a more realistic estimate is smaller by about a factor of 2 to 5, i.e. between 0.2 and 0.5% of $W(\Omega_c, 0\,\mathrm{K})$. These rough margins are suggested by the temperature dependence of the ab-plane [49] and c-axis [8] penetration depths. Although this is a relatively small fraction, it is nevertheless significant [1] in the context of theories where the superconducting transition is driven by the lowering of the kinetic energy [14,15].

B. The origin of the spectral weight transfer

The low-frequency integrated spectral weight $W(\Omega_c, T)$ should not be confused with the intraband spectral weight
$$W_{\mathrm{intra}}(T) = \rho_s(T) + \int_{0^+}^{\infty} \sigma_{1,\mathrm{intra}}(\omega, T)\, d\omega,$$
where $\sigma_{1,\mathrm{intra}}(\omega, T)$ is the conductivity due to intraband transitions. A legitimate question is whether the observed increase of $W(\Omega_c, T)$ below Tc is due to a transfer of spectral weight from the interband transitions to the intraband (Drude) conductivity, i.e., to an increase of $W_{\mathrm{intra}}(T)$, or is simply caused by an extra narrowing of the Drude peak in the SC state [3,39], without any change of $W_{\mathrm{intra}}(T)$. The distinction between the intraband and interband spectral weights can be made in theoretical models, but in experimental spectra their separation is not unique, because of the unavoidable overlap between these spectral ranges. Nevertheless, the temperature and spectral behavior of the optical constants suggests a likely scenario.

Fig. 5a shows the slope-difference conductivity $\Delta_s\sigma_1(\omega)$, obtained from the most detailed multi-oscillator fit [33]. If the superconductivity-induced narrowing of a Drude peak is given by a simple change of $\gamma$, we get
$$\Delta_s\sigma_1(\omega) = (4\pi)^{-1}\omega_p^2\,\frac{\omega^2 - \gamma^2}{(\omega^2 + \gamma^2)^2}\,\Delta_s\gamma.$$
This shape (with $\gamma = 0.14$ eV) matches the experimental curve above 0.15 eV quite well (the blue dashed line), even though the real shape of the conductivity peak is much more complicated. Below 0.12 eV the drop of the conductivity is caused by the opening of the SC gap. Thus, the peak of $\Delta_s\sigma_1(\omega)$ at 0.13-0.14 eV, where the SC-induced change of $\sigma_1$ is even positive, is probably a cooperative effect of the narrowing of the Drude peak and the suppression of the conductivity in the gap region. This peak corresponds to a dip in the spectrum $\Delta_sR(\omega)$ (Fig. 4a). Fig. 5b depicts $\Delta_sW(\omega)$, obtained by the integration of $\Delta_s\sigma_1(\omega)$ of Fig. 5a.
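The Drude-narrowing formula above is just the $\gamma$-derivative of a Drude peak, which can be confirmed symbolically (our check, assuming the standard Drude form):

import sympy as sp

w, g, wp = sp.symbols('omega gamma omega_p', positive=True)
sigma1 = wp**2 / (4 * sp.pi) * g / (w**2 + g**2)                   # Drude conductivity
claimed = wp**2 / (4 * sp.pi) * (w**2 - g**2) / (w**2 + g**2)**2   # formula in the text
print(sp.simplify(sp.diff(sigma1, g) - claimed))                    # -> 0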
There is almost no superconductivity-induced change of $\sigma_1(\omega)$ above 0.8 eV, up to at least 2.5 eV. Correspondingly, $\Delta_sW(\omega)$ is a positive constant in this region (Fig. 4c), showing no trend to vanish right above 2.5 eV. Therefore, the scenario in which the narrowing of the Drude peak is fully responsible for the observed SC-induced increase of $\Delta_sW(\omega)$ requires the assumption that a large portion of the Drude peak extends to energies well above 2.5 eV. Given the fact that the bandwidth is about 2 eV, such a scattering rate seems to be unrealistically large. Accordingly, the $\Delta_sW(\omega)$ which corresponds to the discussed Drude-narrowing model (dotted line in Fig. 5b) accounts for only about one-third of the actual value at the cutoff energy $\Omega_c$. This suggests that, at least in optimally doped Bi2212, a more plausible explanation is a superconductivity-induced spectral weight transfer from the interband transitions to the intraband peak. While the redistribution of spectral weight below 2.5 eV is experimentally well determined, the observation of the details of the interband spectral weight removal, which is most likely spread over a very broad range of energies, is beyond our experimental accuracy at the moment.

C. The difference between Bi2Sr2CaCu2O8 and YBa2Cu3O6.9

The results of this article, based on our full data analysis, refer to Bi2212 near optimal doping. It is interesting to analyze the picture of spectral weight transfer in other compounds. In Ref. 3 the results of ellipsometric measurements on a detwinned single crystal of optimally doped Y123 were reported. The authors conclude that the intraband spectral weight decreases in the SC state, following exactly the same reasoning as in the case of Bi2212. Although we think that the model-independent arguments of Ref. 3 are not justified (see Section III), and actually fail to give the right answer in the case of Bi2212, we do not rule out the possibility that the spectral weight transfer in Y123 might be quite different from that in Bi2212. An experimental indication of such a possibility is that the temperature-dependent curves of $\epsilon_1(T)$ show an upward kink at Tc (see Figs. 1c-d of Ref. 3), while in our data on Bi2212 the kink is downward [50]. Our data on twinned Y123 films at optimal doping (Tc = 91 K) show a similar effect [48]. The approximate formula (7) suggests that the sign of $\Delta_sW(\Omega_c)$ might be different in the two compounds. However, a more careful analysis is needed for definite conclusions, especially because of the strong temperature-dependent interband transitions in Y123 around 1-2 eV [3,48]. A striking feature of Y123 is that along the direction of the chains (b-axis) the upward kink of $\epsilon_1(\omega)$ at Tc ($\Delta_s\epsilon_1(\omega)$) is much larger than perpendicular to the chains (compare Figs. 1D and S3B of Ref. 3). It suggests that the charge dynamics in the chains, or even the charge redistribution between the chains and the planes [51], has a strong influence on the SC-induced spectral weight transfer.

VII. SUMMARY

We have presented a detailed analysis of the optical data published earlier in Ref. 1. By taking advantage of a high temperature resolution, we determine the kinks (slope changes) at Tc of the directly measured optical quantities: the reflectivity $R(\omega)$ below 0.75 eV and the ellipsometrically measured $\epsilon_1(\omega)$ and $\epsilon_2(\omega)$ at higher energies. The Kramers-Kronig constrained modelling of the slope-difference spectra clearly shows an extra gain of the low-frequency integrated spectral weight as a result of the superconducting transition.
This gain is not compensated at 2.5 eV and somewhat higher energies, which suggests that it is mostly caused by a spectral weight transfer from the interband towards the intraband transitions and only partially by the narrowing of the Drude peak. We found no serious discrepancies between the experimental data of Refs. 3 and 1, insofar as they relate to the same compound (Bi2212). In our opinion, the opposite conclusions drawn by the authors of Ref. 3 are, at least in part, caused by an incorrect data analysis. As a concluding remark, the picture of the SC-induced spectral weight transfer in the cuprates is far from complete. A recent study [13] suggests that in the overdoped regime the spectral weight transfer is conventional (BCS-like), while it is unconventional (opposite to BCS-like) on the optimally doped and underdoped side. It is also not clear how individual features of certain compounds (for example, the chains in YBa2Cu3O6+x, structural distortions, etc.) affect this subtle effect. Further experiments should clarify this issue.

According to the definition (4), the determination of $\Delta_sR(\omega)$ involves the calculation of $\partial R(T)/\partial T$ above and below Tc. The numerical derivatives calculated straightforwardly from the data points shown in Fig. 2 are rather noisy, with the exception of some frequencies where the signal is especially good. In order to limit the statistical noise, we can take advantage of the large number of temperatures measured and use the following procedure. For each frequency, the curve $R(T)$ is fitted to a second-order polynomial $P_{\mathrm{low}}(T)$ between $T_{\mathrm{low}}$ and $T_c$, and to another polynomial $P_{\mathrm{high}}(T)$ between $T_c$ and $T_{\mathrm{high}}$, where $T_{\mathrm{low}} < T_c$ and $T_{\mathrm{high}} > T_c$ are selected temperatures. For optimally doped Bi2212 we used $T_{\mathrm{low}} = 30$ K, $T_c = 88$ K, and $T_{\mathrm{high}} = 170$ K. The polynomial curves below and above Tc are shown by the blue and red curves, respectively, in Fig. 2. The superconductivity-induced slope change is calculated as
$$\Delta_sR = \frac{dP_{\mathrm{high}}}{dT}(T_c) - \frac{dP_{\mathrm{low}}}{dT}(T_c).$$
We estimate the error bars of $\Delta_sR$ by varying $T_{\mathrm{low}}$ and $T_{\mathrm{high}}$ within reasonable limits (20-40 K and 150-200 K, respectively). These error bars mostly reflect the systematic uncertainties of the numerical procedure. Exactly the same method was applied to determine $\Delta_s\epsilon_1(\omega)$ and $\Delta_s\epsilon_2(\omega)$ from the temperature-dependent curves shown in Fig. 3.
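The appendix procedure translates directly into numpy. The sketch below (ours) omits the error-bar estimation from varying $T_{\mathrm{low}}$ and $T_{\mathrm{high}}$; the sign convention (slope above Tc minus slope below) follows the definition (4) as reconstructed above.

import numpy as np

def slope_difference(T, R, Tc=88.0, T_low=30.0, T_high=170.0):
    """Fit quadratics in T below and above Tc; return the slope change at Tc."""
    lo = (T >= T_low) & (T <= Tc)
    hi = (T >= Tc) & (T <= T_high)
    p_lo = np.polyfit(T[lo], R[lo], 2)
    p_hi = np.polyfit(T[hi], R[hi], 2)
    return np.polyval(np.polyder(p_hi), Tc) - np.polyval(np.polyder(p_lo), Tc)

# Toy reflectivity: smooth T^2 background plus a kink at Tc = 88 K.
T = np.arange(10.0, 300.0, 2.0)
R = 0.95 - 1e-7 * T**2 + 2e-5 * np.where(T < 88.0, 88.0 - T, 0.0)
print(slope_difference(T, R))   # approximately +2e-5 per kelvin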
Beyond the Simple Copper(II) Coordination Chemistry with Quinaldinate and Secondary Amines.

Copper(II) acetate has reacted in methanol with quinaldinic acid (quinoline-2-carboxylic acid) to form [Cu(quin)2(CH3OH)]·CH3OH (1) (quin− = the anionic form of the acid) with quinaldinates bound in a bidentate chelating manner. In air, complex 1 gives off methanol and binds water. The conversion was monitored by IR spectroscopy. The aqua complex has shown facile substitution chemistry with the alicyclic secondary amines pyrrolidine (pyro) and morpholine (morph). trans-[Cu(quin)2(pyro)2] (2) and trans-[Cu(quin)2(morph)2] (4) were obtained in good yields. The morpholine system produced a by-product, trans-[Cu(en)2(H2O)2](morphCOO)2 (5) (morphCOO− = morphylcarbamate), a result of the copper(II) quinaldinate reaction with ethylenediamine (en), an inherent impurity in morpholine, and of the amine reaction with carbon dioxide. (pyroH)[Cu(quin)2Cl] (3) forms on the recrystallization of [Cu(quin)2(pyro)2] from dichloromethane, confirming a reaction between the amine and the solvent. Similarly, a homologous amine, piperidine (pipe), and dichloromethane produced (pipeH)[Cu(quin)2Cl] (11). The piperidine system afforded both mono- and bis-amine complexes, [Cu(quin)2(pipe)] (6) and trans-[Cu(quin)2(pipe)2] (7). The latter also exists in solvated forms, [Cu(quin)2(pipe)2]·CH3CN (8) and [Cu(quin)2(pipe)2]·CH3CH2CN (9). Interestingly, only the piperidine system has experienced a reduction of copper(II). The involvement of the amine in the reduction was unambiguously confirmed by the identification of a polycyclic piperidine compound 10, 6,13-di(piperidin-1-yl)dodecahydro-2H,6H-7,14-methanodipyrido[1,2-a:1′,2′-e][1,5]diazocine.

Introduction

The coordination chemistry of copper is very rich because of its biological roles [1-3] and diverse practical applications, e.g., as catalysts, fungicides, and pesticides [4]. The chemistry of no other transition metal surpasses that of divalent copper with N- and O-donor ligands [4]. The reactivity of copper in its metalloenzymes and proteins rests mostly on its redox-active character, as they are involved in electron transfer, oxygen transport, and the oxidation of important substrates such as amines, L-ascorbic acid, galactose, etc. [4,5]. Under oxidizing conditions in the cell, copper exists as Cu2+, whereas reducing conditions favor the Cu+ form. The different behavior of the two states may be traced to differences in their hardness: whereas the Cu2+ ion is classified as a borderline Lewis acid, the reduced counterpart, the Cu+ ion, is an exemplary soft acid [6]. Hard N- and O-donors dominate the coordination chemistry of the divalent state, whereas the Cu+ ion favors ligands with soft donor atoms, such as phosphorus, sulfur, or iodine. Because of its spherically symmetric d10 configuration, the Cu+ ion lacks any ligand field stabilization energy (LFSE) and has, as a consequence, no preference for a specific coordination environment [5]. By contrast, the Cu2+ ion usually displays a distorted octahedral environment with four tightly bound donors in a plane and two occupying more distant sites above and below this plane. In the limit, the elongation of the axial bonds often results in a square-planar geometry [5]. The described distortions of the coordination polyhedra are due to the operating Jahn-Teller effect, a characteristic of a metal complex with the d9 electron configuration [7].
The ligands and their spatial distribution can induce changes in the reduction potential of the metal ion and thereby influence its oxidation state [8]. Research on Cu2+ and Cu+ model systems with a common ligand environment can give information about the metal's mode of action at the active sites of enzymes [8]. Our current study involves copper(II) complexes with quinaldinate and alicyclic secondary amines as auxiliary ligands. Quinaldinic acid, with the rational name quinoline-2-carboxylic acid (shown in Scheme 1), is a biological molecule, mostly known for its role in tryptophan metabolism [9]. Its anionic form, abbreviated as quin−, readily forms complexes with many transition metal ions and has, therefore, found use in their quantitative gravimetric determination [10,11]. Structurally characterized complexes with quinaldinate reveal a bidentate chelating manner, through the pyridine nitrogen and a carboxylate oxygen, as the prevailing coordination mode [12-22]. Examples of a bridging mode through two or all three donor atoms are not very common [23-26]. The quinaldinate was introduced into our reaction system through the [Cu(quin)2(H2O)] starting material, one of the rare copper(II) quinaldinate compounds known prior to this report [27]. The choice of auxiliary ligands, pyrrolidine, morpholine, and piperidine (Scheme 1), was governed by their ability to bind uniformly in a monodentate manner. The group shares a highly basic character and complete miscibility with a large number of solvents. Besides, their NH moiety makes them good hydrogen bond donors. Such an ensemble of ligands is of interest also from the viewpoint of crystal engineering and chemical recognition. Herein, we present the products of the [Cu(quin)2(H2O)] reactions with the selected amines. The compounds were characterized by X-ray structure analysis on single crystals and by infrared vibrational spectroscopy. The series of novel compounds may be divided into two groups. The first comprises the desired amine complexes, [Cu(quin)2(pyro)2] (2), [Cu(quin)2(morph)2] (4), [Cu(quin)2(pipe)] (6), [Cu(quin)2(pipe)2] (7), [Cu(quin)2(pipe)2]·CH3CN (8), and [Cu(quin)2(pipe)2]·CH3CH2CN (9). The second group, where the products of several unexpected reactions are assembled, illustrates both the reactivity of copper(II) and, as a consequence, the unpredictability of the reaction outcome. [Cu(quin)2Cl]−, a complex with chloride, was obtained as a pyrrolidinium (3) or a piperidinium (11) salt when the corresponding amine reacted with dichloromethane, used as a solvent. trans-[Cu(en)2(H2O)2](morphCOO)2 (5) (en = ethylenediamine) was the result of the quinaldinate displacement by the morpholine impurity, ethylenediamine. The counter-anion of 5 was the product of yet another inadvertent reaction: the one between morpholine and carbon dioxide. The piperidine reaction system presents itself as the most enigmatic of all. It yielded, apart from three metal complexes, a polycyclic piperidine compound 10, which was the product of a complicated electron transfer between the amine and Cu2+. It is to be noted that copper(II)-assisted transformations of organic substrates often accompany the coordination chemistry of this metal ion [28-35]. Our study provides the basis for ongoing work on the reduction of copper(II) with piperidine and its homologs.
Synthetic Considerations

Based on a previous study of related zinc(II) complexes with quinaldinate [36], [Cu(quin)2(H2O)] was expected to undergo a straightforward substitution of water with the amine ligand. Under fairly mild conditions, the displacement reactions should result in complexes with one or two amine ligands, [Cu(quin)2(amine)] and [Cu(quin)2(amine)2]. With the latter composition, two geometric isomers are possible. To our surprise, the behavior of the three chosen amines, in spite of their high likeness, was profoundly different. Unless stated otherwise, the described reactions were carried out at ambient conditions.

In the case of morpholine, the virtually insoluble [Cu(quin)2(morph)2] (4), with a trans disposition of ligands, was the first isolated solid. The violet color of the second crystalline phase ruled out its composition being any of the desired heteroleptic copper(II) complexes with quinaldinate and amine; complexes containing these two ligands are typically blue to green. The results of the X-ray structure analysis took us by surprise, as the compound was identified as trans-[Cu(en)2(H2O)2](morphCOO)2 (5). Two of its components called for further explanation. The first was morphylcarbamate, the counter-anion, which forms upon the morpholine reaction with carbon dioxide. This reaction is hardly without precedent [37-41]:

morph + CO2 → morphCOOH
morphCOOH + morph → morphH+ + morphCOO−

The first product, 1-morpholinecarboxylic acid, reacts with the excess of morpholine and a salt is formed. Owing to the rich electron density on its oxygen atoms, the carbamate ion was reported to be unstable. It can attain stability through the delocalization of electrons via hydrogen bonds [40]. The same mechanism is apparently at work in the case of [Cu(en)2(H2O)2](morphCOO)2 (5), where the carboxylate moiety participates in hydrogen bonding interactions. The other, more intriguing component of 5 is the ligand ethylenediamine. The most immediate question concerns its source. Ethylenediamine was identified as a common impurity in morpholine, with its content in the 0.006-0.081% w/w range, depending upon the supplier [42]. In a control experiment with recently acquired morpholine, our results were reproduced. The filtrate, which gave single crystals of both [Cu(quin)2(morph)2] (4) and trans-[Cu(en)2(H2O)2](morphCOO)2 (5), contained a very small amount of copper(II). The amount of ethylenediamine was apparently large enough to replace the bound quinaldinates. The end result of the competition between the two chelating ligands provides yet more evidence of the huge affinity of copper(II) towards ethylenediamine, which is distinguished by its high conformational flexibility, a property that quinaldinate lacks.

The pyrrolidine reaction system behaved in the predicted manner, as it produced the trans isomer of [Cu(quin)2(pyro)2] in good yield. The crystalline solid, labelled 2, is poorly soluble in the majority of solvents, with the exception of dichloromethane and chloroform. Its recrystallization from dichloromethane inadvertently resulted in (pyroH)[Cu(quin)2Cl] (3), a product which confirms that pyrrolidine, initially present as a ligand in the copper(II) complex, reacted with the solvent.
With dichloromethane, commonly employed as a solvent in synthesis and extraction processes, undesired reactions with primary and secondary aliphatic amines have been observed previously, in particular when the solutions were left to stand for extended periods [43-45]. Later studies confirmed that pyrrolidine reacts rapidly with dichloromethane at room temperature, the major products being 1,1′-methylenebis(pyrrolidine), known as an aminal, and pyrrolidinium hydrochloride. Although the aminal was not isolated in our case, its formation finds ample evidence in the literature [43]. The in situ formed chloride, which has many times proved to be competitive with other monodentate ligands, coordinated to copper(II).

Piperidine was expected to react in an analogous manner [46]. However, the reaction that yielded (pipeH)[Cu(quin)2Cl] (11) differs from the one that afforded 3. Dichloromethane was added directly to the reaction mixture that normally produces copper(II) quinaldinate complexes with piperidine (please see the Materials and Methods). The reason for its introduction lies in the in situ formation of a sufficient amount of chloride that would allow its coordination to copper(I), the reduced form of the metal, and, hopefully, crystallization of the cuprous complex. As will be shown presently, the piperidine/acetonitrile/Cu(II) mixture experiences a reduction of the metal ions. Instead, a small number of crystals of (pipeH)[Cu(quin)2Cl] (11) was isolated from an orange-yellow solution that retained its color when exposed to air. The color and its stability speak of the presence of tetrahedral [CuCl4]2−, which can be orange [4,47]. A huge excess of chloride makes a complete substitution in the copper(II) coordination sphere a highly probable process. Unfortunately, the presence of the [CuCl4]2− ions introduces an ambiguity about the copper(II) reduction: the colors of the reduced solution and of the one containing the tetrahedral [CuCl4]2− ions are very similar. Irrespective of the actual situation, our goal, the isolation of a copper(I) complex, was not achieved by this synthetic strategy.

The prototypic reaction of the copper(II) starting material with piperidine in acetonitrile requires further discussion. Within a few minutes of the complete consumption of [Cu(quin)2(H2O)], a blue solid, identified as [Cu(quin)2(pipe)] (6), started to precipitate. If no care is exercised, the transient mono-substituted complex reacts further with the amine to give trans-[Cu(quin)2(pipe)2], which crystallizes in two forms, a non-solvated one or with acetonitrile solvent molecules. Pure [Cu(quin)2(pipe)2] (7), which has a more condensed structure compared to the channel-like structure of [Cu(quin)2(pipe)2]·CH3CN (8), can be obtained reproducibly under solvothermal conditions. The result is in line with the expectation that more forcing conditions afford a denser structure with a lower level of solvation [48]. The most remarkable characteristic of the piperidine system is a change of color from green to deep red-brown that sets in after 3 to 4 days of stirring. On exposure to air, the color promptly changed back to green. The first change can be explained by the reduction of Cu2+ to Cu+, and the second by the re-oxidation of Cu+ with elemental oxygen. Our attempts at the isolation of the reduced metal species were not met with success.
Apart from the addition of dichloromethane (see above), the concentration of the copper starting material was also increased. The only solid that precipitated from the red-brown solution, kept in a closed container, was a mixture of crystalline [Cu(quin)2(pipe)2] (7) and [Cu(quin)2(pipe)2]·CH3CN (8). In another attempt, a different starting material, CuCl2·2H2O, was used. Although this change allowed the isolation of a highly crystalline mono-piperidine complex 6, the behavior of the modified system was essentially the same. Another notable feature of the piperidine system is that it acquires a distinct odor of ammonia gas. In one instance only, colorless crystals of 10 (Scheme 2) grew from a reaction mixture that was kept at 5 °C for 2 months. Compound 10 lacks complete characterization, as it was not available in pure form and a reproducible bulk synthesis was not achieved. Its true identity was shown by X-ray diffraction analysis on a single crystal. The polycyclic compound 10 consists of four whole piperidine rings, fused together with five methine/methylene carbon atoms. Compound 10 gives undisputed proof of the piperidine involvement in the one-electron reduction of copper(II).

The reduction of Cu2+ with piperidine in acetonitrile has been reported previously [49]. An electron transfer from the nitrogen lone pair to the Cu2+ ion was confirmed by ESR spectroscopy as the initial step in the reaction [50], yet at the time a detailed knowledge of the free radical formed could not be obtained. Interestingly, no free radicals could be detected in a similar reaction of pyrrolidine. The amine radical cations, which form upon electron loss from the parent amine, are known to display several modes of reactivity, which include C-C cleavage, hydrogen atom abstraction, the formation of reactive iminium ions, etc. [51]. The structure of 10 implies that a series of reactions, some involving radical species, was at work. Their complexity could be the answer as to why we did not succeed, after numerous attempts, at repeating the preparation of 10.

An important question also pertains to the nature of the reduced metal species. Failure at its isolation suggests that our system lacks ligands that stabilize the reduced state. The Cu+ ion is known to prefer soft ligands such as phosphines or iodide [6]. Furthermore, complexes with saturated N-donor ligands, as exemplified by piperidine, are generally less stable than ones with unsaturated/aromatic ligands [4]. Contrary to expectations, the literature reveals several copper(I) complexes with piperidine and none with quinaldinate, as demonstrated by [Cu(pipe)2Cl] [52], [{Cu(PPh3)(pipe)X}2] (X− = halide) [53] and [Cu4(pipe)4I4] [54,55]. Pertinent to our discussion is a dark red compound with polymeric [Cu2I3]− ions, which crystallized with {(Hquin)2H}+ counter-cations [56]. The acidic medium during its synthesis prevented the formation of quinaldinate and its subsequent coordination to copper(I). Its color was explained in terms of a charge-transfer electronic transition tailing into the visible region. The role of the solvent, acetonitrile, in the reduction should not be overlooked. Acetonitrile has been reported to effectively solvate the Cu+ ion, thereby making it more stable towards disproportionation or oxidation with oxygen [57]. It should be emphasized that our pyrrolidine and morpholine reaction mixtures did not show color changes that would suggest a reduction of the metal ions.
Solid State Structures

Relevant structural features of the novel compounds are given first, whereas comparison with related literature examples is at the end of this section. The methanol complex, [Cu(quin)2(CH3OH)], crystallizes with one solvent molecule of methanol per formula unit. Solvent molecules render the turquoise crystals of [Cu(quin)2(CH3OH)]·CH3OH (1) unstable: the crystals lose their luster rapidly when not in contact with the mother liquor.

The copper(II) ion of [Cu(quin)2(CH3OH)] features a five-coordinate environment that consists of two bidentate N,O-chelating quinaldinates and a methanol molecule (Figure 1). With a chelating coordination of quinaldinate, a five-membered metallacycle is formed. The N2O3 donor set defines the vertices of a square pyramid with the quinaldinate donors in its basal plane and the methanol oxygen occupying its axial site. The copper(II) ion is lifted ca. 0.17 Å above the basal plane towards the methanol oxygen. The τ parameter value, 0.05, agrees well with the square-pyramidal environment [58]. As shown in Figure 1, the relative disposition of the quinaldinates is trans. A dihedral angle of 21.01(3)° is formed between the best planes of the quinaldinates. The longest bond, i.e., the copper-to-methanol bond at 2.2806(14) Å, is a result of the Jahn-Teller effect. It is slightly shorter than 2.33 Å, the average value observed for Cu-alcohol bonds [59].

Both types of methanol molecules participate in hydrogen bonding interactions (Figure 2). The methanol ligand is hydrogen-bonded to the carboxylate of an adjacent complex molecule. A centrosymmetric dimer, {Cu(quin)2(CH3OH)}2, is thereby formed. To this dimer, two solvent molecules of methanol are attached via O-H···COO− hydrogen bonds. The dinuclear assemblies pack in the crystal lattice in an interesting fashion: all the quinaldinates are nearly parallel and are coplanar with the (1 0 −1) lattice plane. π···π stacking interactions may be recognized between the aromatic planes.

The structures of [Cu(quin)2(pyro)2] (2), [Cu(quin)2(morph)2] (4) and [Cu(quin)2(pipe)2] (7) are similar: the coordinatively saturated copper(II) centre is six-coordinate, with two bidentate chelating quinaldinates and two amine ligands in a relative trans disposition (Figures 3 and 4). The N4O2 donor set occupies the vertices of a distorted octahedron. A frequently encountered "4 + 2" pattern in the coordination bonds may be observed, with the longest bonds to the quinaldinate nitrogen atoms. As the bite angle of the chelating ligand is limited, the distortion is restricted. The complex molecules of 2, 4, and 7 are centrosymmetric. Interestingly, in the structures of all, the asymmetric unit contains two halves of two complex molecules. For each pair, almost the same metric parameters are displayed, with minor differences existing in the orientation of the amine ligands.
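For readers unfamiliar with the τ descriptor quoted above (0.05 for complex 1 here, 0.36 for complex 6 below), a minimal sketch of its calculation follows; the angle values in the example are illustrative placeholders, not data from this work.

```python
def tau5(angles_deg):
    """Addison tau5 descriptor for a five-coordinate complex.

    tau5 = (beta - alpha) / 60, with beta >= alpha the two largest L-M-L
    angles in degrees: 0 for an ideal square pyramid, 1 for an ideal
    trigonal bipyramid.
    """
    beta, alpha = sorted(angles_deg, reverse=True)[:2]
    return (beta - alpha) / 60.0

# Two nearly equal trans basal angles give a nearly ideal square pyramid:
print(round(tau5([172.0, 169.0, 91.0, 90.0, 88.0]), 2))  # -> 0.05
```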
The similarity in the complex molecules of 2, 4, and 7 is reflected in their packing arrangements. For all, the complex molecules are hydrogen-bonded via N-H···COO− interactions into infinite chains. Within such a chain, each complex molecule forms four hydrogen bonds with two adjacent molecules. A section of a chain of hydrogen-bonded [Cu(quin)2(pyro)2] molecules in 2 is shown in Figure 5.

The [Cu(quin)2(pipe)2] complex also crystallizes with solvent molecules of acetonitrile or propionitrile, [Cu(quin)2(pipe)2]·CH3CN (8) and [Cu(quin)2(pipe)2]·CH3CH2CN (9), their structures being isomorphous. The metric parameters of the complex molecules in 8 and 9 are essentially the same and do not differ from those of 7. The supramolecular connectivity is similar to that in 7: the [Cu(quin)2(pipe)2] molecules are linked in all three structures via N-H···COO− hydrogen bonds into chains. A close survey reveals an important difference between the chain structures of 7 and 8 (or 9). In 7, the molecules constituting the chains are in two different orientations. Conversely, the molecules in 8 are fully aligned.

Dissimilarities in the chains impart different packing motifs. The packing of chains in 7 is dense, with moderately short π···π stacking interactions [an arene···arene type, Cg···Cg = 3.6889(11) Å, dihedral angle = 0.02(9)°, interplanar distance = 3.4213(8) Å, and offset angle = 22.0°] occurring among adjacent chains [60]. In the structures of 8/9, there are no π···π stacking interactions with centroid-centroid distances below 4.0 Å. Furthermore, their packing is such as to produce hydrophobic channels that accommodate the solvent molecules of acetonitrile or propionitrile. The channels provide a facile escape route for the solvent molecules when the crystals are taken out of the mother liquor.

We crystallized another copper(II) compound with piperidine, [Cu(quin)2(pipe)] (6). The [Cu(quin)2(pipe)] complex features a five-coordinate metal environment, which consists of two bidentate chelating quinaldinates and a single piperidine ligand (Figure 6). The analysis of the N3O2 coordination sphere by the method of Addison et al. gave a τ descriptor equal to 0.36 [58]. The coordination polyhedron takes the appearance of a distorted square pyramid with the N,O-donors of one quinaldinate, the piperidine nitrogen and an oxygen of the other quinaldinate in its basal plane, and the remaining quinaldinate nitrogen at its apex. The coordination bonds differ from those determined for the six-coordinate complex, [Cu(quin)2(pipe)2]. With a smaller number of donors in 6, the bonds are shorter. The greatest discrepancy is observed in the Cu-N(quin−) bonds. In [Cu(quin)2(pipe)2], the Cu-N bonds exceed 2.4 Å, whereas in the five-coordinate [Cu(quin)2(pipe)] these bonds are significantly shorter, occupying the 2.1065(13) to 2.2635(14) Å interval. The longest one, a result of the Jahn-Teller distortion, is formed to the nitrogen at the axial site. With only five coordination bonds in the [Cu(quin)2(pipe)] complex, the quinaldinates adopt a more twisted conformation. The non-planarity of the five-membered chelate ring may be given by the Cu-Nring-C-CCOO torsion angle, which in 6 amounts to 12.94(16)° and 14.82(16)°. The torsion angles in [Cu(quin)2(pipe)2] are significantly smaller; e.g., the largest one is 7.78(14)° in 9. For 6, no short intermolecular interactions may be observed among the [Cu(quin)2(pipe)] molecules; namely, the N-H···COO− contact exceeds 3.1 Å. This comes as a surprise because, with the amine coordination to copper(II), the partial positive charge of the amine hydrogen is increased and the N-H moiety is made a better hydrogen bond donor [61]. In all piperidine complexes, the amine adopts a chair conformation with the NH hydrogen in the axial position.

The Cu-O bond lengths are very similar throughout the series (Table 1). Conversely, the Cu-N bonds vary a lot, starting from the shortest value of 1.9865(17) Å. The amine-to-copper(II) bond lengths in our compounds are rather similar. In the series of piperidine complexes, the bond in the five-coordinate [Cu(quin)2(pipe)] (6) is slightly shorter than the ones in the [Cu(quin)2(pipe)2] compounds 7, 8, and 9. On the whole, the bonds compare well with those in related compounds. Rare copper(II) complexes with pyrrolidine show the bond length to be dependent upon the location of the donor. In [Cu(Ibu)2(pyro)2]·H2O (Ibu− = a deprotonated form of ibuprofen), with a square-planar distribution of donors, the Cu-N bond is 1.998(2) Å [63].
The pyrrolidine ligands of [(pyro)3Cu(µ2-OH)2Cu(pyro)3]2+, which occupy equatorial sites in a distorted square pyramid, are at 2.035(1)-2.062(1) Å, whereas the apical one binds at 2.329(1)-2.344(1) Å [64]. Similarly, the Cu-N bond in related copper(II) piperidine complexes can be as short as 2.028(2) Å in the case of a square-planar geometry [65], whereas it lengthens to 2.259(4) Å in a square-pyramidal species with the amine located at the axial site [66]. Morpholine typically binds in a monodentate manner through nitrogen, as exemplified by the [Cu(diketonate)(morph)2] complexes.

The complex cation of trans-[Cu(en)2(H2O)2](morphCOO)2 (5) features a six-coordinate environment, which comprises two bidentate chelating ethylenediamine ligands and two water molecules (Figure 8). The distribution of the N4O2 donors best resembles an elongated octahedron, whose square plane is defined by the ethylenediamine donor atoms, with the water oxygen atoms occupying its axial sites. The Cu-N bond lengths are in the 2.0142(15)-2.0200(15) Å range, whereas the Cu-O bond is 2.5384(14) Å. Since the complex is centrosymmetric, one ethylenediamine ligand is in a δ and the other in a λ configuration [5]. The overall metric parameters of the copper(II) complex in 5 do not differ from those in other trans-[Cu(en)2(H2O)2]2+ compounds [69-72].

The morphylcarbamate, the counter-anion of 5, is in a chair conformation (Figure 9). Its nitrogen atom has a partial sp2 character, with the exterior CNC angles being close to 120°. Due to the ring constraints, the interior angle is smaller, 114.37(15)°. The NC3 group is not strictly planar, as its nitrogen atom lies ca. 0.17 Å out of the plane of the carbon atoms. A recent survey of the CSD revealed only three compounds with morphylcarbamate ions [38,40,73]. The metric parameters of the morphCOO− ion of 5 are very similar to those.
The cyclic compound 10, 6,13-di(piperidin-1-yl)dodecahydro-2H,6H-7,14-methanodipyrido[1,2-a:1′,2′-e][1,5]diazocine, contains four nitrogen atoms, which are part of six six-membered rings (Figure 10). Four rings are joined together in a chain-like manner. The remaining two rings are attached via their nitrogen atoms, N2 and N3, to the fused part. The joined rings share two nitrogen atoms among them, N1 and N4. The fused part of the molecule may be viewed as two piperidine rings linked with five carbon atoms. All nitrogen atoms are in trigonal-pyramidal environments, whereas the carbon atoms are sp3-hybridized and in tetrahedral environments. Six carbon atoms, all belonging to the central two rings, are chiral. Five rings are in the usual chair conformation, whereas one, the internal N1 ring, is in the boat conformation.

The dimensions of the heteronuclear rings were compared to the dimensions of piperidine in [Cu(quin)2(pipe)] (6). The N2 and N3 rings display somewhat shorter C-N bonds, i.e., 1.447(2)-1.463(2) Å vs. 1.488(2)-1.489(2) Å observed for 6. The same observation pertains to the peripheral N1 and N4 rings. Significant lengthening was observed for the C-C bonds in the inner two rings; e.g., the longest bond amounts to 1.559(2) Å for 10 vs. 1.526(3) Å for 6. In the solid state structure of 10, the molecules are held together by weak intermolecular interactions.

Infrared Spectra

The infrared spectra of the title compounds are dominated by the absorptions of the quinaldinate ligands. Both the positions and the intensities of the bands that originate from the normal modes of the quinaldinates are very similar. Without exception, all spectra show a set of four absorption peaks with medium to strong intensity at 1568, 1513, 1461, and 1435 cm−1, as exemplified by the spectrum of [Cu(quin)2(CH3OH)]·CH3OH (1). By far the most intense bands pertain to the νas(COO−) and νs(COO−) absorptions. Their positions in the spectra of our compounds are listed in Table 2. These are in good agreement with the assignments reported previously for related quinaldinate complexes of other transition metals, as exemplified by 1642 and 1366 cm−1 found for [Zn(quin)2(1-methylimidazole)2] [74]. The νas(COO−) absorption occupies, in view of the great similarity of the compounds, a surprisingly wide range, from 1676 cm−1 observed for [Cu(quin)2(pipe)] (6) to 1611 cm−1 for (pipeH)[Cu(quin)2Cl] (11). The spectra of 7, 8, and 9, compounds that contain the [Cu(quin)2(pipe)2] complex, feature the νas(COO−) absorption at 1624-1626 cm−1.

Although the νas(COO−) frequency appears to be sensitive to the immediate environment of the Cu2+ ion, a more direct correlation exists with the involvement in hydrogen bonds. The highest frequency is observed for [Cu(quin)2(pipe)] (6), the only compound in the series with the carboxylate moiety not engaged in strong intermolecular interactions. Large splitting values Δ, i.e., the difference between the νas(COO−) and νs(COO−) frequencies, in the 255-335 cm−1 range in the spectra of our compounds are as expected for a monodentate carboxylate coordination [75]. The presence of the amine ligands is confirmed by an absorption band of medium intensity at ca. 3200 cm−1, whose origin lies in the ν(N-H) vibration. In addition, several weaker bands in the 2990-2850 cm−1 range, due to the stretching vibrations of the aliphatic C-H bonds, may be seen. The position of the ν(N-H) band in the piperidine compounds 6-9 shows a correlation with the lengths of the intermolecular contacts that involve the NH group.
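Before turning to the individual band positions, the splitting criterion used above can be illustrated with a trivial calculation; the νs value in the example below is an assumed placeholder, not a transcription of Table 2.

```python
def carboxylate_splitting(nu_as, nu_s):
    """Delta = nu_as(COO-) - nu_s(COO-) in cm-1; large Delta points to monodentate binding."""
    return nu_as - nu_s

# nu_as = 1676 cm-1 (reported here for 6) paired with an assumed nu_s = 1341 cm-1
# gives Delta = 335 cm-1, at the top of the 255-335 cm-1 range quoted above.
print(carboxylate_splitting(1676, 1341))  # -> 335
```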
[Cu(quin)2(pipe)] (6) reveals a band at 3232 cm−1, at higher energy when compared to ca. 3206 cm−1 (observed for 8 and 9) or 3170 cm−1 (7). The NH moiety in 6 does not participate in stronger intermolecular interactions. The opposite is true for 7, 8, and 9, with NH engaged in a hydrogen bonding interaction with the carboxylate oxygen.

The spectra of the amine salts are markedly different from those of the parent amine ligands. In the spectra of the pyrrolidinium and piperidinium salts of the [Cu(quin)2Cl]− ion, the region of both the ν(N-H) and ν(C-H) absorptions is masked by two broad bands centred at ca. 2950 and 2460 cm−1. Their intensity and shape reflect an extensive hydrogen bonding that involves the NH2+ group. In addition, the spectra of both 3 and 11 reveal a peak that protrudes from the νas(COO−) band at ca. 1590 cm−1. The latter appears in the region of the NH2+ bending vibrations [76]. The infrared spectrum of [Cu(quin)2(pipe)2]·CH3CH2CN (9) revealed a weak band at 2244 cm−1, which can be attributed to the ν(C≡N) of the lattice propionitrile. The absence of an absorption peak at this wavenumber in the spectrum of [Cu(quin)2(pipe)2]·CH3CN (8) is consistent with the rapid loss of acetonitrile on removing the crystalline solid from the mother liquor.

Conversion of [Cu(quin)2(CH3OH)]·CH3OH (1) into the Aqua Complex

The conversion was monitored by IR spectroscopy (Figure 11). The infrared spectrum of [Cu(quin)2(CH3OH)]·CH3OH (1) features absorption bands at 3276, 2929, 2828, 2807, 1042, and 1026 cm−1, whose origin lies in the vibrations of methanol [76]. Their intensity rapidly diminished with time, confirming the loss of the weakly bound methanol. A 5-min exposure of the sample to the air resulted in a spectrum with the intensity of the methanol peaks reduced by ca. 30%. With further exposure, the methanol peaks completely disappeared. Instead, a broad band at ca. 3300 cm−1, ascribed to the ν(O-H) vibrations of water, started to gain in intensity. In addition, a shift of the νas(COO−) band from 1645 cm−1 for [Cu(quin)2(CH3OH)]·CH3OH (1) to 1631 cm−1 for the product, [Cu(quin)2(H2O)], could be observed. The different νas(COO−) frequencies of the methanol and aqua complexes are yet another demonstration of the influence of hydrogen bonding on the position of the νas(COO−) bands, described in the preceding section. In both compounds, the neutral O-donor is engaged in intermolecular interactions with the carboxylate moiety. In the aqua complex [27], the corresponding O···O contacts are ca. 0.1 Å shorter than in the methanol complex. With the stronger hydrogen bonds in [Cu(quin)2(H2O)], the νas(COO−) frequency is shifted to lower energy. The conversion into the aqua complex was completed after one hour.

An explanation for the facile conversion was sought in the solid state structures of the methanol and aqua complexes. Although no apparent reasons could be disclosed, certain structural features drew our attention. In spite of the likeness of the O-ligands, the overall structures are markedly different. The complexes have a different spatial distribution of the ligands in the first coordination sphere of the metal ion. The quinaldinates of [Cu(quin)2(CH3OH)]·CH3OH (1) are nearly parallel, whereas those in [Cu(quin)2(H2O)] are at an angle of approximately 58° [27]. The different overall shapes of the complex molecules impart different packing arrangements.
Whereas the solid state structure of 1 consists of dimeric {[Cu(quin)2(CH3OH)]2(CH3OH)2} assemblies with their quinaldinates aligned in parallel and held together by weak π···π stacking interactions, the [Cu(quin)2(H2O)] molecules are linked by stronger intermolecular contacts, i.e., the O-H···COO− hydrogen bonds, into an infinite 2D array. The main stimulus for the conversion probably lies in the fact that water fulfils the role of a stronger ligand and of a better hydrogen bond donor than methanol. The end result is the aqua complex with a very stable structure. Weak intermolecular forces and the apparent ease of the spatial rearrangement of the quinaldinate ligands in 1 must also be recognized as contributing factors.

Materials and Methods

General

All manipulations and procedures were conducted in air. The first reactions were carried out with an old batch of piperidine and morpholine in originally sealed bottles, sold 30 years ago by Ventron. During the course of this study, new chemicals were purchased from Sigma Aldrich. With the exception of acetonitrile, the chemicals were used as received. Acetonitrile was dried over molecular sieves, following the published procedure [77]. The IR spectra were recorded from 4000 to 400 cm−1 with a Bruker Alpha II FT-IR instrument. The solid samples were analyzed on the single reflection ATR accessory. Elemental analyses (C, H, N) were performed by the in-house facility on a Perkin-Elmer 2400 II instrument. The thermal analysis of [Cu(quin)2(CH3OH)]·CH3OH (1) was performed on a Mettler Toledo TG/DSC 1 instrument. Crystals of 1 were removed from the mother liquor, placed for a few seconds on a filter paper and then into a platinum crucible. Their mass was 4.1823 mg. The carrier gas was argon at a flow rate of 50 mL min−1. The sample was heated from 20 to 800 °C at a rate of 10 °C min−1. The baseline was subtracted. PXRD data for [Cu(quin)2(H2O)], our starting material, were collected on a PANalytical X'Pert PRO MD diffractometer using Cu-Kα radiation (λ = 1.5406 Å).

X-ray Structure Determination

Single crystal X-ray diffraction data were collected on an Agilent SuperNova diffractometer with a molybdenum (Mo-Kα, λ = 0.71073 Å) or copper (Cu-Kα, λ = 1.54184 Å) micro-focus sealed X-ray source at 150 K. Each crystal was placed on the tip of a glass fiber using silicone grease and then mounted on the goniometer head. Data processing was performed with CrysAlis PRO [78]. The structures were solved with the Olex software [79] using ShelXT [80] and refined by the least squares methods in ShelXL [81]. Anisotropic displacement parameters were determined for all non-hydrogen atoms. Details on the second polymorph of [Cu(quin)2(H2O)] were given in the preceding section.
The solvent molecules in [Cu(quin)2(pipe)2]·CH3CN (8) and [Cu(quin)2(pipe)2]·CH3CH2CN (9) were disordered over the inversion center. For 9, the disorder was successfully resolved using the PART −1 instruction. In the case of 8, the disorder could not be modelled and the contribution of the disordered solvent to the scattering factors was, therefore, accounted for by the SQUEEZE program [82]. In all structures, the NH or NH2+ hydrogen atoms and the OH hydrogen atoms of methanol and water were located from difference Fourier maps and refined with isotropic displacement parameters. The remaining hydrogen atoms were added in calculated positions. The programs Platon [83], Ortep [84], and Mercury [85] were used for crystal structure analysis and the preparation of figures. Crystallographic data are collected in Tables 3 and 4. CCDC … (9), 1984551 (10) and 1984552 (11). These data can be obtained free of charge via http://www.ccdc.cam.ac.uk/conts/retrieving.html (or from the CCDC, 12 Union Road, Cambridge CB2 1EZ, UK; Fax: +44 1223 336033; E-mail: deposit@ccdc.cam.ac.uk).

Conclusions

Reactions of copper(II) quinaldinate under mild conditions with the selected alicyclic secondary amines, pyrrolidine, morpholine, and piperidine, produced the desired amine complexes with the [Cu(quin)2(amine)] or trans-[Cu(quin)2(amine)2] compositions. The {Cu(quin)2} structural fragment underwent substitution reactions with ethylenediamine, an impurity in morpholine, producing trans-[Cu(en)2(H2O)2](morphCOO)2 with morphylcarbamate counter-anions. The morphCOO− ions and the anionic complexes of (pyroH)[Cu(quin)2Cl] and (pipeH)[Cu(quin)2Cl] give evidence of the amine reactivity towards carbon dioxide and dichloromethane, respectively. Both interfering reactions are known. Despite the high similarity of the amines used, the behavior of piperidine towards copper(II) differed. In acetonitrile, the metal ions were reduced. The active role of the amine in the electron transfer was confirmed by the X-ray structure analysis of a polycyclic piperidine derivative, not known prior to this work. Its formation, which probably involves a series of radical reactions, invites further investigation. Such studies are underway.
Mapping of Themes Pertaining to Operations Management: a Refined Analysis Based on the Perceptions of Researchers, Lecturers and Practitioners

An article published in 2013 in Revista de Gestão (REGE), an academic journal of the University of Sao Paulo, proposed a mapping of Operations Management themes based on the editorial space provided in major journals and conference proceedings in the area. Based on that proposal, the current study conducted a survey to capture the importance assigned to those themes by researchers, lecturers and practitioners and how they categorized the themes into broader groupings. A factor analysis was performed with the data collected by means of the survey, and several statistical tests were also carried out in order to assess the strength of the constructs and to confirm the dimensions proposed in the referred mapping of Operations Management themes, allowing for its refinement. The factor analysis resulted in nine factors, seven of which very closely resemble the constructs presented in the previous paper. Thus, the results obtained herein confirm most of the previously obtained mapping, providing a further step in the discussion of the themes that are relevant to the area of Operations Management.

THE PRACTICE OF MAPPING AND CLASSIFYING STUDY THEMES

Surveys on scientific research require some form of thematic classification that allows the themes covered by the studies published in a particular area to be mapped. Many authors prefer to use pre-existing categorizations as a basis for mapping the themes within their areas of study.

The first issue of Revista de Administração de Empresas (RAE) of 2013 brought the results of a forum on the Brazilian scientific production in the field of Business Administration for the first decade of this century (BERTERO et al., 2013). Five papers were published there that, in addition to other findings and recommendations, map the themes discussed in the papers published in the country's scientific journals for the following areas: Organizational Behavior (SOBRAL; MANSUR, 2013), Human Resource Management (MASCARENHAS; BARBOSA, 2013), Finance (LEAL et al., 2013), Operations Management (PAIVA; BRITO, 2013) and Marketing (MAZZON; HERNANDEZ, 2013).

Beuren et al. (2007)

In an analysis of the scientific research in information systems produced between 1990 and 2003, Hoppen and Meirelles (2005) structured a mapping framework for the addressed themes following a classification scheme for the IS literature proposed by Barki et al. (1993), consisting of nine main themes. These authors, in turn, recalled that "in June 1988 MIS Quarterly published a classification scheme of IS keywords. The development of this scheme was intended to provide a description of the discipline, introduce a common language, and enable research of the field's development" (BARKI et al., 1993, p. 209).

In an analysis of the scientific research produced between 1990 and 2003 in accounting, Cardoso et al. (2005) developed a thematic classification system composed of eleven main themes, which was adapted from the topics proposed by the AAA (American Accounting Association) and the EAA (European Accounting Association). According to these authors, it is important to map the knowledge of academic papers published in a particular area because it enables the evaluation of and reflection about these works.
In another analysis, of the national and international scientific publications on franchising produced between 1998 and 2007, Melo and Andreassi (2010) developed their own thematic classification system, consisting of 24 main themes. The authors found that the Brazilian scientific production could be framed in only nine themes, while the international scientific production needed a mapping base of nineteen themes.

In another study, which analyzed the Brazilian scientific production between 1991 and 2002 regarding business strategy, Bertero, Vasconcelos and Binder (2003) also developed their own thematic classification system for the addressed subjects, with an initial mapping consisting of ten themes. According to the authors, this first mapping was unable to frame the entirety of the themes addressed in 101 articles out of a total of 303 analyzed articles. Thus, a new classification was developed that included fourteen additional themes. Martinez (2013) reviewed the recent academic literature on earnings management in Brazil with the purpose of identifying the main research themes in the Brazilian context and the results that interested users, regulators and those who prepare financial statements.

THE PRACTICE OF MAPPING AND CLASSIFYING STUDY THEMES IN OPERATIONS MANAGEMENT

The mapping and classification of themes specifically concerning the field of Operations Management have occupied many researchers over the years. Scudder and Hill (1998) used the themes adopted by the Journal of Operations Management (JOM) as the basis for a mapping of the themes addressed in empirical studies on Operations Management published in thirteen journals between 1986 and 1995. These authors consider that the classification of the published articles into themes allows the identification of the topics that concern Operations Management and the extent to which they are being developed. Arkader (2003), analyzing the scientific production in Operations Management in Brazil, also mapped the themes addressed in the surveyed articles, based on a list of topics, tools and approaches previously suggested by JOM. It should be highlighted here that the list of JOM's topics, which was used as an initial reference by the above mentioned authors, is not divided into subareas comprising the themes included in the mapping. This could make it difficult to work on constructs for the refinement of the list and the confirmation of such constructs by means of statistical methods such as factor analysis.
METHODOLOGICAL PROCEDURES

The survey participants were asked to answer a questionnaire with Likert-scale questions sent via email. The questionnaire consisted of 45 questions designed to cover the various themes of Operations Management, as identified in the mapping proposed by Peinado and Graeml (2013). This questionnaire attempted to capture the importance given to these themes by researchers, lecturers and practitioners, but also sought to confirm the "thematic dimensions" of the original mapping through a factor analysis, adopting a procedure similar to that of Tan and Wisner (2003). These authors collected data through a survey consisting of 44 questions using a Likert scale. The responses were subjected to factor analysis, leading to the emergence of four primary themes: supplier assessment practices, new product development practices, just-in-time (JIT) practices and quality practices. Perhaps the main difference between the study by Tan and Wisner (2003) and the one described in this article is that those authors considered only the importance of each theme according to senior managers in the manufacturing industry, whereas the present study also collected information from scholars who study and teach Operations Management.

The 45 questions in the present study pertained to ten of the eleven major themes presented in the consolidated thematic mapping proposed by Peinado and Graeml (2013): (1) Operations strategy, (2) Routine operations management, (3) JIT/Lean manufacturing, (4) Quality management, (5) Logistics and supply chain, (6) Ergonomics and work organization, (7) Environmental sustainability of operations, (8) Project management and product development, (9) Innovation and technology management, and (10) Service operations. The last category of the original themes mapping (Teaching and research in Operations Management) was not included in the questionnaire because it is not a thematic category of the area, but about the area, which reduces its importance to practitioners, although it remains valuable to researchers and lecturers.

For each question, the respondents were asked to indicate the degree of perceived importance of a topic; importance was assessed by means of a seven-point Likert scale, as previously mentioned, ranging from "little important" to "extremely important", as shown in the Appendix. Respondents were instructed to leave the item blank if they had no formed opinion or were unaware of the topic.

One intention of the survey was to generalize the results through a quantitative analysis of the collected data, as suggested by Babbie (2001). Face validity was obtained through a pre-test of the questionnaire with respondents who occupy Operations Management positions in their companies. A group of fifteen respondents was used for this purpose, comprising eight professionals with management degrees and seven with production engineering degrees. It was felt that there was no need to validate the survey questions with researchers and lecturers because the survey was structured almost directly from the consolidated thematic mapping presented by Peinado and Graeml (2013), which had its origin in academic journals. Nevertheless, a pilot test was conducted with seventy lecturers to determine the dispersion of answers along the Likert scale.
Selection of researchers for the sample

To compose the sample subgroup of researchers, 36 researchers who had published more than five articles on Operations Management themes in relevant Brazilian academic journals between 2001 and 2010 were invited to participate in the study, as this level of production suggests that they are quite involved in and knowledgeable about the themes of the area. Contact information for the researchers of this sample subgroup was obtained by consulting the Lattes curriculum of each one of them. Twelve researchers participated by answering the questionnaire, thus achieving a 33.3% response rate for the population of researchers who met the selection criteria.

Selection of lecturers for the sample

For the composition of the subgroup of lecturers, lecturers of subjects related to Operations Management in undergraduate courses from 57 educational institutions were invited to participate, based on three criteria. The first criterion sought to ensure the participation of lecturers from educational institutions that focus on scientific production in Operations Management, as measured by the number of articles published by researchers affiliated with the institutions in scientific journals of national relevance from 2001 to 2010. This selection criterion was considered appropriate due to the belief that educational institutions more prolific in scientific publication in the area of Operations Management offer Business Administration programs that consider Operations Management to be an important area for training their undergraduate students. Following this selection criterion, 61 lecturers were invited to participate, of whom 24 (39.3%) responded to the questionnaire. The second criterion sought to include lecturers from Business Administration programs of recognized excellence. Thus, lecturers from all 27 Business Administration programs that achieved grade 5 in the Preliminary Program Rating (Conceito Preliminar de Curso - CPC) from the National Institute for Educational Research and Studies (Instituto Nacional de Estudos e Pesquisas Educacionais - Inep) were invited to participate in the survey. It is noteworthy that the Inep assessment is currently the primary tool adopted by the Ministry of Education (Ministério da Educação - MEC) for measuring the quality of Business Administration programs. This selection criterion identified 137 lecturers, who were invited to participate in the study, and 56 (40.9%) responded to the questionnaire. To address the third part of this sample group, lecturers from 20 Business Administration programs that achieved grade 3 in the CPC from Inep were invited to participate in the study. This third criterion aimed to allow the inclusion of average-quality programs. Following this selection criterion, 52 lecturers were invited to participate, of whom eleven (21.2%) responded to the questionnaire. These sample subgroups were used to identify any differences in perceptions among lecturers from programs with different profiles.

Lecturers of subjects directly related to Operations Management in the Business Administration programs selected for the study, based on the above criteria, were identified by means of a preliminary search on the programs' websites. When the subject taught by the lecturers was provided on the program's Internet page, the search and selection process ended there. However, in many cases, only the name of the lecturer was provided, which increased the investigative effort, requiring that the Lattes curriculum of all lecturers in the program be consulted to identify those who taught Operations Management subjects.

The 250 lecturers from the 57 institutions selected for the study were contacted through the link provided in the Lattes curriculum. Where the educational institution's website provided an email address to contact the lecturer, the invitation to participate in the survey was also sent to that address.

Selection of practitioners for the sample

To compose the sample subgroup of practitioners, a list was used consisting of 1,300 Operations Management professionals, primarily represented by managers, supervisors or production coordinators who work in automotive companies, generally suppliers of major automakers, with industrial plants located in Brazil. The professionals are part of a registry created by the authors over recent years. This sample subgroup is justified by the belief that companies operating in the automotive industry have a mature Operations Management process as a result of high expectations for their day-to-day performance and of compliance with the numerous quality standards required by automakers.

Procedures for collecting data from lecturers and researchers

The procedure used for data collection in this study followed the data collection methodology recommended by Graeml and Csillag (2008). According to them, incorporating the Internet into routine work makes it easier to use the conveniences provided by the web to conduct data collection through questionnaires.

For the pilot procedure, an invitation was sent via email with a link to the survey available on SurveyMonkey (a website that specializes in implementing electronic questionnaires through the web) to 70 lecturers randomly chosen from among those who met the criteria discussed in item 3.1. This invitation, or the subsequent reminder email or even a thank-you one, triggered the participation of 36 lecturers.

The results of the pilot test provided no reason for changes in the data collection instrument or in the procedure for sending the invitation; therefore, the exact same procedure was followed for the 36 researchers and the remaining 180 lecturers, obtaining 67 additional replies.
Summing the questionnaires returned by the pilot group and the main sample, which was possible because no changes were required in the procedure or content of the questionnaire, 103 completed questionnaires were obtained.

Taking into account that 286 emails were sent, with seven returned as undelivered, reducing the number of possible respondents to 279, the return rate was approximately 37%, which is considered quite satisfactory. The fact that the sample involved lecturers and researchers accustomed to the academic environment, where this type of study is commonly conducted, may have contributed to the higher than usual return rate.

During the data collection process, 35 respondents returned the emails expressing thanks for the invitation and confirming that the survey had been completed. In addition, 87 of the participants responded affirmatively to the last question, which asked if they would like to receive the study results, which is also evidence that participants were interested in the study.

Procedures for collecting data from practitioners

An email containing an invitation with a link to the study on SurveyMonkey was sent to the practitioners registered in the database (approx. 1,300 addresses). Some of these emails were returned as unable to be delivered to recipients, which reduced the number of possible respondents to just over 800. A total of 104 full surveys were obtained, representing 12.7% of the potential respondents; this rate was still considered satisfactory. Table 1 summarizes the return rates obtained for each stratum of the sample.

PROCEDURES FOR DATA PROCESSING AND ANALYSIS

The data on an ordinal scale of responses (from "little important" to "extremely important") were converted to an interval scale, from 1 to 7, to allow the calculation of mean values. This procedure has been used in studies based on Likert-scale questionnaires for ease of analysis.

The data obtained in the study were statistically analyzed using SPSS software (Statistical Package for the Social Sciences), version 15. Firstly, a descriptive analysis of each of the 45 quantitative variables was performed through EDA (Exploratory Data Analysis), which basically consists of exploring the data through graphical techniques, as recommended by Dancey and Reidy (2006) and Field (2009). Then, a factor analysis was used to verify the robustness of the constructs and to confirm the dimensions (thematic categories) of Operations Management. Principal component analysis was used to extract factors, and sampling adequacy was assessed using the Kaiser-Meyer-Olkin (KMO) and Bartlett's sphericity tests. The factor analysis was complemented with varimax rotation, to make the variables more "loaded" on specific factors, thus facilitating their interpretation (DANCEY; REIDY, 2006; HAIR JR. et al., 2005). Variables with factor loadings of less than 0.4 were excluded from the study. No variable had a negative factor loading that needed to be interpreted in a reverse way. After defining which variables fell into each factor, the reliability of the generated factors was evaluated using Cronbach's alpha. The variables that reduced the reliability of the scale were discarded.

ANALYSIS AND CONFIRMATION OF THE DIMENSIONS OF THE OPERATIONS MANAGEMENT THEMES

Considering that no significant differences were detected in the importance assigned to the different Operations Management themes, based on the stratification of respondents into researchers, lecturers and practitioners, it was decided not to perform this discrimination when analyzing the dimensions (themes) of Operations Management. Thus, all 207 responses to the survey were considered in the exploratory factor analysis, which sought to ensure that the responses given to questions related to subjects associated with the same theme had a clear pattern, as would be expected.
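To make the analysis pipeline described above concrete, the sketch below reproduces its main steps (adequacy tests, principal-component extraction with varimax rotation, the 0.4 loading cut-off, and a reliability check) on a generic response matrix. It is a minimal illustration under stated assumptions, not the authors' original SPSS analysis: it assumes the open-source Python package factor_analyzer, and "responses.csv" is a hypothetical file of survey answers.

```python
# Minimal sketch of the procedure described above: KMO and Bartlett adequacy
# tests, principal-component extraction with varimax rotation, exclusion of
# items with loadings below 0.4, and Cronbach's alpha per factor.
# "responses.csv" (rows = respondents, columns = the 45 Likert items) is a
# hypothetical file, not the study's data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

data = pd.read_csv("responses.csv").dropna()

# Sampling adequacy: Bartlett's sphericity test and the overall KMO measure.
chi2, p_value = calculate_bartlett_sphericity(data)
_, kmo_model = calculate_kmo(data)
print(f"Bartlett p-value: {p_value:.4f}, overall KMO: {kmo_model:.3f}")

# Principal-component extraction with varimax rotation; the paper retained
# nine factors and dropped items whose loadings stayed below 0.4.
fa = FactorAnalyzer(n_factors=9, method="principal", rotation="varimax")
fa.fit(data)
loadings = pd.DataFrame(fa.loadings_, index=data.columns)
retained = loadings[loadings.abs().max(axis=1) >= 0.4]

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of the items grouped under one factor."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Reliability of each factor, using the items that load most heavily on it.
for f in range(9):
    cols = retained.index[retained.abs().idxmax(axis=1) == f]
    if len(cols) > 1:
        print(f"Factor {f + 1}: alpha = {cronbach_alpha(data[cols]):.2f}")
```

In this sketch, as in the study, an item that lowers a factor's alpha would simply be removed from the item set and the reliability recomputed.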
An analysis of the frequency histograms and box plots generated allowed all 45 interval-type variables of the set to be considered acceptable. Table 2 shows the results of the exploratory factor analysis. Bartlett's sphericity test (p-value < 0.001) tests the hypothesis that the correlation matrix is the identity matrix, whose determinant is equal to one (FIELD, 2009). This test is used to analyze the correlation matrix as a whole. The fitting of the sample to each of the individual factors can be assessed using the Kaiser-Meyer-Olkin test (KMO = 0.886). The KMO test checks the value of the correlation among variables. If the value is insufficient, i.e., KMO is close to zero, the use of a factor analysis technique may be unsuitable. On the other hand, if the value is close to one, factor analysis may be correctly employed (DANCEY; REIDY, 2006; HAIR JR. et al., 2005). These results allow the use of factor analysis as an exploratory technique for the intended study.

From the 45 items originally used in the questionnaire, four (variables 1, 2, 7 and 21) contributed negatively to the reliability of the factors they were related to, and thus they were discarded. According to the interpretation criteria and the varimax rotation, nine factors were adopted, with a total explained variance of 70.47%.

Table 3 shows the results of the descriptive statistics and complements the results of the factor analysis; it displays the mean obtained for the factors, the cumulative variances, the index of internal consistency (Cronbach's alpha) and Pearson's correlations with the other factors. Cronbach's alpha is used to measure the reliability of the constructs, i.e., the internal consistency of responses given by different respondents with respect to the same construct (FIELD, 2009).
Table 2 - Matrix with Variables from the Mapping of Obtained Themes and Factors

In summary, nine groupings emerged from the factor analysis; the first seven factors have a composition structure for their variables quite close to the proposal in the consolidated mapping table presented by Peinado and Graeml (2013). The data shown in Table 2 indicate that only four variables were not incorporated into the initially proposed themes, eventually forming two new thematic sets: factor 8, which grouped variables 28 and 37, was called Service strategy and sustainable supply chains, and factor 9, which grouped variables 12 and 35, was named Information technology for Operations Management. It is also observed that the variables related to the Routine operations management and Lean manufacturing themes were combined into a single factor; the same occurred with the variables of the themes Ergonomics and work organization and Environmental sustainability of operations. Variable 31, Manufacturing project, did not remain in factor 5, related to Operations management and product development, but joined factor 6, which addresses Innovation and technology management. It is possible that the respondents related Manufacturing project with the technology employed for the manufacture of the product, which would explain the repositioning of variable 31.
Table 3 - Descriptive Statistics and Correlations Extracted from the Factor Analysis

As shown, the first seven factors had a Cronbach's alpha higher than 0.8. Only the last two factors had values close to 0.65, which were still considered satisfactory for the purposes of this study. Together, the nine factors explain 70.47% of the cumulative variance. Such figures indicate the validity of using the generated factors to replace the variables they represent. The result shows that the mapping of Operations Management themes obtained through the factor analysis is well aligned with the mapping of themes previously obtained by Peinado and Graeml (2013), who had analyzed the editorial space provided by journals and conferences addressing the area; the Peinado and Graeml (2013) mapping served as a basis for the preparation of the survey questionnaire. The factor analysis allowed that mapping to be enhanced, with the proposition of a new version of the consolidated mapping of Operations Management themes, as shown in Table 4.

Table 4 - New proposal for the mapping of Operations Management themes. Source: authors, based on the study results

FINAL CONSIDERATIONS

Starting from the objective of analyzing the importance attributed to Operations Management themes through a direct approach to researchers, lecturers and practitioners, it was possible to make an interesting contribution to academia and to company management.

This contribution consists of the empirical identification of nine thematic categories through a factor analysis, representing a refinement of the thematic map presented in a previous paper published in REGE (Revista de Gestão da USP) in 2013 (PEINADO; GRAEML, 2013). These nine categories are: (1) Operations management and lean manufacturing, (2) Logistics and supply chain, (3) Work organization and environmental sustainability, (4) Service operations, (5) Management and development of projects, products and services, (6) Innovation and technology management, (7) Quality management, (8) Service strategy and sustainable supply chains, and (9) Information technology for production management. Therefore, the proposal to improve the consolidated mapping of Operations Management themes resulting from this work contributes to a better understanding of what researchers, lecturers and practitioners in the area perceive to be relevant and of how they think the themes relate to each other.
2019-01-22T11:29:36.719Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "747ef6ae84063002f6d6f44f5de03e10d1a89c9d", "oa_license": "CCBY", "oa_url": "https://bbronline.com.br/index.php/bbr/article/download/147/227", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "747ef6ae84063002f6d6f44f5de03e10d1a89c9d", "s2fieldsofstudy": [ "Business", "Education" ], "extfieldsofstudy": [ "Psychology" ] }
259449034
pes2o/s2orc
v3-fos-license
Mental Workload Evaluation for PMM Outbound Students at X University (UNIX) Using the NASA-TLX Method

INTRODUCTION

Merdeka Belajar Kampus Merdeka (MBKM) is one of the policies of the Minister of Education and Culture. Kampus Merdeka is an autonomous and flexible form of learning in higher education, so that a learning culture can be created that is innovative, unfettered, and in accordance with student needs (Tohir, 2020). The MBKM program has eight activities in accordance with Permendikbud Number 3 Year 2020 Article 15 paragraph 1, namely Internships, Teaching Assistance in Education Units, Research, Humanitarian Projects, Entrepreneurial Activities, Independent Studies/Projects, Building Villages/Thematic Real Work Courses, and Student Exchange. One of the MBKM programs that students are interested in is the Pertukaran Mahasiswa Tanah Air Nusantara (PERMATA), which has been implemented since 2014.

The PMM program allows students to study at a different university and in different majors. It was attended by students of the Engineering Faculty of UNIX at state and private universities throughout Indonesia. By participating in PMM, students carry an overloaded credit load because they have to complete the double credits from the home and destination universities. This condition burdens students both physically and mentally, and this mental burden affects students' academic achievement, so an analysis of mental workload is needed. In this study, an evaluation of the mental workload was carried out using the NASA-TLX approach. The results found that, in general, 75% of students experienced a high mental workload and 25% a very high one. Of the students who took PMM at a state university, 50% fell into the high mental workload category and 50% into the very high category. Of the students who took PMM at a private university, 25% were in the very high category and 75% in the high category. From this study, UNIX may consider providing a briefing (tips and tricks) before PMM for students' mental preparation and choosing private universities as destination universities.

The PERMATA program has been running since 2014 and continues to improve from year to year. In 2019 the PERMATA program was refined into PERMATA SAKTI due to the use of information technology in the credit transfer system. In 2021, this program was refined again and named Pertukaran Mahasiswa Merdeka (PMM). PMM is a student exchange from one regional cluster to another that can provide experiences of diversity and a credit transfer system for a maximum of 20 credits (Tim Pertukaran Mahasiswa Merdeka Kemendikbud RI, 2021). In this program, students are given the opportunity to attend lectures at other universities and even in other study programs. MBKM provides flexibility for students to take part in the learning process outside their higher education institution for one to three semesters, according to interests outside the study program (Kemendikbud, 2021). Through this program, students are expected to have the opportunity to innovate creatively in order to compete with other universities in the ASEAN region. With this policy, it is hoped that universities can develop the quality of education (Haryati, 2012). In addition, the independent learning program is expected to improve the quality of human resources (Baro'ah, 2020). PMM is implemented in almost all universities in Indonesia, including UNIX. In 2021, there were 66 UNIX students participating in the outbound PMM program.
Figure 2 shows the distribution of the number of students in each faculty (Economics and Business Faculty, Teacher Training and Education Faculty, and Engineering Faculty).

Figure 2. Number of Students in the UNIX PMM Program

The UNIX outbound PMM program requires UNIX students to attend courses at UNIX, as the home university, and at the destination university. This policy also increases the workload of students when they participate in PMM. In addition, differences in culture, in terms of the learning process, lecturers, and PMM classmates, are challenges for UNIX students. The challenges faced by UNIX outbound PMM students can increase their physical and mental workload. According to Widiasih and Nuha (2018), the physical and mental workload of students must be managed properly so that learning objectives can be achieved optimally. In addition, physical and mental workloads need to be considered so as not to affect productivity in the student learning process (Bilawat, 2019). Mental burdens that are not managed properly can cause psychological problems that will automatically interfere with physical activity (Deasyanti & Muzdalifah, 2021). Any change in or addition to learning patterns, such as the PMM program, can cause a mental burden among students (Febrilliandika & Nasution, 2020). Therefore, it is necessary to evaluate the mental burden of UNIX outbound PMM students so that the program can run well in accordance with its objectives.

An approach that can be taken to evaluate mental workload is the NASA-TLX (National Aeronautics and Space Administration Task Load Index) questionnaire. NASA-TLX was developed based on the need for measurements of mental demand, physical demand, time pressure, frustration level, performance, and level of effort. There are several previous studies using the NASA-TLX approach. The method was used to evaluate the mental workload of train drivers using a development of NASA-TLX, namely RNASA-TLX. The information obtained from that study is that there is a higher mental burden on the afternoon and evening trips for train drivers (Muslimah & Hastuti, 2017). Research using the NASA-TLX method was also carried out by Rahajeng at the Tiki company in Yogyakarta. The result is that there is an imbalance in mental workload between operators, so a proposal for an equal distribution of mental load was given (Rahajeng, 2021). Research conducted by Rusindiyanto gave the same result, namely an imbalance in the assignment of mental workloads between work divisions at PT. Cat Tunggal Djaja Indah (Rusindiyanto et al., 2016). The NASA-TLX approach is mostly used to measure the workload of operators and employees, but there are still few studies that evaluate the mental workload of students using this approach. In addition, an evaluation of the MBKM program, especially the PMM program, had never been carried out. This study can later be used as a point of consideration and correction for UNIX and other private universities.

RESEARCH METHOD

NASA-TLX was first developed by Sandra G. Hart of the NASA-Ames Research Center and Lowell E. Staveland in 1981 (Simanjuntak, 2010). In this method there are six components that are measured, namely Physical Demand (PD), Mental Demand (MD), Temporal Demand (TD), Performance (P), Effort (E), and Frustration Level (FL). The definition of each component is as follows. 1. Physical Demand (PD) assesses how much the job requires physical activity. 2. Mental Demand (MD) assesses how much the job requires mental and perceptual activity (counting, remembering, comparing, etc.). 3.
Temporal Demand (TD) assesses how much time pressure is involved in the job. 4. Performance (P) measures the level of success in the work. 5. Effort (E) measures how much effort is required to complete the job. 6. Frustration Level (FL) measures the level of frustration caused by the work done.

The measurement steps using the NASA-TLX method, namely weighting, rating, and determination of the weighted workload, are described below (Rusindiyanto et al., 2016). The respondents for this research were the PMM students in the 5th semester, aged about 21-22 years old, with a male-to-female ratio of 50:50. Ethical approval was given for this study: the brief of the questionnaire declares that the data from the respondents will be kept confidential and never shared with others. The questionnaire was distributed virtually to the Engineering Faculty students who participated in PMM. The questionnaire consists of two sections: section A collects the weights and section B collects the ratings.

Weighting

In the weighting stage, the 15 pairwise combinations of the six factors above are presented. Students are asked to choose which of the two given factors is considered more important or more dominant, based on the activities experienced during the PMM activity. The criteria of importance and dominance are subjective, based on the respondents' opinions about their experience and activities. The form of the pairwise comparison of the combinations of factors is shown in Table 1. The results of the combinations are processed using the Expert Choice software, which calculates the weight of each indicator based on the pairwise comparison method. The total weight over all indicators is 1. The consistency of the filled-in combinations is validated using the inconsistency value obtained from the Expert Choice software, where the inconsistency value should not be more than 10% (Young et al., 2008).

Rating

At the rating stage, students are asked to give a score from 0 to 100 to each indicator. There are six indicators that must be filled in by the student. The indicators and questions are shown in Table 2.

Table 2 - Indicators and questions of the NASA-TLX questionnaire
Physical Demand (PD): How much physical effort is required for your job?
Mental Demand (MD): How much mental and perceptual activity (counting, remembering, comparing, etc.) is required for your job?
Temporal Demand (TD): How much pressure do you feel with regard to time to do your job?
Performance (P): How high is your success rate in doing your job?
Frustration Level (FL): How much anxiety, feeling of pressure, and stress do you feel in doing your job?
Effort (E): How much physical and mental work is required to complete your work?

The results of the rating stage are used to determine the mean weighted workload. The rating classification for each component is shown in Table 3.

Determination of the Weighted Workload (WWL)

The WWL is determined by adding up the results of multiplying the weights obtained in step (a) by the ratings given in step (b), while the average WWL is obtained by dividing the WWL by the total number of indicator combinations, 15. The classification of the total workload uses the categories in Table 3. The mathematical equation for calculating the WWL is as follows:

\( \mathrm{WWL} = \sum_{i=1}^{6} \mathrm{rating}_i \times \mathrm{weight}_i \)    (1)

The results of this WWL can be used to determine the mental workload of each student, as well as to find out which indicators play the greatest role in determining the mental workload of students during PMM.
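The short sketch below illustrates Equation (1) and the category mapping on invented numbers. The weights and ratings are example values only, not data from the study, and the numeric category boundaries are assumptions for illustration; the paper's Table 3 defines the actual ranges.

```python
# Illustrative computation of the weighted workload (WWL) described above.
# Weights are assumed to be normalized so they sum to 1, as produced by the
# pairwise-comparison (Expert Choice) step; all values here are invented.
indicators = ["MD", "PD", "TD", "P", "E", "FL"]
weights = {"MD": 0.25, "PD": 0.10, "TD": 0.30, "P": 0.15, "E": 0.12, "FL": 0.08}
ratings = {"MD": 80, "PD": 55, "TD": 90, "P": 70, "E": 75, "FL": 60}  # 0-100 scale

# Equation (1): WWL = sum over the six indicators of rating_i * weight_i.
wwl = sum(ratings[i] * weights[i] for i in indicators)

def workload_category(score: float) -> str:
    """Map a WWL score to the five categories used in the paper.
    The boundaries below are placeholders, not the paper's Table 3 values."""
    if score < 20:
        return "Low"
    if score < 40:
        return "Medium"
    if score < 60:
        return "Quite High"
    if score < 80:
        return "High"
    return "Very High"

print(f"WWL = {wwl:.1f} -> {workload_category(wwl)}")
```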
RESULTS AND DISCUSSION

In this study, data processing and analysis were carried out for three criteria, namely PMM in general, PMM at State Universities, and PMM at Private Universities. For the general criterion, respondents filled out the questionnaire based on what they felt when participating in the PMM Outbound program in general, at both private and state universities. For the Private University (PU) criterion, respondents filled out the questionnaire based on what they felt when participating in the PMM Outbound program at a private university. Meanwhile, for the State University (SU) criterion, respondents filled out the questionnaire based on what they felt when participating in the PMM Outbound program at a state university. There are six indicators assessed in the NASA-TLX questionnaire, namely Physical Demand (PD), Mental Demand (MD), Temporal Demand (TD), Performance (P), Effort (E), and Frustration Level (FL).

In the NASA-TLX method, there are two types of questionnaires that must be filled out by respondents, namely a level-of-importance questionnaire and a questionnaire assessing the score rating of each indicator. The questionnaire data on the importance levels were processed using the Expert Choice software to obtain the weight of each criterion. Figure 3 is an example of processing one respondent's data using the Expert Choice software. Data for all respondents for the general, PU, and SU criteria were obtained in Expert Choice. The weight of each criterion is used to obtain the product value, namely the product of rating and weight, and the weighted workload (WWL), which is the total product value of the six indicators. After obtaining the WWL value, the workload category for each respondent can be determined. There are five categories of workload, namely Low, Medium, Quite High, High, and Very High. Table 4 shows the results of the calculation of the WWL for PMM in general, at PU, and at SU. There is a difference in the number of respondents for PMM Outbound at SU because not all respondents attended PMM Outbound at an SU: four students took part in PMM Outbound only at a PU.

From the WWL data and the criteria in Table 4, it can be seen that, in general, 75% of students experienced a high mental workload when participating in PMM Outbound and 25% had a very high mental workload. In PMM Outbound at PU, 75% of students experienced a high mental workload and 25% of students had a very high mental workload, while in PMM Outbound at SU, 50% of students experienced a high mental workload and the remaining 50% experienced a very high mental workload. Figure 4 shows the percentage distribution of students' mental workload when taking PMM.

Figure 4. Percentage of mental workload

From the WWL data, further analysis can also be carried out to see which indicators have the greatest product value. Figure 5, Figure 6, and Figure 7 show the graphs and percentages of the product values in the PMM program in general, at PU, and at SU. Based on the results obtained, there are some implications for students and for UNIX. The students have to pay more attention to Temporal Demand (TD): it can be concluded from the results shown that Temporal Demand (TD), i.e. time, is the most important indicator. This study therefore suggests that students give more attention to time management, so that they are able to undergo and finish PMM with a low or medium mental workload.
In addition, UNIX can consider giving students a briefing (tips and tricks) before they participate in PMM, to prepare them mentally, and choosing more private universities as destination universities. The respondents declared that they took more than 24 credits in one semester from the home and destination universities combined. UNIX can therefore also reconsider the credit policy for students who take part in the PMM program, so that the students' mental workload is balanced, because an excessive workload can cause stress (Fahamsyah, 2017). Either simultaneously or partially, individual stress, group stress, and organizational stress have a significant effect on performance (Amrianah, 2019).

CONCLUSION

According to the results of the analysis using the NASA-TLX method, in general 25% of the students had a very high mental workload and 75% of the students had a high mental workload. From this result, UNIX can consider giving students a briefing before they participate in PMM, to prepare them mentally. Of the students who took PMM at a State University, 50% fell into the high mental workload category and the rest into the very high category. Of the students who took PMM at a Private University, 25% were in the very high category and 75% in the high category. For further PMM rounds, UNIX can choose more private universities as destination universities. From this study, conclusions can be drawn about the mental workload for PMM in general; further studies could discuss students' mental workload in every single PMM activity in more detail.
2023-07-11T02:46:57.376Z
2022-04-20T00:00:00.000
{ "year": 2022, "sha1": "eea4397bc9b2e4d95023298002f278c59b673fa0", "oa_license": "CCBYSA", "oa_url": "http://journal3.uad.ac.id/index.php/spektrum/article/download/28/21", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "10c85e00a85b555f0a5bd01b4e88d725e465b830", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
225397241
pes2o/s2orc
v3-fos-license
Analysis of arc surges in electric networks of the Arctic region

This work is devoted to the urgent issue of the analysis of arc surges in electric networks of the Arctic region of the Russian Federation, using the electric power system of the Kola Peninsula as an example. The article presents an approach to the development of a computational model of a power system in the Matlab software environment, using the Simulink simulation application. When compiling the calculation model, the following points were taken into account: the model was in a three-phase formulation; the inductances of the overhead power lines were taken into account; the non-linear arc resistance was taken into account; the grounding resistance was taken into account; and the magnetic coupling between the phases and the interphase capacitances were taken into account. Based on the results of the model, graphs of transients in the system were obtained for various operating modes. The correctness of the calculation model was checked on the following points: with respect to voltage, it is necessary that the phase voltages on the buses of the substation ПС-24 correspond to the phase voltages at all points of the model and be shifted relative to each other by an angle of 120°; with respect to current, the currents at all points of the system must be symmetrical; with respect to the single-phase short-circuit current, it is necessary that the value of the short-circuit current obtained using the calculation model correspond to the current value obtained in the analytical calculations. In the study of arc surges, more than 60 calculations were performed. The performed calculations showed the dependence of the overvoltage ratio on the fault location and the uneven distribution of overvoltages in the considered sections. An analysis of the results obtained in this work allows us to conclude that it is advisable to apply this approach to other electric power systems of the regions of the Russian Federation and of other countries.

Relevance

The main factors determining the maximum overvoltage during single-phase earth faults are: the voltage of the faulted phase at the time of primary ignition of the arc, the moment of extinction of the arc, and the re-ignition voltage of the arc [1], [2], [3]. It is also necessary to take into account the peculiarities of the development of arc overvoltages in an electric network with long sections of overhead power lines, i.e. in the presence of elements with a relatively large inductance and resistance. Therefore, it is necessary to draw up a calculation model that takes these parameters into account.

Requirements for the design model of arc overvoltages

When compiling a calculation model, the following requirements are imposed on it [1], [2], [3], [4], [5]:
• for the model of arc overvoltages, it is necessary that the model be in a three-phase formulation;
• since there are overhead power lines (i.e. elements with a relatively large inductance), the inductance of these overhead sections (air inserts) must be taken into account;
• account must be taken of the nonlinear resistance of the arc, i.e. the arc model must cover both positive and negative polarity;
• transients and overvoltages are affected by the grounding resistance at the location of the insulation damage; first of all, this refers to the supports of overhead lines.
Therefore, the model must take the grounding resistance into account;
• since the section is connected to the supply transformer, and there is therefore a connection with the system, it is advisable to take into account the magnetic coupling between the phases; consequently, a three-phase model of the transformer with galvanic isolation and with a grounded high-side winding is required;
• transient processes during an arc fault to earth can be affected by the interphase capacitances of the network, so it is necessary to take the interphase capacitances into account in the model of the air inserts. However, the presence of a load at the connection points of the transformer substations has a shunting effect on the interphase capacitance; therefore, it is necessary to take into account the load resistance between the phases [6], [7], [8], [9], [10], [11].

Design model development

To develop the calculation model, the Matlab software environment with the Simulink application was used. The power system of the Kola Peninsula was chosen as the object of study. Figure 1 shows the design model diagram for a 10 kV network section connected to the first bus system of the ПС-24 substation. In the model, cable and overhead lines were replaced by elements with distributed parameters. The capacitances of the Ф-5 feeder, and partially the capacitances of the remaining feeders, are lumped at node 0. The Ф-4 feeder was collapsed and represented in the form of a single П-cell (its line capacitances are assigned to node 6). In the diagram, the 110 kV network and the energy source at the supply substation are represented by a block of branches 1-23 (nodes 1-12). The resistance of branch 4 corresponds to the grounding resistance of the substation ПС-24. The model diagram of the 10 kV network section connected to the first ПС-24 bus system includes 61 nodes and 163 branches.

Model check

The correctness of the calculation model is checked on the following points: 1. in terms of voltage, it is necessary that the phase voltages on the buses of the substation ПС-24 correspond to the phase voltages at all points of the model and be offset relative to each other by an angle of 120°; 2. in terms of current, the currents at all points of the system must be symmetrical; 3. in terms of the single-phase short-circuit current, it is necessary that the value of the short-circuit current obtained using the calculation model correspond to the current value obtained in analytical calculations [9][10][11].

The fulfilment of these checks is confirmed by the voltage and current waveforms for the Ф-10 feeder, shown in Figure 2. From the current waveform (graph c) it was found that the amplitude value of the short-circuit current on the first bus system is 19.78 A.
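The three model checks listed above lend themselves to simple numerical verification on exported waveform samples. The sketch below illustrates this with synthetic stand-ins for the Simulink output; the waveform arrays, sampling settings, and the tolerance are assumptions for illustration, not part of the original study.

```python
# Sketch of the model sanity checks described above: verifying that the three
# phase voltages are equal in magnitude and shifted by 120 degrees, and that
# the simulated short-circuit current matches the analytical value.
import numpy as np

f, fs, cycles = 50.0, 10_000.0, 10            # 50 Hz system, 10 kHz sampling
t = np.arange(int(fs * cycles / f)) / fs      # exactly 10 fundamental cycles
u_m = 10e3 * np.sqrt(2) / np.sqrt(3)          # phase-voltage amplitude, 10 kV net
shifts = {"A": 0.0, "B": -2 * np.pi / 3, "C": 2 * np.pi / 3}
u = {p: u_m * np.cos(2 * np.pi * f * t + ph) for p, ph in shifts.items()}

def phasor(x: np.ndarray) -> complex:
    """Fundamental-frequency phasor estimated over an integer number of cycles."""
    ref = np.exp(-1j * 2 * np.pi * f * t)
    return 2.0 * np.mean(x * ref)

ph = {p: phasor(x) for p, x in u.items()}
ang_ab = np.degrees(np.angle(ph["A"] / ph["B"]))
print(f"|Ua| = {abs(ph['A']):.1f} V, angle(A-B) = {ang_ab:.1f} deg")  # expect ~120

# Short-circuit current check against the analytical value; 19.78 A is the
# amplitude reported in the paper, reused here as a placeholder on both sides.
i_sc_model, i_sc_analytical = 19.78, 19.78
assert abs(i_sc_model - i_sc_analytical) / i_sc_analytical < 0.05
```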
In the study of arc surges, more than 60 calculations were performed, including faults in phase A at nodes No. 16, 20, 27, 30, 36 and 43 of the calculation model shown in Figure 1. These nodes of the calculation model correspond to specific points on the working diagram of the sections of the 10 kV electric network connected to substation ПС-24.

Characteristics of arc overvoltages on the first bus system of distribution point РП-7

Consider the case of modeling a short circuit in phase A on the first bus system of the distribution point РП-7 (node 16 of the calculation model). The characteristic oscillograms of voltages and currents obtained for this case, as well as the arising overvoltages, are shown in Figures 3 and 4.

Graph c) shows the arc current at the place of damage; the following time points are indicated there: t1, the arc ignition moment; t2, the moment of possible arc extinction; t3, the moment at which the secondary arc fails to extinguish because the extinction peak exceeds the phase voltage; t4, the arc extinction time; t5, the moment of re-ignition of the arc.

The multiplicity of overvoltages during the first arc burning is less than during repeated ones. This is explained by the fact that at time t3 there was a decrease in the neutral bias voltage, but at the moment of re-ignition of the arc, t5, the neutral bias voltage increased, and this led to an increase in the overvoltage ratio. If the arc in the time interval t1-t4 had been ignited and extinguished immediately, without re-ignition, an increase in the overvoltage ratio would not have occurred.

Figure 4 shows that the overvoltage levels during a short circuit at node 16 reach values from 20.9 kV to 21.92 kV, and the greatest overvoltages occur on the overhead line Л-62, where their magnitude is 21.92 kV. Due to the large volume of the obtained voltage and current waveforms, only the most characteristic waveforms are given, while the magnitudes and calculated values of the overvoltage multiplicity in the 10 kV network section connected to the first bus system of substation ПС-24 are summarized in Table 1.

The performed calculations showed the dependence of the overvoltage ratio on the fault location and the uneven distribution of overvoltages in the considered sections. The specific number of lightning flashovers of the insulation of 10 kV overhead lines per 1 km was determined taking into account direct lightning strikes to a line, including back flashovers and induced overvoltages. The performed calculations of arc overvoltages and the estimation of the number of insulation flashovers on the lines showed the need for measures to limit arc overvoltages in the sections of the 10 kV network of substation ПС-24. The developed model showed the reliability of the calculations, which, in turn, will allow it to be used to analyze the electrical networks of the entire Arctic region.
2020-08-20T10:06:55.346Z
2020-08-13T00:00:00.000
{ "year": 2020, "sha1": "ab91148f052109b0a16be0aaa1ef0de1028c8552", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/539/1/012148", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1bf8567fb8e35b5e054a41977b99af8c15ed8ba0", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Geology" ] }
237632903
pes2o/s2orc
v3-fos-license
The Impact of COVID-19 on the Construction Industry in Vietnam

The COVID-19 pandemic has generated a wide range of socio-economic disruption, causing devastation in numerous respects. Our knowledge of the true health of the construction industry under the ravages of the COVID-19 outbreak is largely based on very limited data. This study aims to assess the impact of the pandemic on the construction industry through an investigation in Vietnam. Data were collected from 129 respondents who completed an online questionnaire survey based on their recent direct or indirect participation in delivering construction projects during the outbreak. The implications of COVID-19 for the construction industry were examined using simple percentage analysis and the Relative Importance Index approach. Three principal facets of the construction industry were considered: firms' business activities, project performance, and workforce demand. The findings highlighted the multilevel, multidimensional nature of the epidemic's consequences for the construction sector. Notably, revenue and profitability, in a general sense, decreased during the COVID-19 period, while most production and business costs remained unchanged. Further, the pandemic was argued to impair construction practitioners' incomes and mental health and to undermine projects' schedules and costs.

Article History: Received: 12 January 2021; Received in revised form: 31 March 2021; Accepted: 17 May 2021; Published Online: 31 August 2021

Introduction

It is the beginning of 2021, but the world is more uncertain than ever. At the beginning of 2020, many people expected the round number 2020 to bring more confidence and optimism than 2019, a year full of changes. However, reality has proved fierce. The COVID-19 pandemic, as of this writing, has caused nearly 40 million infections and more than 1,113,000 deaths in 235 countries, areas, or territories (The World Health Organization, 2020). The World Bank asserted that the COVID-19 pandemic had caused the most profound global crisis in decades, and the final consequences are still ambiguous (The World Bank Group, 2020). Accordingly, the global economy is expected to contract by a minimum of 5.2%. The vast majority of emerging markets and developing economies will decline because of the pandemic, and it will also cause lasting damage to labour productivity and potential output. The latest data show that the global economic recovery has slowed down, although there have been signs of improvement since the middle of this year (The World Bank Group, 2020).

As noted by PricewaterhouseCoopers (2020), the COVID-19 outbreak has brought unprecedented challenges, which are expected to have a significant impact on Vietnam's economic development this year. The building sector, as opposed to other sectors, saw 4.5% growth in the first six months of the year, slightly higher than the 4.37% in the first quarter and yet lower than the 7.85% growth during the same period last year, showing that the industry is sluggish and reluctant in its recovery (Can, 2020).

COVID-19 has become a hot topic and has attracted much attention from academia. In addition to medical studies of COVID-19, researchers worldwide have proved responsive with academic publications on the impact of COVID-19 on multiple areas, for example education (Daniel, 2020), gender equality (T. M. Alon, M. Doepke, J. Olmstead-Rumsey, & M.
Tertilt, 2020), small business outcomes (Bartik et al., 2020), strategies for mitigation and suppression (Walker et al., 2020), and refugee camps (Truelove et al., 2020). Multiple studies have been undertaken and published in the built environment domain, but not as many as in other areas. On the other hand, many authors, e.g. (Loayza & Pennings, 2020), (Ataguba & Ataguba, 2020), (Gerard, Imbert, & Orkin, 2020), claimed that the crisis could be even worse in low-income countries. Despite being a developing country with a high poverty rate, Vietnam is coping very well with the pandemic (P. L. Dinh & Ho, 2020; Le et al., 2020). Scholarly and systematic assessments are essential, as they will help stakeholders grasp the situation and the impacts of the COVID-19 outbreak on socio-economic development (H. H. Dinh, 2020). This paper will draw an overall picture of the Vietnamese construction industry's performance, revealing the patterns of professionals' thinking in the pandemic context. Considering the global spread of the coronavirus and the economic contractions, the empirical evidence from Vietnam is expected to yield valuable insights for other countries and regions seeking a solid recovery.

The goal of this study was to assess the impact of the COVID-19 pandemic on construction activities through data collected in an investigation in Vietnam. Accordingly, the research objectives were specified as follows:
• To determine the aspects of the construction industry that have been impacted by the COVID-19 pandemic.
• To evaluate the impact of the COVID-19 pandemic on the business activities of construction enterprises.
• To evaluate the impact of the COVID-19 pandemic on the construction workforce.
• To evaluate the impact of the COVID-19 pandemic on the performance of construction projects.
• To recommend measures against the negative effects of the COVID-19 pandemic on the construction industry.

2 Background of the study

The coronavirus and its variants will trigger waves of crisis over the coming months or, worse, possibly for several years. The world, having said that, must adjust in order to accept this, as it will necessitate the transformation to the new normal (Välikangas & Lewin, 2020). We need to increase our ability to update and adapt to a continually changing situation in a way that has never been seen before. The Vietnamese government has the epidemic well under control; however, its intervention has been widely criticised for a lack of systematicity and intensity (H. H. Dinh, 2020; Le et al., 2020). The number of deaths from COVID-19 is worrying, and this adversely affects the mental health of citizens, through anxiety over unstable jobs, job loss, income reduction or even death (Gruber et al., 2020; Otu, Charles, & Yaya, 2020).

Until the world had to fight COVID-19, the topic of the impact of epidemics/pandemics on economic sectors was not particularly appealing to scholars. In the past few decades, on account of the development of medical science, the world has been largely free from fear of and concern about pandemics (Burkle, 2020; Lum & Tambyah, 2020). There have also been many opinions that people, including scholars, show disregard for the impact of diseases on industries (Dixon, McDonald, & Roberts, 2002; Karlsson, Nilsson, & Pichler, 2014). Nevertheless, some epidemics/pandemics have been examined in depth in association with industries' performance (see Table 1).
Table 1 - Studies on the impact of epidemics/pandemics on various industries
(Phimister, 1973); Poultry: (Obayelu, 2007); Health insurance: (Jim Toole & CERA, 2010); Tourism: (Page, Yeoman, Munro, Connell, & Walker, 2006), (Rassy & Smith, 2013), (Page & Yeoman, 2007); Finance: (Maldin et al., 2005)
HIV/AIDS - Construction: (Meintjes, Bowen, & Root, 2007), (Bowen, Dorrington, Distiller, Lake, & Besesar, 2008), (Harinarain & Haupt, 2014); Mining: (Matangi, 2006); Tourism: (Zengeni & Zengeni, 2012)
Generic - Oil and gas: (Flynn, Kaitano, & Bery, 2012)

Using an institutional audit methodology, Meintjes et al. (2007) drew attention to the burden of the HIV/AIDS pandemic on the South African construction industry. The most significant and most visible impact is the cost increase, i.e. increased financial outlays and decreased productivity. Measures, especially from the CIDB side, had been available, but were not practical and even caused many concerns. In the same vein, Bowen et al. (2008), adopting a quantitative approach, pointed out the high prevalence of the virus in the workforce and its correlation with the structure of the South African construction industry. The findings from this study suggest that the high HIV/AIDS prevalence rate can have a deleterious effect not only on the construction industry but on the South African economy as well.

The meagre number of published studies on this theme, to a certain extent, reflects the ambiguous and loose link between the construction industry and epidemiology. As McGrail, Rickard, and Jones (2006) argue: 'There is often a long period from manuscript commencement to submission, revision, acceptance and, finally, publication.' Not to mention, in construction management research, the methods used in data collection, such as interviews, focus groups and questionnaires, are considered very time-consuming (Alshenqeeti, 2014; Gill, Stewart, Treasure, & Chadwick, 2008; MacLean, Meyer, & Estable, 2004). It seems that publishing in the construction sector has not been as fast and responsive as in some other fields.

Drawing on a case study of the ultra-rapid delivery of a speciality field hospital, Luo, Liu, Li, Chen, and Zhang (2020) provide an in-depth analysis of the synthesis of the product, organisation, and process (POP) approach and building information modelling (BIM). Megahed and Ghoneim (2020), at the same time, highlight the need to envision what shape post-disaster architecture and antivirus cities might take. From a legal perspective, Hansen (2020) explores the potential of the COVID-19 outbreak as a force majeure event in popular suites of construction contracts, i.e. NEC, JCT and FIDIC. Both Megahed and Ghoneim (2020) and Hansen (2020) adopted document review, the former exposing lessons in architecture and urban development, whereas the latter offers legal advice on pandemic-related force majeure. Meanwhile, Araya (2020), using an agent-based modelling approach, looks into the impact of the outbreak on construction projects, simulating the spread of COVID-19 among workers. In a similar vein, Afkhamiaghda and Elwakil (2020) propose a preliminary model and set of indexes of coronavirus spread on the construction site and workforce, implying the urgency of diffusing cutting-edge technologies (e.g. the Internet of Things, robotics). These studies, while both using the context of the construction industry in a pandemic situation, delved into very different themes using distinct methods.

Several studies have attempted to investigate the impact of COVID-19 on local and regional construction industries.
The reviews are not yet adequate due to the uncertainty of the current situation (Gamil & Alhagar, 2020). Bsisu (2020) investigates the impact of the COVID-19 pandemic on Jordanian civil engineers and the construction industry. Taken together, these findings would seem to suggest that engineering designers can work from home with reasonable performance. In contrast, site engineers do not believe that, after the lockdown is lifted, construction workers will adhere to social distancing and wear essential personal protective equipment. Gamil and Alhagar (2020) claim that the most prominent impacts of Covid-19 are the suspension of projects, labor impacts, job loss, time overrun, cost overrun, and financial implications. The findings shed light on the consequences of sudden pandemics and raise awareness of the most critical impacts that cannot be overlooked. However, Ogunnusi, Hamma-Adama, Salman, and Kouider (2020) explain that some construction sectors of Sub-Saharan Africa (SSA) are exploring the opportunities that emanated from COVID-19, in contrast with many other nations of the world. Indigenous manufacturing is one of the promising sectors in the SSA given the interference with the global supply chain, emphasising the significance of fostering local capacity to encourage industrial construction. Since COVID-19 has caused a worldwide recession (T. Alon, M. Doepke, J. Olmstead-Rumsey, & M. Tertilt, 2020; Gallant, Kroft, Lange, & Notowidigdo, 2020; Guerrieri, Lorenzoni, Straub, & Werning, 2020), it is to be expected that there will be many studies on the relationship between the construction industry and the economic recession. In the past, however, only a modest amount of literature was published under that umbrella of topics, notably: employment (Hadi, 2011), crisis management (Sfakianaki, Iliadis, & Zafeiris, 2015), profitability (Yoo & Kim, 2015), and government influence (Tansey & Spillane, 2014). A growing body of literature has examined construction performance in developing countries such as Cambodia (Durdyev, Mohamed, Lay, & Ismail, 2017), Ghana (Kissi, Agyekum, Adjei-Kumi, Caleb, & Micheal, 2020) and Ethiopia (Ofori, 2018). However, very little is known about the true health of the construction industry under the ravages of the Covid-19 outbreak. To the best of our knowledge, the literature has not discussed the multilevel and multidimensional nature of epidemic implications. This study hopes to create a foundation for publications and policies on the pandemic's impact on the economy in general and on the construction industry in particular. This is of utmost importance to prepare stakeholders for similar massive disasters in the future, preventing Covid-19-like shocks and rethinking developing industries' resilience to global catastrophic uncertainties.

Research Methodology

The present study was conducted based on a questionnaire survey aimed at effectively collecting all the necessary data. The questionnaire was composed of two main parts. The first part contained demographic information about the participants (i.e., qualifications, positions, professional experience, and role in the construction project), whose primary purpose was to describe the participants in order to ensure reliability and strengthen the research findings. The second part included the list of identified aspects of the construction sector that have been impacted by the COVID-19 pandemic (i.e., business activities of construction firms, construction workforce, and construction project performance).
Participants were selected to answer an online survey based on their previous direct or indirect participation in the implementation of construction projects in Vietnam during the COVID-19 pandemic.

Pilot Test

Before distributing the questionnaire, a pilot study was carried out to verify the questionnaire and ensure that the information returned by the construction workforce would be appropriate to the goals of the present study. This stage was carried out by sending the draft questionnaire to five experts with many years of experience and comprehensive knowledge of this subject. They assessed the validity of the questionnaire content, evaluated the readability of the language, and recommended additional factors for the questionnaire. After receiving their comments, the questionnaire was slightly changed.

Measurement Method

For analysing the data, this study used simple percentage analysis combined with the Relative Importance Index (RII) method to measure the impact of the COVID-19 pandemic on construction activities. The RII method has been used in numerous studies (e.g., Alaghbari, Al-Sakkaf, & Sultan, 2019; Gunduz & Abdi, 2020; Hiyassat, Hiyari, & Sweis, 2016; Jarkas, 2015; Jarkas, Kadri, & Younes, 2012). The RII index was calculated based on Equation (1):

$RII = \sum_{i=1}^{5} W_i X_i$ (1)

where $W_i$ is the rating given to each factor by the participant, ranging from 1 to 5; $X_i$ is the percentage of respondents giving score $i$; and $i$ is the order of the score, ranging from 1 to 5. For the RII approach, the sample size was determined according to the following formulas with a reliability of 95% (Hogg, Tanis, & Zimmerman, 2010):

$m = \frac{z^2 P(1-P)}{\varepsilon^2}$ (2)

$n = \frac{m}{1 + (m-1)/N}$ (3)

where $n$ is the sample size for a limited population; $m$ is the sample size for an unlimited population; $P$ is the degree of variance between the elements of the population (usually $P = 0.5$); $\varepsilon$ is the tolerance (±3%, ±4%, ±5%); $z$ is the distribution value corresponding to the chosen reliability (for 95% confidence, $z = 1.96$); and $N$ is the total number of responses collected.

Sampling and Data Collection

The sample size needed was determined through the above formulas with z = 1.96, P = 0.5 and ε = 0.04 (4%). Using formulas (2) and (3) with m = 600 and N = 129, the number of samples needed for this study is 107. Survey data were collected from a sample of respondents who had recently engaged with construction project(s) in Vietnam. The distribution of respondents provides a rather diversified perspective from different positions in projects (i.e., project managers, site supervisors, design engineers, consulting engineers, architects, and authorities). A total of 150 questionnaires were distributed online through email. Only 129 answers were received, of which 123 were qualified responses (respondents' average age 29.33, SD = 5.911); this exceeds the required sample size of 107 and represents an effective rate of 82.0%. The first part collected respondents' demographic information, including gender, education level, work experience, organisational involvement in construction projects, role in the construction project, and project characteristics (i.e., type of project, type of project fund, and project capacity). Table 2 presents the demographics of the respondents under investigation.
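To make the two-stage calculation concrete, the short Python sketch below reproduces the sample-size numbers reported above and computes an RII value from a set of rating counts. It is an illustration written for this text, not the authors' own code (the study's analyses were run in MS Excel 365 and SPSS 20), and the rating counts passed to the RII function are hypothetical.

```python
# Illustrative sketch only; the study itself used MS Excel 365 and SPSS 20.
import math

def rii(counts):
    """Relative Importance Index (Equation 1) for one factor.

    counts[i] holds the number of respondents who gave rating i + 1
    (ratings run from 1 to 5). RII = sum_i W_i * X_i, where W_i is the
    rating value and X_i the share of respondents choosing it, so the
    result lies on the same 1-5 scale as the ratings themselves.
    """
    total = sum(counts)
    return sum((i + 1) * c / total for i, c in enumerate(counts))

def required_sample(z=1.96, p=0.5, eps=0.04, population=129):
    """Equations (2) and (3): unlimited-population size m, then the
    finite-population correction n."""
    m = z ** 2 * p * (1 - p) / eps ** 2      # ~600.25, reported as 600
    n = m / (1 + (m - 1) / population)       # ~106.3
    return m, math.ceil(n)                   # n rounds up to 107

m, n = required_sample()
print(round(m), n)                           # 600 107
print(round(rii([5, 10, 30, 40, 38]), 2))    # hypothetical counts -> 3.78
```

Run as-is, the sketch prints the 600 and 107 quoted in the text, which is a quick sanity check on the reconstructed formulas.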
Results and Discussions

In this study, two software applications were used to analyse the data: MS Excel 365 and SPSS 20. The analysis results were calculated and assessed based on simple percentages and the RII. This section consists of three main parts, covering the impact of the Covid-19 pandemic on the business activities of construction enterprises, on the construction workforce, and on the performance of construction projects.

The impact of the Covid-19 pandemic on business activities of construction firms

The global and domestic economies face numerous challenges as a result of the complicated and protracted development of the Covid-19 pandemic, which has had a significant impact on the production and business activities of construction enterprises in various ways. The reported impacts on enterprises' business activities are provided in Figure 1. For the construction sector, material costs make up 60-70% of the cost of construction work, labor costs account for 10-20%, and the remaining 10-20% refers to machinery costs (El-Gohary & Aziz, 2014; McTague & Jergeas, 2002). Figures 4 to 7 demonstrate the impact of Covid-19 on the production and business costs of construction enterprises, covering direct materials costs, direct labor costs, machinery ownership and operating costs, and indirect costs (i.e., company management cost, construction site management cost, delivery cost, and bank interest payments). Most construction companies reported that, although affected by the Covid-19 pandemic, their production and business costs have remained unchanged, while a number of enterprises reported that these costs have decreased. The construction industry employs a large number of laborers compared to other industries; hence, the labor demand of construction companies is significantly affected by the spread of Covid-19. The findings of Araya (2020) indicated that the workforce of a construction project may be reduced by between 30% and 90% due to the COVID-19 pandemic. As shown in Figure 8 and Figure 9, the labor demand of enterprises has fluctuated during the pandemic. The majority of construction firms revealed that their permanent labor demand has remained unchanged, at 57.14% and 51.14% for the public and private sectors respectively. Only 6.82% of privately owned companies reported that their permanent labor demand had decreased, whereas this figure for publicly owned companies is higher, at 17.14%. In contrast, the percentage of private enterprises whose contractual labor demand has increased is 56.82%, while this figure for public enterprises is only 37.14%. This finding indicates that private construction companies tend to use more temporary labor during the Covid-19 pandemic than public construction companies. As demonstrated in Table 3, Covid-19 has a significant impact on numerous aspects of the organisation and management of construction enterprises. The findings indicate that 'planning change' is ranked 1st, which shows that Covid-19 has a high impact on the planning changes of both public and private enterprises (RII = 3.00 and 3.27 respectively). This finding is supported by Gamil and Alhagar (2020), who demonstrated that construction planning is likely to be significantly affected during the Covid-19 pandemic. With RII = 2.89, 'policy-making change' is ranked 2nd by the respondents working in the public sector, while this aspect is ranked 3rd by those working in the private sector (RII = 3.20).
In contrast, participants working in the public sector ranked 'reward and well-being program' in 3rd position (RII = 2.60), whereas participants working in the private sector ranked this aspect in 2nd position (RII = 3.26). The findings reveal that these aspects of the organisation and management of construction enterprises are noticeably affected by the Covid-19 pandemic. Communication within an enterprise is ranked 4th in both the public and private sectors, with RII = 2.54 and 2.70 respectively, indicating that this aspect is moderately affected by Covid-19. Several other aspects of construction enterprises, such as work culture, competencies and reputation, are ranked in the last positions (RII = 2.53, 2.50, and 2.23 respectively), which shows that the Covid-19 pandemic has a low impact on these aspects.

The impact of the Covid-19 pandemic on the construction workforce

The labor market is being significantly affected by the Covid-19 pandemic, including its effect on the incomes of millions of construction laborers worldwide. Figure 10 illustrates the impact of Covid-19 on the income of the construction workforce. Most respondents, 52.85%, indicated that their incomes have remained unchanged despite the effects of Covid-19. 44.32% of respondents working in the private sector explained that their incomes have decreased, while this figure for the public sector is higher, at 48.57%. Only 2.27% of laborers working in private enterprises reported that their incomes have increased during the spread of Covid-19, whereas no laborer working in public enterprises reported an income increase. This finding is supported by the RII results, which rank income as the most affected factor, in first position with RII = 3.33 (Table 4). The result shows that Covid-19 has a very high impact on the incomes of the construction workforce. Although their incomes are affected by Covid-19, very few respondents have received support from their companies. As shown in Figure 11, only 7.95% of privately owned enterprises have introduced reward and well-being programs to support their employees, while this figure for publicly owned enterprises is higher, at 17.14%. In contrast, 51.14% of respondents working in the private sector reported that their companies found it more difficult to carry out support programs for their workforce, whereas this figure for the public sector is 42.86%. As shown in Table 4, with RII = 3.49, the factor 'mental health' is ranked in first position by respondents working in the public sector, while respondents working in the private sector assessed this factor in second position (RII = 3.25). This finding indicates that the Covid-19 pandemic has a significant impact on the psychology of the construction workforce. Indeed, during the worldwide spread of Covid-19, many countries have experienced long periods of lockdown and quarantine, a main cause of negative effects on human mental health (Torales, O'Higgins, Castaldelli-Maia, & Ventriglio, 2020; Xiong et al., 2020). The surveyed respondents ranked 'motivation' in third position in both the private and public sectors (RII = 3.11 and 3.31 respectively), showing that Covid-19 has moderately affected the work motivation of the construction workforce. The factors 'productivity' and 'physical health' of laborers are ranked last, with RII = 3.02 and 2.90 respectively, which suggests that Covid-19 has a comparatively low effect on the productivity and physical health of the construction workforce.
Recently, although Vietnamese construction labor productivity has been enhanced (Hai & Van Tam, 2019; Nguyen, Van Tam, Dinh, & Quy, 2020; Van Tam, Huong, & Ngoc, 2018; Van Tam, Quoc Toan, Tuan Hai, & Le Dinh Quy, 2021), the existence of Covid-19 can have a negative impact on construction productivity improvement. This finding is supported by Alenezi (2020), who demonstrated that Covid-19 is a cause of low productivity among construction workers. In Oman, construction companies are also reducing their staff, and the workforce is mostly unemployed (Al Amri & Marey-Pérez, 2020).

The impact of the Covid-19 pandemic on construction project performance

Numerous construction projects globally are being impacted by Covid-19 in various respects. Figures 12 to 15 show the impact of Covid-19 on the performance of construction projects in terms of schedule, quality, cost, and safety. The surveyed respondents indicated that most construction projects have fallen behind schedule due to the effect of the Covid-19 pandemic, accounting for 61.79% of the total, while only 3.25% of respondents explained that construction projects have run ahead of schedule. This finding is supported by the results of the RII method. In contrast, the majority of respondents indicated that project quality has been maintained and safety has been ensured during the construction process. In particular, 82.93% of respondents explained that the quality of the construction project has remained unchanged, while only 13.82% of respondents reported that project quality has decreased. Besides, 73.17% of respondents stated that the safety of the construction process has been ensured, compared to only 14.63% of respondents who suggested that it has decreased. These findings are supported by the RII results, which ranked the safety and quality of construction projects in the last positions, with RII = 2.24 and 2.09 respectively, showing that Covid-19 has a low impact on the safety and quality of construction projects. The surveyed respondents ranked 'stakeholder communication' and 'environmental issues' in 3rd and 4th position (RII = 2.80 and 2.27 respectively), which reveals that the Covid-19 pandemic has a moderate impact on stakeholder communication and environmental issues within the construction process. These findings are further supported by Alenezi (2020), who explained that Covid-19 is a primary cause of poor safety conditions, poor project scheduling and planning, and poor communication with other parties.

Conclusions and Recommendations

Construction activities, without exception, have been affected by the pandemic. In general, the effects are negative and multilevel (Figure 16). At the organisational level, there has been a clear decline in overall revenue. Although some types of costs have remained stable or been reduced, this is caused by frozen operations or a shortage of contracts. Demand for labor has shown signs of increasing during the epidemic. This is quite surprising, because it is the opposite of some other areas such as gastronomy, tourism, services, and non-food retail (Spurk & Straub, 2020), or leisure & hospitality and retail trade (Kurmann, Lale, & Ta, 2020). Having said that, construction businesses suffer in many ways. There is not much difference between the public and private sectors, as the three most affected dimensions in both comprise planning change, policy-making change, and reward and well-being programs.
Meanwhile, reputation was considered hardly affected. This is quite surprising compared with other studies, in which the authors believe that reputation can suffer greatly during times of crisis (Abimbola et al., 2010; Patterson, 1993; Šontaitė-Petkevičienė, 2014). According to the majority of the respondents, employee income and welfare have decreased. This is understandable, because companies' revenue has been hit hard in the context of a bleak market and local shutdowns in many places. This situation is likely to last, and even worsen, as the economic shock takes hold, not much different from the Spanish flu pandemic of 1918 (Brainerd & Siegler, 2003). In addition to income, both public and private practitioners believe that their psychology and motivation are seriously affected by the outbreak. This finding suggests a real need for models and policies related to mental health care in the workplace. At the project level, schedule, cost and communication were considered to be under the most pressure. We need to pay attention to this finding, because this trio has long been seen as greatly influential to the success of a project (Andersen, Birchall, Jessen, & Money, 2006; Belout & Gauvreau, 2004). Taken together, these results provide a rather panoramic picture of the construction industry in the Covid-19 era. The pandemic, like an atomic bomb, has already exploded, and it has left many serious and persistent consequences. Research such as this paper, although still sketchy, will serve as a foundation for policymakers and managers to envision those consequences, towards building countermeasures to help revive the economy in general and the construction industry in particular. Although the Vietnamese government has introduced several supportive policies, construction businesses have also actively implemented solutions to maintain their business and operational activities and contain the COVID-19 pandemic's negative impacts. Accordingly, many solutions have been implemented by construction enterprises, such as: cutting staff; reducing workers' wages; reducing bonus and welfare regimes; reducing other costs (e.g. advertising, training); delaying payment of wages and allowances to employees; negotiating late payment of bank interest; negotiating advance payments; and applying for specific state support mechanisms for businesses. However, these solutions are still temporary, and it is challenging for them to deliver long-term effectiveness in the face of the complicated developments of the COVID-19 epidemic. Therefore, in order to help construction businesses sustain construction activities and work towards long-term sustainable values should the COVID-19 pandemic persist, the authors propose several recommendations as follows: (1) Restructuring the management system; rearranging human resources; providing a flexible working framework; and developing a job management program for cases where employees work at home, helping to keep tasks from being interrupted, ensuring quality and productivity, and preventing risks caused by COVID-19. (2) Developing immediate and long-term scenarios and measures to ensure regular, continuous, and minimally interrupted construction work during the COVID-19 outbreak. These should ensure both labour safety and epidemic safety while maintaining project implementation efficiency. (3) Promoting the application of information technology to production and business activities to accelerate digital transformation in enterprises.
This spans the work of planning, finance, management, human resources, payroll, and site management. Stakeholders should integrate all work systems into a single system throughout the business. It is advised that management integrate most business processes in their operations, from the enterprise level down to the management and implementation of the construction site. (4) Ensuring a sustainable supply chain: enterprises should control and assess the degree of cooperation of suppliers and essential business partners, and assess the availability of resources, the capacity for cooperation, and the readiness of these partners to respond to diseases. They should identify backup vendors or alternative partners in case an existing partner cannot support the business, and evaluate contract terms and insurance policies to ensure coverage within the contract's scope in relation to the delayed transfer of resources. (5) Ensuring financial safety: enterprises should control the actual cash flow that is regularly circulating, to minimise the possibility of a cash-flow shortage due to the decline in revenue, and should ensure the financial supply for activities at enterprises and construction sites to be carried out continuously. The impact on working capital in the supply chain should be appraised. Debt obligations should be carefully reviewed to identify possible breaches of contract (penalties due to late progress, late payment of interest) and to evaluate potential consequences. Enterprises should actively connect with lenders and other stakeholders in the project to ensure payments are received on time, and proactively rearrange debts and alternative financing sources. They should assess the consequences that occur when work is interrupted or delayed, and review insurance policies to assess the likelihood of compensation for production disruptions and to clarify the scope of coverage should the outbreak continue to develop in complicated ways. This study has some limitations which have to be pointed out. This study was a local, not global, study conducted using web-based questionnaire survey data. The article does not include quantitative parameters to help readers understand the extent of the damage. The depth of the study is also limited, as there are no statistical analyses of correlation, causality or regression. In conclusion, the results of this study highlight the multilevel, multidimensional nature of the epidemic's impact on the construction sector. It would be beneficial to determine the key impacted areas in order to develop industry-wide policies for dealing with catastrophic events and developing in a new normal. Our study, being of an exploratory and interpretive nature, raises a number of opportunities for future research, for instance: the hardship experienced by construction workers and the support they seek; safe working solutions in the context of an epidemic; or even rethinking construction industry reform strategies.
Effect of a prediction tool and communication skills training on communication of treatment outcomes: a multicenter stepped wedge clinical trial (the SOURCE trial)

Summary

Background. For cancer patients to effectively engage in decision making, they require comprehensive and understandable information regarding treatment options and their associated outcomes. We developed an online prediction tool and supporting communication skills training to assist healthcare providers (HCPs) in this complex task. This study aims to assess the impact of this combined intervention (prediction tool and training) on the communication practices of HCPs when discussing treatment options.

Methods. We conducted a multicenter intervention trial using a pragmatic stepped wedge design (NCT04232735). Standardized Patient Assessments (simulated consultations), using cases of esophageal and gastric cancer patients, were performed before and after the combined intervention (March 2020 to July 2022). Audio recordings were analyzed using an observational coding scale, rating all utterances of treatment outcome information on the primary outcome (precision of provided outcome information) and on secondary outcomes such as personalization, tailoring and use of visualizations. Pre vs. post measurements were compared in order to assess the effect of the intervention.

Findings. 31 HCPs of 11 different centers in the Netherlands participated. The tool and training significantly affected the precision of the overall communicated treatment outcome information (p = 0.001, median difference 6.93, IQR (−0.32 to 12.44)). In the curative setting, survival information was significantly more precise after the intervention (p = 0.029). In the palliative setting, information about side effects was more precise (p < 0.001).

Interpretation. A prediction tool and communication skills training for HCPs improves the precision of treatment information on outcomes in simulated consultations. The next step is to examine the effect of such interventions on communication in clinical practice and on patient-reported outcomes.

Funding. Financial support for this study was provided entirely by a grant from the Dutch Cancer Society (UVA 2014-7000).

Research in context

Evidence before this study. We systematically searched the literature on existing clinical prediction models for treatment of patients with esophageal and gastric cancer. We included several search terms for 'esophageal cancer' or 'gastric cancer' in combination with search terms for 'prediction model', 'survival', 'adverse events' and 'quality of life' to search the databases of MEDLINE, EMBASE, PsycINFO, CINAHL, and The Cochrane Library (January 1st, 2000 to February 6th, 2017). Forty-seven models were found, varying in predicted outcomes, but mostly aimed at survival after curative resection. We were unable to perform meta-analysis due to inadequately reported model calibration and considerable bias in the reported studies. Furthermore, most models lacked external validation, indicating an impediment to applying the models in clinical practice. Moreover, only few models predicted probabilities of side effects or complications, and none focused on patients' health-related quality of life, despite its relevance. We concluded there is a clear need for new prediction models for outcomes of esophageal and gastric cancer and for more investigation of their applicability in clinical practice. To fill this gap, we developed a prediction tool, with underlying validated models, and supporting communication skills training on the use of this tool in clinical practice.

Added value of this study. Our current clinical trial shows the effect that clinical application of such prediction models can have on the communication of health care providers about treatment outcomes. Providing health care providers with a tool presenting clear and easy-to-understand visualizations of personalized treatment outcome data, together with training equipping them with the right skills for information giving that is precise and tailored to a patient's information needs and understanding, affects their information giving in a simulated setting. Application of such an intervention can result in patients receiving information that is more precise, more supported by visualizations, more personalized to clinical characteristics and more tailored to individual patients' needs.

Implications of all the available evidence. We provided the first evidence for the effects of clinically applying prediction models in esophageal and gastric cancer treatment. As we found promising results for the use of the prediction tool in simulated practice, the next step is to investigate the effects on health care providers' communication in real-life clinical practice. We are currently assessing this effect as part of the same stepped wedge clinical trial (the SOURCE trial), in addition to effects of the combined intervention on patient-reported outcomes, such as patients' knowledge about the expected treatment outcomes and their evaluation of the decision. If similar effects are found at real-life outpatient clinics, health care providers should be encouraged to implement the tool and training in their daily clinical practice.

Introduction

Esophageal and gastric cancers are high-incidence cancers that cause more than 1.3 million annual deaths worldwide.1 An array of treatment options is available both in the curative and the palliative setting, comprising different combinations of chemotherapy, radiotherapy and surgery, or best supportive care (BSC).2,3 The outcomes of these options differ significantly in terms of survival, risk of side effects and complications, and expected health-related quality of life (HRQoL).2,4,5,7,8
But, irrespective of the specific patient role or decision-making process, it is essential for healthcare providers (HCPs) to thoroughly inform patients about potential outcomes of different treatment options.10-13 However, HCPs currently underuse clinical outcome data to inform patients on treatment and treatment-related outcomes.14,15 For outcome information to benefit patients, it must be evidence-based, i.e., relying on the best available and most up-to-date evidence. Furthermore, patients desire outcome information to be sufficiently precise, i.e., offering clarity, concreteness, and substantial details.16 However, actual treatment outcomes can significantly vary among patients, depending on specific patient
characteristics, such as age and performance status, or tumor characteristics, such as number and location of metastases.17,18 Additionally, patients may vary in their personal information needs and preferences, dictating the type, amount, and level of detail they wish to receive and are able to process. Thus, to effectively inform patients, outcome information must not only be evidence-based and precise, but also personalized to clinical characteristics and tailored to individual patients' preferences.21,22 Because to date no personalized and clinically applicable aids exist for HCPs treating patients with esophagogastric cancers,23 we iteratively developed an online prediction tool (named 'Source') for use in the consultation room.22 Additionally, we developed supporting communication skills training (CST) for HCPs to assist them in improving the information that they communicate to patients about treatment outcomes, as CST has previously been proven to be effective in changing oncology HCPs' communication behaviors.22,24 Source is a web-based prediction tool which shows visualizations of personalized data on survival, side effects and complications, and HRQoL, making use of underlying prediction models and meta-analyses.22,25-31 The blended CST equips HCPs with the ability to effectively convey complex risk and benefit information to patients in a tailored way, using the Source tool during decision making.22 Both tool and training underwent pilot testing, with promising preliminary evaluation results.22 This study aims to investigate the effect of the tool and training on the way HCPs inform patients about treatment outcomes. The primary outcome is the (numerical) precision with which outcome information is given. Secondary outcomes are 1) other characteristics of the communicated outcome information itself, such as the use of visualizations or natural frequencies, 2) communication approaches used by HCPs during the consultation, such as information personalization (to clinical characteristics) and tailoring (to individual preferences), and 3) HCPs' self-reported satisfaction, intentions and evaluation of the intervention.

Methods

Design
This study is part of a multicenter pragmatic stepped-wedge trial (the SOURCE trial, NCT04232735) investigating the combined effect of the online prediction tool and CST on HCPs' communication of treatment outcomes. The trial examines the effect of the combined intervention in simulated consultations as well as in real life. This paper reports on the effects of the combined intervention in a simulated setting; see Fig. 1 for the design.
Due to the limited number of available subjects within our time window, we had to opt for a design that offers enhanced statistical power compared to a randomized controlled trial. As such, centers were geographically grouped into four parties and the combined intervention was introduced sequentially to each party (non-randomized). As is characteristic of the stepped-wedge design, the intervention was introduced at different moments in time to the four parties, eliminating potential effects in time due to unexpected situations such as the COVID-19 pandemic.

Setting and participants
Participants were HCPs in surgical, radiation and medical oncology, treating patients with esophageal or gastric cancer and regularly performing treatment information consultations with these patients. A treatment information consultation was defined as a consultation in which one of the HCP's main goals is to inform the patient on the outcomes of treatment(s), for example when decisions about treatment have to be made. Oncologists-in-training were also considered eligible, as in the Netherlands they work under supervision yet communicate with patients largely independently.32

Sample size
The SOURCE trial was powered to detect a medium-sized effect (Cohen's d = 0.5) of the combined intervention in real-life consultations, assuming an intracluster correlation (ICC) of 0 and a power of 80%. The intervention was considered successful if a significant difference (α = 0.05) was observed in the precision of information about treatment outcomes provided by HCPs (primary outcome). This resulted in a required sample size of 21 HCPs, i.e., clusters, who would each include real-life consultations with 6 patients (3 pre-intervention measurements, 3 post-intervention measurements). In addition, 2 Standardized Patient Assessments (SPAs) per HCP were conducted (1 pre-intervention, 1 post-intervention). The SPAs were used for the current analysis of the effects in a simulated setting; the analysis of real-life consultations will be reported in a separate paper.

Recruitment
The surgical, radiation and medical oncology departments of academic and nonacademic hospitals were approached through existing networks until at least 30 HCPs were recruited, allowing for a possible dropout of 30%. HCPs were informed about the study, received an information letter, and were asked for written informed consent.

Online prediction tool ('Source')
The tool, which builds on previously developed prediction models and outcome data,28-31,33 is designed to be used by HCPs (i.e., physician-assisted) during decision-making consultations. HCPs can tailor the type and amount of information and the types of visualizations to the needs and preferences of an individual patient. Source was developed using an iterative, user-centered approach, involving HCPs as well as patients, patient advocates and field experts.22

Communication skills training (CST)
The blended CST consisted of an e-learning module, two face-to-face group sessions and an individual online booster feedback session; see Fig. 1.
Learning goals were for the HCP to 1) be able to name the most important do's and don'ts for treatment outcome communication (risk and benefit information; knowledge), 2) have a positive outlook on using numbers to inform patients and on their ability to inform patients in an evidence-based, precise, personalized and tailored manner (attitude), 3) be able to use and incorporate the Source tool in their clinical practice (skills) and 4) be able to provide information that is tailored to patients' informational needs and level of understanding (skills).22 HCPs were informed about all functionalities of the tool and its underlying data through the e-learning.35-37 Three out of twenty face-to-face sessions were held online, via Zoom, due to COVID-19 regulations. The training was accredited by the Netherlands Association of Internal Medicine, the Dutch Association of Physician Assistants and the Dutch Nurse Specialists Registry.

Standardized patient assessments (SPAs)
The standardized cases reflected either a scenario of a patient with metastatic gastric cancer opting for palliative treatment (medical oncologists) or of a patient with localized esophageal cancer opting for curative treatment (surgical and radiation oncologists), who met with the HCP to discuss available treatment option(s). Two scripts describing the background of two rather highly educated patients (an accountant and an archeologist) were attached to both scenarios,32 resulting in four different cases (see Appendix 2). HCPs received a simulated medical file. Background stories and actors were counterbalanced, i.e., randomly assigned between pre- and post-intervention for each HCP. Two professional male actors were instructed to play the cases in a standard way, which was not overly emotional,32,38 and not to initiate discussion of treatment outcomes. The patient script included a set of standard questions and a few "if-then" rules, including the instruction to ask about survival benefits or risks of side effects only when certain outcomes of treatment were addressed by the HCP. All patient scripts and medical files, based on those of a previous RCT, were further developed in a multidisciplinary team and adjusted based on a pilot study.22,32,38 SPAs took place in consultation rooms at the hospitals' outpatient clinics and online, via GoToMeeting, Skype or Zoom, due to COVID-19 restrictions. Whether the consultations occurred in person or online was kept constant between pre- and post-measurements. SPAs were either audio-recorded (in person) or video-recorded (online) and stored safely according to the General Data Protection Regulation (GDPR), with end-to-end encryption. Time intervals between the SPAs and the start of the training period varied as a result of COVID-19 restrictions, between 1 and 6 months (T0-T1) and between 4 and 9 months (T1-T2). The total duration of the training period (e-learning to booster feedback session) was ±9 weeks for each HCP.

Sample characteristics
Participants: at T0, HCPs reported their gender, age, nationality, function (medical specialist, oncologist-in-training, physician assistant, nurse specialist), expertise (surgical, radiation or medical oncology), years of experience (including residency) and receipt of communication skills training during medical school and/or residency (yes/no). SPAs: HCPs rated the SPA's perceived realism and comparability to clinical practice on a Visual Analogue Scale (VAS), at both pre- and post-intervention SPAs (T0, T2).
Primary outcome
Observed (numerical) precision of communicated outcome information was considered the primary outcome. As a validated measure for this outcome was not at hand,39 the Outcome Information Scale for Oesophago-Gastric cancer (OIS-OG) was developed. For this scale, treatment outcome information was grouped into four distinct outcome categories: A) survival, B) side effects and complications, C) HRQoL, and D) treatment response or recurrence. These outcome categories were further broken down into information about individual items of different treatments and complaints, such as 'chemotherapy' or 'nausea', together forming a coding framework. Every utterance by the HCPs regarding outcome information was analyzed and assessed within the coding framework to evaluate its numerical precision. This assessment aimed to determine the level of richness and detail with which the information was conveyed by the HCPs, on a scale from one, e.g., 'I don't know', to four, e.g., '50% of people'. For each of the coding framework's individual items, only the consultation's maximal score (i.e., most precise utterance) was included in the analysis. All consultations were independently coded by two coders, who were blinded to the experimental condition. For a full description of the OIS and the coding process, see Appendix 1.

Secondary outcomes
Secondary outcomes were coded both at the level of the SPA as a whole and at the level of the items that were coded for the primary outcome. See Table 1 for a description of all secondary outcomes.

Statistical analyses
Primary outcome
For each of the two coders, all item scores for precision were summarized per outcome category (survival, side effects and complications, HRQoL, and treatment response or recurrence) and overall (all outcome categories together). The particular items (treatment options, side effects, etc.) and the number of items that were coded as having been discussed differed between HCPs. As such, the scales on which the precision of information provided in the SPAs was scored differed between HCPs. For example, an HCP who was coded to have discussed three items, each on a 1-4 Likert scale, scored on a theoretical 3-12 scale. To account for these scale differences between HCPs, the mean of the items across outcome categories was calculated and rescaled to a 0-100 scale, taking into account the number of items that were coded for each HCP. The formula for this transformation can be found in Appendix 1. Per SPA, rescaled scores were then averaged over the two coders for use in paired-samples analysis of the difference between pre- and post-consultation. Based on the ordinal nature of the data and the assumed non-normal distribution of the observed scores, we used non-parametric paired-samples Wilcoxon signed rank tests to test pre- and post-differences (α = 0.05). Considering the educational nature of the intervention, we did not hypothesize lower numerical precision after the intervention a priori, and therefore specifically tested our hypothesis that numerical precision increased significantly, using one-tailed tests for the primary outcome. Analysis was performed using R, version 4.0.3, and RStudio, version 1.3.
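As an illustration of this analysis step, the sketch below rescales per-HCP mean item scores to a 0-100 scale and runs a one-tailed paired Wilcoxon signed rank test. It is not the trial's own code (the authors analysed their data in R, and the exact rescaling formula is given in their Appendix 1); the linear 1-4 to 0-100 mapping and the pre/post scores are assumptions made for the example.

```python
# Illustrative sketch, not the trial's analysis code (the authors used R 4.0.3).
# The linear mapping of 1-4 item scores to 0-100 is an assumption standing in
# for the exact transformation given in the paper's Appendix 1.
import numpy as np
from scipy.stats import wilcoxon

def rescale_precision(item_scores, lo=1, hi=4):
    """Mean of the coded items for one SPA, rescaled so lo -> 0 and hi -> 100."""
    return (np.mean(item_scores) - lo) / (hi - lo) * 100

# Hypothetical per-item precision scores (already averaged over the two coders)
# for four HCPs, before (T0) and after (T2) the intervention.
pre  = [rescale_precision(s) for s in ([1, 2, 2], [2, 3, 1], [1, 1, 2, 1], [2, 2, 3])]
post = [rescale_precision(s) for s in ([2, 3, 4], [2, 3, 3], [2, 3, 2, 2], [4, 4, 4])]

# One-tailed paired Wilcoxon signed rank test of the hypothesis that
# precision increased after the intervention (alpha = 0.05).
stat, p = wilcoxon(post, pre, alternative="greater")
print(f"W = {stat}, one-tailed p = {p:.4f}")   # exact one-sided p = 0.0625 for n = 4
```

Note that the rescaling makes HCPs who discussed different numbers of items comparable, which is exactly why the paper introduces it before any pre/post test is run.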
In addition to analyses of the total sample, we also performed separate analyses stratified by the curative and the palliative setting. If significant effects on the overall precision of outcome communication in either of the settings were found, further additional analyses were performed for each of the four outcome categories separately. Effect sizes (r) were calculated by dividing the z-statistic by √N (r = 0.1-0.3 small effect, r = 0.3-0.5 moderate effect, r > 0.5 large effect). Moreover, in the event of significant pre- and post-differences, the initiative taken by the simulated patient to elicit utterances on treatment outcomes was analyzed as an independent covariate, to rule out the possibility that an effect of the intervention could be explained by the simulated patient taking more initiative in one of the experimental conditions. This hypothesis was tested by modeling the difference between T0 and T2 (Δ) using the patients' initiative as a covariate in a proportional odds model, which is a generalization of non-parametric models.42 The effect of patients' initiative was tested for significance using the rms package in R. The assumption of proportional odds was formally tested using the Brant test for each proportional odds model that was fitted.

Secondary outcomes
Secondary outcomes were analyzed in a similar manner to the primary outcomes, using paired-samples Wilcoxon signed rank tests (α = 0.05). Secondary outcomes coded at the item level (e.g., use of supporting visualizations, use of time frames, etc.) were summarized like the primary outcome, but using frequencies instead of scale scores. For each coder, frequencies were divided by the total number of items scored by this coder to calculate the relative frequency. The relative frequency was then averaged over the two coders. Due to the large number of secondary outcomes, all secondary analyses were corrected for multiple testing using familywise type-I error correction with the Bonferroni method. Familywise correction here implied grouping 'families' of tests together that analyzed the same subdivisions of the sample. Bonferroni correction was applied by multiplying by the total number of tests performed per family of tests.

Role of the funding source
Financial support for this study was provided entirely by a grant from the Dutch Cancer Society (UVA 2014-7000). The funding agreement ensured the authors' independence in designing the study, interpreting and analyzing the data, and writing and publishing the report. LvdW, SK, EvA and EK had full access to all data reported in the manuscript. LvdW, HvL and ES take responsibility for the decision to submit for publication.

Results
After inviting a total of 56 HCPs, 40 HCPs showed interest in participation and signed informed consent. Of these, 31 HCPs from 11 academic and non-academic centers completed the SOURCE trial (participated in two SPAs and included six patients), including 11 HCPs from surgical departments, 8 from radiation oncology departments and 12 from medical oncology departments. Nine dropouts occurred during the trial due to a variety of reasons, such as pregnancy, illness, and organizational changes. SPAs took place from March 2020 to July 2022, when all SPAs of the 31 participating HCPs had been collected. See Table 2 for participant and SPA characteristics.
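The two bookkeeping steps described above, the effect size r = z/√N and the familywise Bonferroni correction, are simple enough to spell out. The sketch below is again an illustration in Python rather than the authors' R code, and the z-statistic and raw p-values fed to it are made up for the example; multiplying each p-value by its family size is why corrected values can exceed one, as noted under Table 3.

```python
# Illustration with hypothetical numbers; the trial's analyses were run in R.
import math

def effect_size_r(z, n):
    """r = z / sqrt(N); 0.1-0.3 small, 0.3-0.5 moderate, > 0.5 large."""
    return z / math.sqrt(n)

def bonferroni_family(p_values):
    """Familywise Bonferroni: multiply each raw p-value by the family size.
    Corrected values may exceed 1; significance is still judged at 0.05."""
    k = len(p_values)
    return [p * k for p in p_values]

print(round(effect_size_r(z=3.5, n=31), 2))     # 0.63 -> large effect
print(bonferroni_family([0.004, 0.013, 0.40]))  # [0.012, 0.039, 1.2]
```

With n = 31 HCPs, a hypothetical z of 3.5 reproduces an r of 0.63, the magnitude reported for the overall precision effect, which is a handy way to see how the reported effect sizes relate to the test statistics.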
Primary outcome
The Source tool and training had a significant positive effect on the precision of the overall communicated treatment outcome information (p < 0.001; r = 0.63, large effect, median difference 6.93, IQR (−0.32 to 12.44)). For curative cases, overall treatment outcome information (p = 0.013; r = 0.51, large effect, median difference 5.27, IQR (−3.91 to 11.84)) and information about survival (p = 0.029; r = 0.44, medium effect, median difference 0, IQR (0-12.38)) were significantly more precise after the intervention. For curative cases, we did not find a significant effect on the precision of information about side effects and complications, HRQoL, or treatment response or recurrence. In the palliative setting, information about side effects was significantly more precise after the intervention (p < 0.001). Results from the proportional odds models did not indicate that the difference in simulated patient initiative for utterances on treatment outcomes significantly predicted the difference in overall precision between T0 and T2, for both curative and palliative cases, across the outcome categories (p > 0.05 for all). For curative cases, the assumption of proportional odds was not violated. Due to the small sample size and the relatively large number of ordered categories, the Brant test could not be computed for the models for palliative cases, and the assumption could therefore not be assessed.

Secondary outcomes
At the SPA level, the number of remarks indicating that treatment outcomes were personalized to patients' clinical characteristics was significantly higher after the intervention, for curative cases (p = 0.001, median difference 2.00, IQR (1.00-3.50)) as well as palliative cases (p = 0.032, median difference 1.50, IQR (1.00-2.00)). Also, the number of attempts to tailor information to individual information needs or preferences was significantly higher after the intervention, for both curative (p = 0.001, median difference 2.00, IQR (1.50-4.50)) and palliative cases (p = 0.004, median difference 2.00, IQR (1.75-3.00)). The duration of the consultation was not significantly different between pre- and post-intervention, and neither were HCPs' satisfaction, their clinical behavioral intentions to use numbers, or their number of attempts to check patients' current knowledge and understanding. See Table 3 for an overview. At the item level, the relative number of visualizations used in the consultation was systematically higher after the intervention for almost all outcome categories in the curative and palliative cases, although only significantly higher for overall outcome communication (cur.: p < 0.001, median difference 0.34, IQR (0.16-0.52); pall.: p = 0.045, median difference 0.51, IQR (0.48-0.56)), survival information (cur.: p < 0.001, median difference 1.00, IQR (0.94-1.00); pall.: p = 0.045, median difference 1.00, IQR (0.88-1.00)), curative HRQoL information (p = 0.045, median difference 0.29, IQR (0.00-0.45)) and palliative side effects information (p = 0.045, median difference 0.65, IQR (0.57-0.81)). Also, for curative cases, the relative number of natural frequencies used to indicate a treatment outcome was significantly higher after the intervention, for overall outcome communication (p < 0.001, median difference 0.11, IQR (0.04-0.15)) and survival information (p = 0.045, median difference 0.58, IQR (0.12-0.79)). All other secondary outcomes coded per item were not significantly different between pre- and post-intervention for either curative or palliative cases. See Fig. 4
for an overview of secondary outcomes coded on the item level for curative and palliative cases. HCPs assessed the training with a 7.7 on average, and the e-learning, as part of the training, with a 7.8 (1: very bad to 10: very good). See Appendix 4 for more details on the training evaluation.

Discussion
In this stepped-wedge intervention study, we demonstrated that the combined utilization of a prediction tool and communication skills training significantly enhanced the precision of the overall information given about treatment outcomes in a simulated context of esophageal and gastric cancer. Specifically, for the curative simulated scenarios, HCPs' survival information became more precise following the intervention. For the palliative simulated scenario, HCPs' information about side effects was more precise after the intervention. Furthermore, the intervention positively influenced the personalization of outcomes to clinical characteristics and the tailoring of communication about treatment outcomes to individual patients' needs and preferences. Among all treatment outcomes displayed in the Source tool, the tool provides the most comprehensive and most personalized information for survival. Indeed, among other outcomes, the relative number of utterances supported by visualizations increased significantly in both the curative and the palliative setting for survival information. Importantly, in the curative setting, HCPs' extensive use of the tool, combined with their acquired skills and attitudes, likely resulted in them informing patients about chances of survival in a significantly more precise manner than before. For the palliative setting, inspection of the data in Fig. 3 revealed that HCPs already gave quite precise information on survival before the introduction of the intervention, leaving little room for improvement. As such, we did not observe an effect on the precision of survival information in this setting. It might be that the HCPs participating in the palliative setting, medical oncologists, were already more used to, or more skilled in, providing specific information about survival, given their daily clinical practice of discussing treatments whose main goal is to prolong patients' lives. Moreover, the current simulated setting involves a highly educated patient who is not overly emotional, is curious about numbers, and asks for more information when the HCP mentions a difference in survival between treatment options. A consultation with such a patient might lack some of the more difficult real-world challenges that clinicians face when informing patients in clinical practice. Still, the training and tool did improve HCPs' skills and attitude in providing clinically personalized information, asking the right questions in order to tailor the information to individual needs and preferences, and supporting their information with visualizations.
The precision of side effect and complication information varied based on the setting in which it was provided by HCPs. Interestingly, in the palliative setting there was an improvement in precision, whereas in the curative setting no such improvement was observed. In line with this, a distinct pattern was found in the use of visualizations, which were used significantly more at post-measurement in the palliative setting, compared to pre-measurement, whereas in the curative setting no such increase was found. This divergence in tool usage might be attributed to differences in the experienced clinical applicability of the data used for side effects in the tool. Specifically, since data from the Netherlands Cancer Registry (NCR) were not available for side effects, mainly impersonalized meta-analysis data were presented in the Source tool, except for prediction models for urological complications, severe complications, and 30-day mortality. Possibly, the improvement in radiotherapy techniques since the CROSS trial and the rapid developments in surgical techniques, which have improved mortality and morbidity rates for treatment of potentially curable esophagogastric cancer, might lead particularly surgical and radiation oncologists to feel that the tool's complication and side effect data in the curative setting are outdated.43 In addition, the surgical risks of anastomotic leakage, for instance, seem to vary across countries in Europe and even across centers in the Netherlands,44-48 possibly contributing to HCPs' experience of the tool data being a poor 'fit' to the performance rates in their own hospital. Future engagement in enlarging the registry of morbidity data and in developing and frequently updating personalized prediction models might enhance HCPs' sense of the applicability of the data to specific patients and consequently encourage them to provide patients with precise information on side effects and complications. Interestingly, in neither the curative nor the palliative setting did the precision of HRQoL information improve. However, the relative number of utterances supported by visualizations did improve significantly for curative cases. Possibly, the information that the Source tool provided for HRQoL was not sufficiently precise for HCPs to substantiate their information with numbers. The HRQoL data used in the tool are those from the Dutch nationwide quality of life registry (the Prospective Observational Cohort study of Oesophageal-gastric cancer Patients).33 Yet, the variation in the available patient samples at the time of data collection was quite high, resulting in large confidence intervals and great uncertainty about the absolute and relative differences in HRQoL between treatment options in the tool. Another explanation might be that HCPs found it more challenging to provide patients with numbers that reflect subjective experiences, as is the case with patient-reported outcome measures (PROMs), compared to more objective information, as is the case for the tool's data on survival, side effects, and complications.22,49 For both of the aforementioned reasons, it might be that HCPs struggled with which message to convey to patients based on these data, as part of their usual storyline of the pros and cons of treatment options. Further research on the use of PROMs in the decision-making process should give more specific guidelines on how to use precise HRQoL information to benefit patients and their decision making.
Although the median difference in consultation duration suggested a change between pre- and post-intervention, we did not find a significant effect of the intervention on the overall consultation duration. This finding is in line with literature indicating that the application of shared decision making, of which precise information giving is an essential element, does not by definition require longer consultations.50 The finding that using the Source tool and having been trained in information giving does not necessarily prolong the duration of HCPs' consultations with patients may facilitate future implementation of the intervention in clinical practice. In addition, HCPs' overall positive evaluation of the utility of the training may support its transfer into clinical practice.51 Further improvements to the tool and training will be made based on feedback provided in the current trial. Altogether, the combined use of the Source tool and training has shown improvements in communication about treatment outcomes. While the overall precision, personalization, and tailoring of outcome information improved, there is room for further advancement. Information giving regarding some outcome categories could still be more precise, and alternative ways of framing outcomes are still seldom utilized to clarify treatment information. Also, from the results of the current study, we do not know whether the more precise, more personalized, and more tailored information that was provided by HCPs positively affects patients' understanding of treatment outcomes. Therefore, patients' comprehension of the Source tool is currently being assessed in a follow-up study, particularly focusing on patients with low health literacy. Nevertheless, the current results demonstrate promising effects of the tool and training on HCPs' outcome communication skills. If proven to be effective in real-life clinical practice, the use of similar tools would be imaginable in other cancer settings. In addition, further research should explore the impact of the tool and training on how treatment options are presented and on the subsequent decision-making process. Our findings need to be interpreted in the light of some limitations. First, the simulated setting that was currently used prevents drawing definitive conclusions for clinical practice, especially for the long term. The important next step, which is currently being undertaken as part of this stepped-wedge trial, is to test the effects of the intervention in real-life clinical consultations (NCT04232735). Another limitation is the limited statistical power that remains when comparing the effects for the curative and palliative cases and for all outcome categories separately. Lastly, part of the current study's SPAs and training components were performed in an online, video-call setting due to COVID-19 regulations. For SPA measurements as well as for training, this could have influenced the perceived realism and comparability to clinical practice. Still, both of these parameters were scored as acceptable (>6.5) by HCPs themselves. However, considering the simulated nature of the study, a 'Hawthorne effect' of HCPs intentionally increasing the precision of their information at post-measurement could not be ruled out.52 Yet, there is little evidence for an effect on behavior when HCPs are aware of being video recorded.53
Some strengths of the current study also deserve mention. Using actors who followed a script, instead of real patients, allowed standardization of patient characteristics and eliminated confounding by patient characteristics. This design enabled us to establish whether HCPs are able to apply the knowledge and skills gained from the intervention before investigating whether they can transfer these skills to the actual clinical setting. Secondly, prior to their use in the current study, our prediction models had been shown to have acceptable to good performance, had been updated multiple times, and had been externally validated. 27,30,54 In the future, these prediction models will be kept up to date by adding new data. Lastly, whereas many prediction tools have been developed for predicting the outcomes of several diseases, few studies have evaluated the effect of prediction tools on HCPs' communication. To our knowledge, this is the first study evaluating the effect of such a tool on the precision of outcome communication.

In conclusion, the results of a tool and training to stimulate precise, evidence-based, personalized and tailored information giving about treatment outcomes are promising in a simulated setting. The next steps are to investigate the effect on patient-reported outcomes, such as patients' knowledge of the expected treatment outcomes and their evaluation of the decision, and to implement the tool and training in clinical practice.

Fig. 1: Representation of the study design. Top: simplified design of the SOURCE trial. Bottom: detailed visualization of the current study's design, including Standardized Patient Assessments (SPAs) and the combined intervention (tool and communication skills training).

Fig. 2: An impression of different visualizations as used in Source (the online prediction tool). For this example, a case of curable esophageal cancer was used.

Fig. 4: Median frequency of secondary outcomes coded on the item level, corrected for the number of items communicated in the outcome category (relative frequency). All outcome categories and overall outcome communication are displayed separately, for curative cases (A-E) and palliative cases (F-J). Initiative for discussing the outcome information was coded as coming either from the Simulated Patient (SP) or from the Health Care Provider (HCP). *p ≤ 0.05, **p < 0.01, ***p < 0.001.

Table 1: Secondary outcomes of the standardized patient assessments (SPAs). Framing (per item) was coded as one or multiple manners (e.g., positively and negatively, or in multiple scenarios); example of positive + negative framing: "70 out of 100 patients are alive after 5 years; this means 30 out of 100 have died"; example of multiple scenarios: "…". Use of uncertainty communication (T0, T2; per item) was coded as present/absent, e.g., "these are just chances, but I don't know what it will be like for you specifically".

Table 2: Health care provider (HCP) and standardized patient assessment (SPA) characteristics. All p-values were familywise corrected for multiple testing using Bonferroni corrections, by multiplying the p-values by the total number of tests performed per family of tests; as such, p-values can be larger than one while significance levels remain 0.05 (a worked sketch of this convention follows the captions). a Significant at α = 0.05.

Table 3: Results for secondary outcomes coded on the level of the Standardized Patient Assessment (SPA).
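The familywise correction described in the note to Table 2 can be made concrete with a short sketch. The following Python snippet is only an illustration of that convention (each raw p-value is multiplied by the number of tests in its family, and values above one are deliberately left uncapped); the p-values shown are hypothetical placeholders, not values from the trial.

def bonferroni_family(p_values):
    # Multiply each raw p-value by the number of tests in its family.
    # Unlike library implementations that clip corrected p-values at 1.0,
    # this mirrors the paper's convention: corrected values may exceed 1,
    # while the significance threshold stays at 0.05.
    m = len(p_values)
    return [p * m for p in p_values]

raw = [0.004, 0.020, 0.300]         # one hypothetical family of three tests
corrected = bonferroni_family(raw)  # -> [0.012, 0.060, 0.900]
for p_raw, p_corr in zip(raw, corrected):
    print(f"raw p={p_raw:.3f}  corrected p={p_corr:.3f}  significant={p_corr < 0.05}")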
Effect of a paclitaxel-eluting metallic stent on rabbit esophagus

The use of self-expanding metallic stents (SEMS) is the current treatment of choice for malignant gastrointestinal obstructions. A paclitaxel-eluting metallic SEMS (PEMS) may have an antitumor effect on esophageal tissue. PEMS with 10% paclitaxel or conventional SEMS were inserted into the lower esophagus of rabbits. At 1, 2, 4 and 6 weeks after stent insertion, the rabbits were sacrificed and the status of the stent was examined, along with any macroscopic or microscopic mucosal changes in the esophageal tissue. All the rabbits survived without complications, and no migration occurred following stent insertion. The number of cases with proximal obstruction increased in a time-dependent manner, with no significant difference between the two groups. Gross examination showed a similar tissue reaction to the stents at 1, 2 and 4 weeks; inflammatory cell infiltration was higher in the SEMS group at 1 and 2 weeks, but markedly higher in the PEMS group at 4 and 6 weeks. Food-intake and weight were similar in the two groups. The results of the present study demonstrate that PEMS may serve as a safe alternative treatment strategy for esophageal obstruction. Furthermore, PEMS may inhibit tumor growth in the esophageal wall through inflammatory infiltration and targeted drug delivery. A tumor model will be required in the future for evaluating the prognosis of patients with advanced esophageal carcinoma.

Introduction Esophageal cancer is one of the most common malignancies worldwide, and is especially prevalent in China and Japan (1,2). Patients with esophageal cancer have a poor prognosis and commonly suffer from dysphagia (3). Surgery is the only form of treatment that can provide a cure for esophageal cancer, although it is suitable for less than a third of patients due to late diagnosis, advanced disease progression and tumor metastasis (4). In recent decades, metallic stent insertion into the esophagus has been widely used in the treatment of esophageal cancer, as it is less invasive, prolongs survival and improves quality of life (5). However, conventional stents can only facilitate drainage and have no antitumor effect. Furthermore, the side effects following stent insertion are non-negligible, and include tumor overgrowth, tumor ingrowth and granulation tissue hyperplasia at either end of the stent (6). In recent years, several studies have been carried out on the use of drug-eluting metallic stents for digestive system carcinoma, including a 5-Fu-eluting stent for esophageal cancer and a paclitaxel-eluting stent for biliary duct and esophageal cancers (7,8). The majority of the results demonstrated that self-expanding metallic stents (SEMS) combined with an antitumor drug allow the drug to be targeted to the wall tissue and a controlled treatment dose to be maintained over long periods of time (7,8). Paclitaxel is a novel anti-neoplastic agent currently used to treat several types of cancer (9). Paclitaxel has been demonstrated to be effective at inhibiting the proliferation of human gallbladder epithelial cells, fibroblasts, pancreatic adenocarcinoma cells and esophageal cells (10). In addition, Jeon et al (10) reported that paclitaxel-eluting metallic SEMS (PEMS) inhibited tissue hyperplasia in the esophagus, and may manage refractory benign esophageal stricture (10).
Paclitaxel exerts its pharmacological effects by binding to β-tubulin and stabilizing the polymerized microtubules (11). Paclitaxel can therefore be coated onto the SEMS in order to provide sustained release (12). In our previous study, an esophageal squamous carcinoma was created in rabbits using an endoscopic technique (13). In addition, a previous study demonstrated that the in vitro sustained release from PEMS with 10% paclitaxel lasted for >40 days, which was sufficient for observing the effect of the drug on the rabbit esophagus (14). The aim of the current study was to evaluate the safety of PEMS in the rabbit esophagus and to investigate the effect of PEMS on esophageal tissue.

Materials and methods Preparation of PEMS. The SEMS used in the present study (Niti-S polyurethane-covered stent; Garson-Flextent, Jiangsu, China) were 16 mm long, 10 mm wide in the middle and 12 mm wide at the proximal end when fully expanded, and were mounted on a 7F stent introducer set custom made by Garson-Flextent. Because the average diameter of the rabbit esophagus is ~5 mm, a stent with a 12 mm diameter flare was considered sufficient to prevent stent migration. The PEMS were loaded with 10% (wt/vol) paclitaxel (Taxol®; Jiangsu Hongdoushan Biological Technology Co., Ltd., Jiangsu, China) by the State Key Laboratory of Pharmaceutical Biotechnology, School of Life Sciences, Nanjing University (Nanjing, China). After determining the eluting stent indices, including release rates and the effect on the mucosa, PEMS with 10% paclitaxel were determined to be the most suitable choice.

Animal study Stent placement. All experimental procedures were performed in accordance with the National Institutes of Health guidelines for the humane handling of animals and were approved by the Committee on Animal Research at our institution (15). Male New Zealand white rabbits (n=48; Jiangsu Academy of Agricultural Science, Jiangsu, China), weighing 1.5-2.0 kg and housed in an environment with a 12-h dark:light cycle at 25°C with free access to food and water, were randomly assigned to a PEMS group or a SEMS group (6 rabbits in each group per time point; the group structure is sketched below). Because the rabbit malignant stricture model had only recently been established in our previous study, a normal rabbit model (13,16) was used in the present study. The 48 rabbits were fasted for 24 h prior to stent implantation. Each rabbit was anesthetized by intraperitoneal injection of pentobarbital sodium (95%; 35 mg/kg; Sigma-Aldrich, St. Louis, MO, USA) and was then placed in the left lateral position. A SEMS or PEMS was introduced into the esophagus using the 7F stent introducer set. Prior to placement of the introducer at the correct site, 1-2 ml of contrast medium (Iohexol; GE Healthcare Life Sciences, Chalfont, UK) was injected into the esophagus to confirm the correct position for the stent. The stent was then deployed in the lower esophagus. All endoscopic procedures were performed by two experienced endoscopists.

Follow-up and postmortem examination. Following endoscopic stent placement, the animals were fasted for a further 24 h prior to the reintroduction of their usual diet. During follow-up, food-intake and weight were monitored. At the 1st, 2nd, 4th and 6th week following stent insertion, 6 rabbits in each group were sacrificed by intravascular air embolism. The esophagus was excised and examined grossly.
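To make the group structure of this design explicit (two stent types crossed with four sacrifice time points, six rabbits per cell), the allocation can be sketched as follows. This Python sketch is an illustrative reconstruction under stated assumptions, namely a plain shuffled allocation with a fixed seed for reproducibility; it is not the authors' actual randomization procedure.

import random

random.seed(0)  # fixed seed so this illustrative allocation is reproducible

rabbits = list(range(1, 49))  # 48 animals, identified 1..48
random.shuffle(rabbits)

# Eight cells: {PEMS, SEMS} x {1, 2, 4, 6 weeks}, six rabbits per cell.
slots = [(stent, week) for stent in ("PEMS", "SEMS") for week in (1, 2, 4, 6)]
groups = {slot: rabbits[i * 6:(i + 1) * 6] for i, slot in enumerate(slots)}

for (stent, week), ids in groups.items():
    print(f"{stent}, sacrificed at week {week}: rabbits {sorted(ids)}")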
Images were captured to document the status of proximal esophageal obstruction caused by inflammatory hyperplasia. Each stent was gently removed from the esophagus, and the esophagus was then incised longitudinally. Esophageal wall hyperemia and proximal obstruction were evaluated. Hyperemia was graded as follows: 0, hyperemia absent; 1, hyperemia present. Proximal obstruction was graded as follows: 0, normal; 1, stricture; 2, obstruction. Following gross tissue evaluation, the lesion tissue samples were fixed in 10% formalin or stored at -80°C. Tissue samples [paclitaxel-covered segment and proximal uncovered stented segment (the part of the stent without the membrane)] were stained with hematoxylin and eosin (Wuhan Boster Biological Technology, Ltd., Wuhan, China) and examined by an experienced gastrointestinal pathologist using a CX23 microscope (Olympus Corporation, Tokyo, Japan). Weight, food-intake, stent migration, hyperemia and proximal obstruction were also recorded. A single pathologist evaluated the status of the proximal uncovered stented segment, the thickness of the epithelial layer and submucosal inflammatory cell infiltration. Thickening of the epithelial layer was measured as the distance between the tissue protruding into the lumen and the lower portion of the submucosa, and was graded as follows: 0, normal; 1, mild; 2, severe. The degree of submucosal inflammatory cell infiltration was graded as follows: 0, none; 1, mild (scattered inflammatory cells); 2, moderate (inflammatory cell infiltration in ~half of a microscopic field); 3, severe (inflammatory cell infiltration in the majority or all of the microscopic field) (17). Two endoscopists performed the stent insertions and recorded which stent (SEMS or PEMS) was inserted. Subsequently, a pathologist blinded to the type of stent inserted examined the tissue samples both grossly and microscopically.

Statistical analysis. Data are expressed as means ± standard error of the mean. Continuous variables, including food-intake following stent implantation, weight at the time of sacrifice, proximal esophageal obstruction, tissue hyperemia, thickness of the epithelial layer, and submucosal inflammatory cell infiltration, were compared by unpaired Student's t-test. One-way analysis of variance and Fisher's exact test were used to analyze hyperemia, degree of proximal obstruction, thickness of the epithelial layer and degree of inflammatory cell infiltration in the SEMS and PEMS groups (an illustrative sketch of such a comparison appears at the end of this article). SPSS version 13.0 software (SPSS Inc., Chicago, IL, USA) was used for all statistical analyses. P<0.05 was considered to indicate a statistically significant result.

Results Stent placement and follow-up. The 48 rabbits were anesthetized and the stents were placed in their esophagi. All rabbits survived the procedure, and there were no procedure-associated complications such as abdominal infection or pneumonia. All stents remained in situ, and no migration occurred following stent insertion in any of the rabbits. At 1, 2, 4 and 6 weeks after stent insertion, 6 rabbits in each group were sacrificed, and gross and microscopic examination of the esophageal tissue was performed. Weight and food-intake were similar in the two groups.

Gross and microscopic findings. The middle and lower parts of the esophagus were excised from the body. Gross inspection of the excised tissue specimens revealed no perforation or bleeding in any of the rabbits. No adhesion was found between the esophagus and the surrounding organs.
The esophagus was then incised longitudinally. At 1 week following stent insertion, 4 and 5 rabbits with hyperemia were identified in the SEMS and PEMS groups, respectively, although this difference was not significant (P>0.05), and no proximal obstruction at either end of the stent occurred in either group. Epithelial thickness was mildly increased in 3 and 5 rabbits in the SEMS and PEMS groups, respectively (P>0.05). However, inflammatory cell infiltration was significantly more severe in the SEMS group than in the PEMS group (P<0.05). At 2 weeks following stent insertion, proximal stricture occurred (Table I) in 3 rabbits in the SEMS group and 4 rabbits in the PEMS group, although this difference was not statistically significant (P>0.05). Mucosal hyperemia occurred in 2 rabbits in the SEMS group and 5 rabbits in the PEMS group (P>0.05). There was no statistically significant difference in epithelial thickness between the two groups (P>0.05). Inflammatory cell infiltration remained severe in the SEMS group and increased in the PEMS group (P<0.05). At 4 weeks following stent insertion, mucosal hyperemia occurred in 1 rabbit in the SEMS group and 4 rabbits in the PEMS group (Table II), although this difference was not statistically significant (P>0.05). Proximal stricture occurred in 4 rabbits in the SEMS group and 3 rabbits in the PEMS group (P>0.05). Epithelial thickness in the SEMS group was significantly greater than in the PEMS group (P<0.05). Inflammatory cell infiltration started to decrease in the SEMS group but increased in the PEMS group (P<0.05). At 6 weeks following stent insertion, stricture occurred in the majority of the animals, but no obstruction was observed (Table III; Fig. 1A and B); the frequency of stricture was not significantly different between the SEMS and PEMS groups (P>0.05). No hyperemia was observed in the rabbits of the SEMS group, whereas 3 rabbits in the PEMS group exhibited hyperemia (P<0.05). Epithelial thickness was significantly increased in the SEMS group compared with the PEMS group (P<0.05). Inflammatory cell infiltration was rarely observed in the SEMS group but remained severe in the PEMS group (P<0.05) (Table IV; Fig. 2A and B). The data were also compared across time points within each group. In the SEMS group, mucosal hyperemia and inflammatory cell infiltration decreased over time, whereas proximal stricture and epithelial thickness increased with time (Fig. 3A-D). Conversely, in the PEMS group, mucosal hyperemia decreased over time, whereas proximal stricture, epithelial thickness and inflammatory cell infiltration increased over time (Fig. 4A and B).

Discussion Esophageal carcinoma is the sixth leading cause of cancer-associated mortality and the eighth most common cancer worldwide (18,19). Early resection of the cancer leads to a good prognosis (19). However, over half of patients with esophageal cancer are not eligible for surgical resection; therefore, treatment of advanced esophageal carcinoma remains challenging (20). In recent decades, stent deployment in the esophagus has been widely used as a palliative therapy that reduces tumor ingrowth and facilitates drainage. The SEMS is easily inserted and provides adequate drainage in the esophagus.
Furthermore, PEMS has the potential to inhibit tumor growth, and some positive results have been published (14,17). In 2005, Lee et al (21) reported on the effect of PEMS on normal porcine bile ducts; treatment with PEMS resulted in epithelial denudation, mucin hypersecretion and epithelial metaplasia, which led to the hypothesis that PEMS may have anti-tumor effects on malignant biliary stricture in humans (21). In 2009, another study, performed in dogs, demonstrated that the epithelial layers were thicker in the PEMS group than in the control group, and revealed that local delivery of paclitaxel resulted in marked histological changes that may be associated with an antitumor effect (17). Furthermore, two small retrospective clinical studies on the use of PEMS for malignant biliary stricture reported controversial results, indicating that paclitaxel was unable to inhibit tumor growth and prolong survival time in humans (22,23). Conversely, Guo et al (24) revealed that 5-Fu-eluting stents had prolonged release patterns and retained good integrity and stability following stent deployment. The 5-Fu concentration in stent-adjacent tissue was markedly higher than that found in the serum or liver (19). We propose that paclitaxel may also have anti-tumor effects on squamous esophageal carcinoma.

Large animal models were widely used in previous studies for stent research (14,17). However, these models were usually not established under disease conditions, and the animals were too large to be conveniently operated on and followed up. Furthermore, studies conducted on small animals, such as mice, used immunodeficient animals and did not allow for stent deployment with an endoscope (14,17,21,22). Therefore, in the present study, rabbits were selected as the animal model, as they are sufficiently large to allow the oral insertion of an ultra-slim endoscope and stent introducer set (25,26).

The results presented herein demonstrate the safety of PEMS and SEMS in the rabbit model. No major complications, such as massive bleeding, perforation or fatal infection, were observed. In contrast to previous studies (27,28), no stent migration was observed in the present study. This may be because stents with larger diameters were used, which increased the radial force exerted against the esophageal wall. Rabbit weight and food-intake were normal following stent deployment, indicating that the stents did not impair the animals' general condition.

Following insertion of the stents into the rabbit esophagus, both PEMS and SEMS caused tissue hyperemia, proximal obstruction, thickening of the epithelial layer and inflammatory cell infiltration. In the 1st, 2nd and 4th weeks, hyperemia was similar in the SEMS and PEMS groups; however, in the 6th week, hyperemia was more marked in the PEMS group than in the SEMS group. Hyperemia was marked in the 1st and 2nd weeks in the SEMS group but decreased in subsequent weeks. Conversely, hyperemia was low in the 1st and 2nd weeks in the PEMS group but increased in the following weeks. This may be because paclitaxel promotes inflammation, thereby causing persistent tissue hyperemia. In the 1st week, no proximal obstruction of the uncovered stent segments was observed in either group, although in the following weeks stricture was noted in both groups; however, there was no significant difference between the two groups.
Obstruction of the proximal uncovered stent segment is associated with mechanical stimulation between the stent and the esophageal mucosa, and with tissue overgrowth. On microscopic observation, epithelial thickness was similar in the SEMS and PEMS groups in the 1st and 2nd weeks, although by 4 weeks the epithelial thickness was significantly different: in the 4th and 6th weeks, the epithelial layer was markedly thicker in the SEMS group than in the PEMS group. Mavi et al (29) reported that inflammation promotes the growth of esophageal epithelia and fiber hyperplasia; however, the results of the present study were not concordant with those of previous reports (17,21). The mechanism underlying the association between inflammation and the epithelium merits further study. Inflammatory cell infiltration was markedly high in the SEMS group in the 1st and 2nd weeks and decreased over the following weeks. Conversely, inflammatory cell infiltration was low in the PEMS group in the 1st and 2nd weeks and increased in the 4th and 6th weeks. We believe the change in inflammatory cell infiltration over time in the PEMS group has the same explanation as the change in hyperemia. Notably, the persistent inflammatory cell infiltration in the PEMS group also indicated that paclitaxel was still being released at 6 weeks. PEMS may inhibit tumor growth through the sustained release of paclitaxel, which exerts anti-tumor effects and activates inflammation. The limitations of the present study include the fact that the experiments were carried out on normal rabbit esophagus, and results obtained from a rabbit model may not generalize to the effect of PEMS in human patients with esophageal carcinoma. In addition, the mechanism underlying the effects of the sustained release of paclitaxel on normal esophageal and cancerous cells requires further study. In conclusion, endoscopic stent insertion into the rabbit esophagus is safe and straightforward. PEMS exhibited a steady paclitaxel release pattern and may provide an alternative tool in the management of human esophageal squamous carcinoma.
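As a concrete illustration of the between-group comparisons described in the Methods (Fisher's exact test on the semi-quantitative histology grades, with significance at P<0.05), the following Python sketch compares hypothetical inflammatory infiltration grades for six rabbits per group at a single time point. Because scipy.stats.fisher_exact handles 2x2 tables, the 0-3 grades are dichotomized at a moderate/severe cutoff; both the data and the dichotomization are illustrative assumptions, not the authors' actual analysis.

from scipy.stats import fisher_exact

sems_grades = [0, 0, 1, 1, 1, 2]  # hypothetical infiltration grades, 6 rabbits
pems_grades = [1, 2, 2, 2, 3, 3]  # hypothetical infiltration grades, 6 rabbits

def dichotomize(grades, cutoff=2):
    # Collapse the 0-3 ordinal scale into [>= moderate, < moderate] counts.
    severe = sum(g >= cutoff for g in grades)
    return [severe, len(grades) - severe]

table = [dichotomize(sems_grades), dichotomize(pems_grades)]
odds_ratio, p_value = fisher_exact(table)
print(f"2x2 table: {table}, Fisher's exact P = {p_value:.3f}")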
Serum levels of folate and vitamin B12 in oral epithelial dysplasia

Oral epithelial dysplasia (OED) is a histopathologic diagnosis associated with an increased risk of cancer. Deficiency of vitamin B12 and folate is associated with the causation of certain precancerous and cancerous lesions. The aim of this study was to evaluate the circulating levels of vitamin B12, serum folate, red blood cell folate and iron in patients with OED and to compare these levels with the values obtained in normal control subjects with and without tobacco smoking and alcohol drinking. Data were collected from 120 patients with OED and 120 healthy control subjects matched for age and gender, selected from patients with oral diseases not caused by tobacco or alcohol or related to known haematinic deficiency. Measurements of serum folate and vitamin B12 were carried out using radioassay. The majority of OED lesions were graded as either mild (46.7%) or moderate (40.0%), and most patients with OED were current smokers of more than 20 cigarettes per day for more than 20 years, compared with the normal healthy controls. A significant decrease in the serum levels of folate and red blood cell folate was found in OED compared with normal tobacco smokers (p<0.05). No significant differences in vitamin B12 were found between OED cases and normal control subjects. Likewise, significant differences in serum ferritin levels were found between OED cases and normal drinkers of alcoholic beverages (p<0.05), and no significant differences in TIBC levels were found between OED cases and control subjects. These findings support the notion that OED may develop in persons who are exposed to tobacco smoke and have low folate levels. A possible inverse association between iron concentrations and the risk of OED needs further study.

Correspondence to: Mohamed A Jaber, Department of Oral Surgery, Dubai College of Dental Medicine, MBR University of Medical and Health Sciences, Dubai, P O Box 505097, United Arab Emirates, Tel: +971-044-248630, 9710505178052; E-mail: mjaber4@hotmail.com, Mohamed.jaber@dcdm.ac.ae

Introduction Oral epithelial dysplasia (OED) is defined as a lesion in which part of the thickness of the epithelium is replaced by cells showing varying degrees of cellular atypia and maturational disturbance [1]. OED may occur in clinically identifiable lesions including erythroplakia, leukoplakia and erythroleukoplakia [2]. These clinically defined lesions have been stated to harbour an increased risk, compared with normal mucosa, of transformation into squamous cell carcinoma [3,4]. Studies have reported transformation rates ranging from 6.6 to 36.4% after mean follow-up periods of 1.5 to 8.5 years [1,5,6]. Tobacco and alcohol use are accepted as the most important risk factors for oral potentially malignant lesions [7,8] and OED [9-12]. Exposure to cigarette smoke may result in folate deficiency via chemical inactivation and thus render the epithelium more susceptible to neoplastic transformation by the carcinogenic hydrocarbons of tobacco smoke [13]. Some aspects of diet are considered to be associated with the risk of cancer, precancer and OED [14-16], and intake of certain food products such as beta-carotene, vitamin E and vitamin A, or their analogues, may cause regression of oral leukoplakia, thus preventing its progression to malignancy [17,18].
There is evidence that folate deficiency may be involved in the aetiology of carcinoma of the oesophagus [19], bronchi [20], cervix [21], and oral cavity [22], as well as in certain experimental models of carcinogenesis [23]. Several studies have reported an association between low systemic levels of folate and/or vitamin B12 and an increased risk of cancer and precancer in epithelial tissues [24,25]. Mucosal atrophy is a common feature of various conditions considered to increase the liability to oral cancer and precancer [26]. In experimental animals, iron deficiency leads to changes in cell kinetics [27], and mild iron deficiency, which is associated with increased oxidative stress, increases the risk of oral cavity cancer [28]. Epidemiological and clinical evidence suggests that folate deficiency in certain epithelial tissues, regardless of systemic folate status, may be a factor that predisposes to the development of neoplasms arising from these tissues [29]. Folate supplementation is thought to have resulted in the correction of cellular abnormalities associated with diminished folate status [20], and profound vitamin B12 deficiency can cause moderate-to-severe oral mucosal dysplasia that resolves after correction of the deficiency [30]. Several studies have reported alterations in circulating levels of vitamin B12 and folate in humans due to the habit of tobacco smoking or chewing [31,32], but there is a paucity of information about the role of folate and vitamin B12 in OED. Thus, the aim of this study was to establish the circulating levels of vitamin B12, serum folate, red blood cell folate, ferritin, iron, and total iron binding capacity (TIBC) in patients with OED and to compare these levels with the values obtained in normal control subjects with and without tobacco smoking.

Study population The study group comprised a total of 120 patients with histologically confirmed OED (64 males, 56 females; median age 54 years, range 29-80) attending the Oral Surgery and Oral Medicine Department of the College of Dentistry, Ajman University of Science and Technology, United Arab Emirates, between 2002 and June 2012, selected for the study after obtaining their informed consent. Control subjects were selected from those attending the college dental clinics with oral diseases not caused by smoking or drinking or related to known haematinic deficiency. Control participants were from the same geographic area as the patients. Patients and control subjects were matched for gender and date of birth (within 5 years). A total of 120 patients with OED and 120 control subjects were included in the study. Case and control subjects were interviewed in person, and relevant data were collected using a standard, structured questionnaire. Information on prior use of tobacco and alcohol; the type, site and duration of the dysplastic lesions; treatment; lifelong occupational history; past medical history; and family history of OED and cancer was also collected. Dysplastic lesions were classified microscopically by a single pathologist, according to the degree of cytologic atypia and changes in architectural patterns, into mild, moderate and severe dysplasia [1]. A current smoker was defined as someone who had smoked within the year preceding diagnosis, and a previous smoker as someone who had smoked but had stopped more than one year prior to diagnosis.
Questions regarding the major parameters of tobacco use included: type of tobacco used (filter cigarettes, cigars, pipe, roll-ups, chewing tobacco/taking snuff, chewing betel quid); duration of smoking in years; and average number of cigarettes smoked per day. Data on alcohol consumption included: type of alcoholic beverage used, amount of alcohol consumed per day (glasses/day) and total duration of drinking in years. Because most of the patients with OED (74.2%) were smokers, and because cigarette smoking can cause alterations in vitamin B12 and folate status and may therefore be a confounding factor, the results from patients with OED were compared with the results from an age- and gender-matched control group of 59 smokers and with the results from another age- and gender-matched control group of 61 nonsmokers. All participants in the smoker group were current smokers, and all participants in the non-smoker group had never smoked. Because heavy alcohol drinking can alter folate absorption and metabolism and is also considered a risk factor for head and neck carcinoma and OED, and may therefore have had a confounding effect on the study, any known heavy drinkers were excluded. No participants included in the study had received folate or vitamin B12 supplements in the 6 months before the study. In addition, because nutritional status may be the primary determinant of folate and vitamin B12 levels, any participant with clinically evident nutritional deficiencies was excluded from the study. All subjects gave written informed consent to participate in the study. The study protocol was approved by the institutional review boards at the College of Dentistry of Ajman University of Science and Technology.

Haematological assessment A venous blood sample was drawn from each patient and control subject and divided for determination of serum folate, red blood cell folate, vitamin B12, iron, ferritin and total iron binding capacity (TIBC). Blood samples were stabilized and frozen at -70°C until assayed. The complete blood count (CBC) included determination of haemoglobin, red blood cells, red blood cell indices, and white blood cells with differential, using standard methods. All blood samples from patients and controls were drawn in the morning to provide consistency in the interpretation of results. Serum folate, whole blood folate and RBC folate were measured in duplicate using standard techniques [33,34]. None of the patients or controls was taking any medications at the time of testing.

Data analysis Statistical procedures were carried out using SPSS (version 16.0 for Windows; SPSS Inc., Chicago, IL, USA). Analysis of the differences in serum folate, red blood cell folate, vitamin B12, iron, total iron binding capacity and ferritin between cases and controls was carried out using Student's t-test. Significance was accepted when the p-value was less than 0.05.

Demographic details The age and gender distribution of the study subjects is detailed in Table 1. Most cases were male (53.3%), and the median age at diagnosis was 54 years (range 29 to 80). The majority of OED lesions were graded as mild (46.7%) or moderate (40.0%) epithelial dysplasia.

Serum folate, red blood cell folate and vitamin B12 among cases and control subjects Mean serum levels of vitamin B12, folate, and red blood cell folate in normal non-smoking and smoking control subjects compared with OED patients are detailed in Table 2.
A significant decrease in the serum levels of folate and red blood cell folate was found in OED patients compared with normal tobacco smokers (p<0.05). No significant differences in vitamin B12 were found between OED cases and normal control subjects. Estimation of serum ferritin, iron and total iron binding capacity among cases and control subjects revealed lower mean serum ferritin and iron in the control subjects compared with OED cases, and significant differences in serum ferritin levels were found between OED cases and normal drinkers of alcoholic beverages (p<0.05), although this association could reflect disease-related inflammation or comorbidity. No significant differences in TIBC levels were found between OED cases and control subjects (Table 3).

Tobacco and alcohol habits of subjects Tobacco and alcohol usage are detailed in Table 4. Significantly more OED patients than normal healthy controls were current tobacco smokers of more than 20 cigarettes per day for more than 20 years.

Discussion It is generally agreed that tobacco consumption is a major aetiological factor for OED, and many studies have shown an overrepresentation of tobacco smokers amongst patients with OED [8-11]. In this study, tobacco smoking was recorded in at least 74.2% of patients with OED compared with 49.0% of healthy controls, thus confirming the significance of tobacco smoking and alcohol consumption as risk factors in the aetiology of OED. One of the harmful effects of tobacco consumption is alteration of the plasma/serum levels of micronutrients [13,25,32,35]. In this study, a decrease in plasma folate levels was observed in the patients consuming tobacco compared with the non-smokers, consistent with recent observations by Almadori et al and with reported alterations of vitamin B12 and folate in a group of Indian patients with oral leukoplakia. Furthermore, several other investigators have suggested that deficiency of folate enhances the development of preneoplastic and neoplastic lesions, which are suppressed by folate supplementation [37]. A low folate level probably does not have an independent role as an initiating factor. Instead, it presumably acts synergistically with other genetic and environmental factors, such as tobacco carcinogens, making cells more susceptible to mutagens and increasing the rate of tumor progression. Some of the carcinogenic substances present in tobacco smoke (primarily organic nitrites, cyanates, and isocyanates) have been shown to interact with folate and vitamin B12 coenzymes, transforming them into biologically inactive compounds [32,38]. That these chemical interactions may have physiological significance is supported by reports of lower circulating folate [39,40] and B12 [41] levels in smokers, and the buccal mucosal cells of tobacco smokers have been shown to have a decreased concentration of folate [35]. The rationale for folate's possible protection against cancer is based on its roles in DNA synthesis and the repair of damaged DNA [42,43]. Folate is involved in DNA methylation, through which it may influence gene stability and expression [43]. The benefits of folate [20,42] and cobalamin [20] in reducing the risk of cancer or precancer in epithelial tissues have been reported in the literature. Eto and Krumdieck [37], in a review of the role of vitamin B12 and folate deficiencies in carcinogenesis, observed that neither deficiency is carcinogenic by itself but that each may increase susceptibility to the action of other carcinogens.
A deficiency of folate has also been reported to enhance the expression of endogenous and exogenous oncogenes [23]. It is generally acknowledged that RBC folate levels provide a more accurate indication of long-term nutritional status than plasma or serum folate levels, which are influenced by recent ingestion of food. The findings of this study provide evidence that an inadequate reserve of folate, as reflected in RBC folate content, may enhance the effect of tobacco smoking on OED risk. Furthermore, low folate levels have been found to be related to an increased risk of epithelial dysplasia or carcinoma-in-situ [44,45]. These nutrients are likely to play an active role in the risk-reduction effect. Vitamin B12 deficiency has reportedly been associated with chromosomal damage to buccal mucosal cells in smokers [46], and vitamin B12 and folate supplementation in the treatment of precancerous lesions such as cervical dysplasia and bronchial metaplasia has been reported [42,47]. Nevertheless, in the current study focused on OED, differences in serum vitamin B12 levels between OED patients and healthy control subjects lacked significance. Serum iron assays alone are of little significance unless related to total iron binding capacity. Both values are subject to variability, and serum iron levels are also subject to diurnal variation and merely indicate the efficacy of iron transport within the body to sites of erythropoiesis. The diurnal variation is reported to exceed 50 percent [48]. For this reason, all blood samples were drawn in the morning to provide consistency in interpretation. Low serum iron and TIBC levels may indicate anaemia of chronic disease, whereas low serum iron values and an elevated TIBC represent true iron deficiency (this decision rule is sketched at the end of this article). A more accurate assay is the serum ferritin level, which reflects total body iron stores. In this study, serum iron, total iron binding capacity and ferritin levels were all within normal limits among OED cases and the normal healthy control subjects. Biochemical changes in iron-deficient epithelium, including decreased cytochrome C levels and enzyme depletion, have been reported [49]. Iron deficiency is known to occur in oral cancer. Haematological abnormalities in oral cancer and precancerous lesions were reported by Khanna and Karjodkar [50], and these abnormalities may be associated with the pathogenesis and progression of oral cancer and potentially malignant lesions. It has been suggested that mucosal atrophy, increased mitotic activity, and diminished repair capacity are among the major common underlying predisposing factors in oral cancer and potentially malignant lesions [51]. It is recognised, however, that in certain cases other associated deficiencies of essential nutrients and vitamins may arise and complicate the situation [52]. Nutritional factors are of great importance in maintaining the integrity of the oral mucosa [27,53], and thorough haematinic investigation is recommended in the management of potentially malignant oral lesions, particularly in patients in whom these deficiencies are prevalent [54]. These findings support the notion that OED may develop in persons who are exposed to tobacco smoke and have low folate levels. This was a prospective investigation, allowing assessment of serum folate, B12, and iron status.
Clearly, however, more prospective studies are needed to supply the additional pieces of information that will eventually resolve the role, if any, of vitamin B12, folate and iron in the etiology of OED. Clinical trials investigating the effectiveness of supplementation with these micronutrients in reducing the incidence of OED and its subsequent malignant transformation may be warranted.
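The interpretation of the iron indices discussed above (low serum iron with low TIBC suggesting anaemia of chronic disease, low serum iron with elevated TIBC suggesting true iron deficiency, and serum ferritin reflecting total body iron stores) amounts to a simple decision rule. The Python sketch below encodes that rule; the reference limits are illustrative assumptions, not values used in this study.

def interpret_iron_indices(serum_iron, tibc, ferritin,
                           iron_low=10.0,       # umol/L, assumed lower limit
                           tibc_high=75.0,      # umol/L, assumed upper limit
                           tibc_low=45.0,       # umol/L, assumed lower limit
                           ferritin_low=15.0):  # ug/L, assumed lower limit
    # Encode the decision rule from the Discussion using assumed cutoffs.
    if serum_iron < iron_low and tibc > tibc_high:
        return "pattern consistent with true iron deficiency"
    if serum_iron < iron_low and tibc < tibc_low:
        return "pattern consistent with anaemia of chronic disease"
    if ferritin < ferritin_low:
        return "depleted iron stores (low ferritin)"
    return "no deficiency pattern by these simple rules"

print(interpret_iron_indices(serum_iron=7.0, tibc=82.0, ferritin=10.0))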
Impact of patient and public (PPI) involvement in the Life After Prostate Cancer Diagnosis (LAPCD) study: a mixed-methods study

Objectives Standardised reporting of patient and public involvement (PPI) in research studies is needed to facilitate learning about how to achieve effective PPI. The aim of this evaluation was to explore the impact of PPI in a large UK study, the Life After Prostate Cancer Diagnosis (LAPCD) study, and to explore the facilitators and challenges experienced. Design Mixed-methods study using an online survey and semistructured interviews. The survey and topic guide were informed by systematic review evidence of the impact of PPI and by realist evaluation. Descriptive analysis of survey data and thematic analysis of interview data were conducted. Results are reported using the GRIPP2 (Guidance for Reporting Involvement of Patients and the Public, Version 2) reporting guidelines. Setting The LAPCD study, a UK-wide patient-reported outcome study. Participants User Advisory Group (UAG) members (n=9) and researchers (n=29) from the LAPCD study. Results Impact was greatest on improving survey design and topic guides for interviews, enhancing the clarity of patient-facing materials, informing best practices around data collection and ensuring steering group meetings were grounded in what is important to the patient. Further impacts included ensuring patient-focused dissemination of study findings at conference presentations and in lay summaries. Facilitating context factors included clear aims, time to contribute, confidence to contribute, and feeling valued and supported by researchers and other UAG members. Facilitating mechanisms included embedding the UAG within the study as a separate workstream, allocating time and resources to the UAG reflecting the value of its input, and putting in place clear communication channels. Hindering factors included the time commitment, geographical distance, and a lack of standardised feedback mechanisms. Conclusion Including PPI as an integral component of the LAPCD study, and providing the right context and mechanisms for involving the UAG, helped maximise the programme's effectiveness and impact.

STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ This paper provides an example of reporting patient and public involvement (PPI) using the Equator Guidelines for Reporting of PPI (Guidance for Reporting Involvement of Patients and the Public, Version 2; GRIPP2).
⇒ The survey and topic guide have been informed by evidence on the impact of PPI and by realist evaluation.
⇒ The paper provides the views and experiences of both patient representatives and a varied sample of researchers involved in the Life After Prostate Cancer Diagnosis (LAPCD) study.
⇒ A convenience sample of both patient representatives and researchers was used, so results may not be generalisable.
⇒ The survey was limited to only those who were involved in this study, and therefore small numbers are reported.

BACKGROUND Patient and public involvement (PPI) has the potential to increase the quality and relevance of healthcare research. Systematic reviews of the impacts of PPI on healthcare research have been published. 1-3 However, a lack of in-depth and accurate reporting of PPI has been recognised as a limitation in reaching evidence-based guidance on the most appropriate methods to use for successful involvement. Guidelines for the reporting of patient and public involvement in research (Guidance for Reporting Involvement of Patients and the Public, Version 2; GRIPP2) have been developed to help standardise the reporting of PPI and advance the evidence base. 4 Frameworks and models have attempted to identify factors that influence impact. The Research with Patient and Public Involvement: a Realistic evaluation (RAPPORT) study used realist evaluation drawing on Normalisation Process Theory to understand how far PPI was embedded within healthcare research in six areas: diabetes mellitus, arthritis, cystic fibrosis, dementia, public health and learning disabilities. 5 6
They reported a context-mechanism-outcome model and suggested that six salient actions are required for effective PPI: a clear purpose, role and structure for PPI; ensuring diversity; whole research team engagement with PPI; mutual understanding and trust between the researchers and lay representatives; ensuring opportunities for PPI throughout the research process; and reflecting on, appraising and evaluating PPI within a research study. More recently, the Public Involvement Impact Assessment Framework (PiiAF) has been developed. 7 The main elements that influence public involvement in research, and the impact this involvement can have, are identified in PiiAF: the approaches (the way in which members of the public are involved in the study), the values (the values associated with public involvement by members of the research team), the focus of the research, and the study design and practical issues, including human and material resources.

This paper reports an evaluation of the impact of PPI in the Life After Prostate Cancer Diagnosis (LAPCD) study, a large UK-wide study of men living with and beyond prostate cancer, 8 using the GRIPP2 guidelines. 4 The LAPCD study aimed to explore the impact of prostate cancer on men's health and well-being, using a self-completion survey (n=35 823) and in-depth telephone interviews (n=119), to inform future service delivery and policy development. The LAPCD study had six workstreams and adopted the novel approach of dedicating one workstream to PPI (figure 1). A user advisory group (UAG) was established to lead this workstream and was integrated into the research programme from the outset. The UAG, including the Chair, comprised seven men from different parts of the UK who had experienced different stages of prostate cancer and different treatments, and two representatives from Prostate Cancer UK. Each UAG meeting was attended by two researchers. Each workstream lead worked with the Chair of the UAG to develop a plan for how the UAG would be involved. The Chair then discussed this plan with the UAG before confirming the programme of work. The level and nature of UAG involvement differed for each workstream. For example, it was easier to plan involvement in developing patient-facing materials or in aiding the qualitative analysis than within the statistical analysis of the survey data or the health economic data. In formulating their mode of operation, members of the UAG drew on earlier research findings on patient and public views of the impact of PPI in health research and followed the methods of Crocker et al. 9 That study set out six different types of impactful contributors for a user advisory group, including the 'expert in lived experience', the 'creative outsider', the 'free challenger', the 'bridger', the 'motivator' and the 'passive presence', and reported the importance of involving PPI contributors as equal partners.
The primary purpose of PPI in the LAPCD study was to ensure that the research was conducted and disseminated in ways useful to patients and the public, and to ensure that the purpose and aims of the research were clearly understood by patients, so that participation was facilitated. The UAG members sought to add value to the LAPCD research by offering a perspective that drew on their lived experience, both as patients with cancer and as patient advocates and volunteer support workers. The definition of PPI used in LAPCD was 'research being carried out "with" or "by" members of the public, rather than "to", "about" or "for" them'. 10 The evaluation of PPI aimed to assess the 'value added' or impact of the UAG on the LAPCD study and to explore the facilitating and hindering factors experienced.

METHODS Sample The sample for this retrospective evaluation included all members of the UAG (described earlier) and the research team. The research team included clinical and non-clinical health service researchers of all grades, from immediate postgraduate to senior team leaders; the disciplines covered included statisticians, health economists, social scientists, qualitative researchers and clinicians with medical and surgical backgrounds.

Realist evaluation 'theory of change' This evaluation was informed by realist evaluation. 6 Realist evaluation seeks to find the contextual conditions that make interventions effective, thereby developing lessons about how they produce outcomes to inform policy decisions. Tilley outlined three investigative areas that need to be addressed when evaluating the impact of an intervention: the mechanism or process needed to produce the outcome; the context or environmental factors needed to produce particular outcomes; and the outcome pattern, that is, the practical effects produced by causal mechanisms being triggered in a given context. 6 This informed the development of a 'theory of change' model (figure 2).

Design Based on previous systematic literature reviews describing the impact of PPI on research 1 2 11 and informed by realist evaluation 6 (see figure 1), an online survey was developed in collaboration with the UAG to explore both the UAG's and the researchers' views on the impact that the UAG had on the LAPCD study. To enable a more in-depth evaluation of PPI, two authors (ZD and JB) developed an interview topic guide alongside the UAG. Semistructured telephone interviews were conducted with both researchers and UAG members following this topic guide. The surveys were piloted, and the interview topic guides were reviewed by three academics and three patient representatives with prostate cancer. Minor changes to the wording of the documents were made as a result of this process. A link to the online survey was emailed to all researchers (n=29) and UAG members (n=9, including Prostate Cancer UK members). All responses were anonymous. The survey included questions on respondents' definition of PPI, the level of user contribution to different parts of the study, how user involvement was supported, what hindered user involvement, and the personal benefits to both the users and the researchers. Two open questions were asked of both the users and the researchers: the first asked for three examples of how the UAG added value to the study, and the second asked how the method of user involvement could be modified to achieve even greater impact. The questionnaire took approximately 15 min to complete. Survey results were reported using descriptive statistics.
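The descriptive reporting used for the survey can be illustrated with a minimal sketch. The responses below are hypothetical placeholders on the response scale used in this paper ('a lot'/'somewhat'/'not at all'), not actual LAPCD data; the output mirrors the 'k/n (percent)' style used in the Results.

from collections import Counter

# Hypothetical answers from nine UAG members to a single survey item.
responses = ["a lot", "a lot", "somewhat", "a lot", "not at all",
             "a lot", "somewhat", "a lot", "a lot"]

counts = Counter(responses)
n = len(responses)
for level in ("a lot", "somewhat", "not at all"):
    k = counts.get(level, 0)
    print(f"{level}: {k}/{n} ({100 * k / n:.0f}%)")  # e.g. "a lot: 6/9 (67%)"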
Semistructured telephone interviews were conducted with all participants who agreed to be interviewed. Interviews were conducted by ZD, an experienced qualitative researcher. Participants were asked to reflect on the contexts, environment, processes and mechanisms that influenced PPI impact, both positive (facilitating) and negative (hindering). They were asked to describe how they perceived the impacts of PPI on the study. The interviews were transcribed, and inductive and deductive analyses were conducted following Braun and Clarke's (2006) six-step approach to thematic analysis. 12 This started with familiarisation with the data by reading and re-reading the transcripts, followed by the generation of initial codes. Five initial transcripts were independently coded and then discussed within the research team and the UAG, before the rest of the transcripts were coded. After coding, ZD searched for themes by examining the codes and how some of them clearly fitted together into a theme. The themes, with relevant quotes, were then sent to the UAG for discussion. The UAG either agreed that the quotes fitted the theme that ZD had developed, suggested that specific quotes would fit better with one of the other themes ZD had developed, or decided that certain quotes did not fit with any of the existing themes, suggesting a new theme. ZD and the UAG had several discussions and reflections on the themes, and a final decision about the themes was then made by ZD and the authors. Finally, we reviewed and defined the themes before writing a first draft of the analysis. Results are reported according to the GRIPP2 reporting guidelines. 4

Ethics approval This study involves human participants and was approved by the Oxford Brookes University Research Ethics Committee (ref Brett 22019). All participants provided informed consent. Consent for the survey was completed on the initial page, and participants had to complete this consent process to gain access to the survey. Survey responses were anonymous. Consent for the interviews was recorded verbally just before the telephone interviews; this was recorded separately from the interview recording.

Patient and public involvement Patients with prostate cancer and Prostate Cancer UK were involved in the design of the study, the development of all participant-facing materials used, the development of the survey and topic guide, the analysis of qualitative data and the write-up of the paper.

RESULTS The survey was completed by 79% (n=23/29) of the researchers and 100% (n=9/9) of the service users. Interviews were conducted with 7 UAG members, 2 representatives from Prostate Cancer UK and 14 researchers. Results are ordered by facilitating and hindering context factors of PPI; facilitating and hindering mechanisms of PPI; and impacts of PPI on research, researchers and service users. Selected illustrative quotes from the interviews are reported in tables 1-3 under the same headings as the Results section.

Table 1 Facilitating and hindering contexts: illustrative quotes from interviews with researchers and service users. Facilitating context factors. Diverse patient representatives with good leadership: "It's very important to have the right sort of Chair and the right group of people because … one of the things that I found with this group is that individually we've got on and that does make a lot of difference.
We all come from different backgrounds, different parts of the country and we all seem to gel as a group" (U4) "[The Chair] was absolutely a central point and … it's important to have that -somebody who is able to facilitate and involve others, not just himself, from a wider group" (R7) "We were very diverse in our way with different issues and different problems and different perspectives but maybe we were lucky but we operated brilliantly as a group…The dynamics were very good. We had a good team and I think that is absolutely critical." (U2) "So it's like any other research team you are putting together…[the UAG] needs the right background and experience, it helps if they've got communication skills and then obviously to come from a professional background that helps but it also hinders in a way because you're not going to get the full experience of people who are not professionals and of course that's a large proportion of the population" (R7) Feeling confident to get involved: "I think we all had a certain background -we were used to big meetings and didn't find it intimidating" (U3) Feeling included in communications and discussions: "Right from day one I felt I was involved and included and I attended the first meeting where everyone attended at [city] and I received a very warm welcome. People came and introduced themselves to me and I just felt welcomed and valued" (U5) "One [way of approaching the UAG] was at the formal meetings. Then in between times if we had specific things … I would contact [the Chair], then he would disseminate it round to the other men … That was helpful … he was the central point of contact and that worked very well … it's less confusing if you have a point of contact" (R8) Hindering context factors. Time taken to be involved: "There's a darn sight more work in this that I thought, that's the one thing -I became a lot more involved than I ever thought I would" (U2) "If I had a document to go through I could take two/three days. Whatever it took to go through it" (U4) "there was a huge commitment over and above, you know, the resources, you couldn't have recompensed people for what they were doing" (R3) "I think one of the difficulties is that all the other members of the research team are full time researchers and working on the project and it's quite difficult [with the UAG] -you feel quite conscious that you are taking up someone else's time when they could be doing something else" (R1) Lack of knowledge of some areas of research: "Obviously, they deal with the stats, they deal with the technicalities, they deal with that stuff but I thought sometimes, you know, hold on a second, this is about improving the life of men after prostate cancer" (U2)

Impacts on the study and on the researchers and UAG are also summarised in figure 2, a 'theory of change' model developed by HB. The results are summarised in figure 3.

Facilitating and hindering context factors The primary facilitating context factors reported by the UAG included feeling they had a clear role (a lot, 100%, n=9) and an agreed set of aims and principles (a lot, 78%, n=7), having enough time to contribute (a lot, 100%, n=9) and feeling confident in contributing (a lot, 75%, n=6). They felt valued (100%, n=9) and supported by the researchers (a lot, 89%, n=8; somewhat, 11%, n=1), and supported by the other UAG members (100%, n=9). All the users felt included in the study communications and discussions (100%, n=9).
Minor hindering context factors included geographical distance between the UAG members and the research team (mainly based in Belfast, Leeds, Oxford and Southampton), and therefore travelling inconvenience, and a lack of knowledge to contribute to certain areas of the research. The facilitating context factors from the researchers' perspective were involving the service users early enough (definitely 86%, n=18) and fully enough (definitely 70%, n=16), clear aims of PPI in the activity (definitely 57%, n=13), feeling well supported by research colleagues in the PPI activities (definitely 75%, n=17) and having good relationships with the UAG members (definitely 68%, n=15). Minor hindering factors included not having enough time to fully involve the UAG, not having sufficient knowledge of how to involve them, and 'worries about taking up the time of UAG members'. Researchers felt they included the UAG in communications about the study (definitely 45%, n=10).

In the interviews, both the UAG members and researchers felt PPI in the LAPCD study was well structured and benefited from strong leadership and a committed group of members, who were keen to be proactively involved in the study. A positive group dynamic and a diverse range of experiences and perspectives were also beneficial to how the UAG was able to operate and contribute to various aspects of the study. This included members' own experiences of cancer but also experiences gained from their involvement in support groups for other men with prostate cancer.

Table 2 Facilitating and hindering mechanisms: illustrative quotes from interviews with researchers and service users

Facilitating mechanisms

Embedding the UAG from the start:
"They were embedded from the start so it just kind of became second nature almost. So in terms of every time we have one of our monthly investigator meetings, there is always a section on the agenda for an update from the user advisory group and every time if we're writing a paper we've always involved [the Chair] as a co-author because he's a co-investigator on the study. So it's just sort of always been there." (R4)
"We weren't just bolstered on we were embedded and that was great but we weren't embedded early enough to have a full influence or attempt to make an influence [on the protocol] to make suggestions and so on and so forth that might have been quite useful." (U1)

Clear aims and guidelines:
"We put together Terms of Reference for ourselves and discussed that …[and], we developed a good practice guide to managing things online and how we were going to follow things up and so on and so forth. Yes, there was quite a lot of grounding and stuff and how we are going to work on this etc and I think that was time well spent actually" (U1)
"I suppose it's being clear at the outset if you take for example a qualitative strand how involved are the users going to be … mapped that out a little bit more clearly" (R6)
"I think the general consensus on our part of the users group was that we felt it was quite important what we had contributed, in as much as it gave them a baseline to work from so they weren't just making up questions that they thought were important rather than questions from people who had been involved … unless you've got input from people who've actually been through the process, you can sometimes end up missing some of the points" (U3)
The involvement of a representative from the Black and Minority Ethnic (BME) community was noted as a particular strength, although it was also acknowledged that this benefit could have been used better through greater efforts to tap into this member's wider network to gain a greater understanding of the issues affecting this population. The previous experience, professional backgrounds and prior knowledge of UAG members were seen as an important facilitator of their individual involvement, including familiarity with the various tasks involved and their general confidence within the UAG during meetings and conferences with the academics. However, this was also seen as a potential limitation, with a need for more cultural and socioeconomic diversity. It was acknowledged that recruitment of such a diverse group is difficult and that, while hearing the representative voice is important, a lack of confidence and skills to contribute may be a barrier to involvement. Consultation with a wider patient representative group was suggested as a possible solution. While previous experience of PPI among the researchers was variable, PPI was generally viewed as a valuable component of research studies, and researchers were open to the involvement and potential impact of the UAG on the LAPCD study.

Table 3 Impacts on research, researchers and service users: illustrative quotes from interviews with researchers and service users

Impacts on research:
"There is no doubt we added a completely different dimension to the review. I mean they're brilliant academics obviously, and in their own field they're absolutely brilliant but I think we brought them down to ground sometimes … I think we brought them down by saying 'don't forget what this is about'." (U2)
"They've been involved in so many different aspects of it, in terms of giving feedback on results, and I know they have done a lot of stuff on the qualitative work streams […] identifying themes and going through comments. I think, just making sure that what we're producing is actually relevant to men is the main thing." (R4)
"They have been involved in lots of parts … putting the questionnaires together … topic guides for the interviews … general feedback on what was coming out [from the transcripts]. They've had input on papers, meetings, presentations, all that sort of thing. And I think what they've been really good at is driving on the dissemination side of things and making sure the findings make a difference. They've been very active on that and made it clear that that's an expectation from them" (R6)
"All I can say is if [the UAG] hadn't been here the study would have been … much thinner, less thoughtful exploration and I think they have added a dimension to it, added a richness to it" (R8)

Impacts on service users:
"It was very well run. I was very impressed. I have nothing to compare it with but I thought it was extremely well run, well organised, well thought out and beneficial in terms of producing the result that it was intended to. I learnt a lot from it which I will take back to my workplace" (U5)
"Yes, I would be quite happy to do it again, I enjoyed it and I would do it without expecting anything for it" (U5)
"I think definitely it's something I'll try and bring in more in future studies I think […] I think for big studies, definitely. Big studies with lots of different kind of aspects of the things we are looking at" (R2)
"It was good speaking to people. For example, the user advisory group -each one of us had prostate cancer and it was good talking to professionals and it was actually quite strange because after about three or 4 months everybody forgot that we had prostate cancer which was absolutely brilliant. So that in itself was good. I found it fairly cathartic the whole thing." (U2)

Impacts on researchers:
"You are always at risk of a certain type of tokenism with patient engagement activities and on this occasion there wasn't any. It was a very real and productive way of adding value to the project as a whole I thought. It made it more 'real' to all of us" (R11)
"Just meeting different people, different perspectives -yeah I think it's great actually" (R6)

Researchers acknowledged that involving the UAG could take extra time and that occasionally external deadlines hindered engagement. Researchers voiced concerns over taking up too much of the UAG members' time, recognising that they had volunteered to be part of the study team, although they were offered a small honorarium for their time. Members of the UAG reported that being involved in the project required a significant time commitment, but they were willing to take on this commitment and took their role seriously. The availability of an honorarium was an important signal to the UAG that their commitment was valued, although the UAG members did not expect payment for their time and were just happy for their out-of-pocket expenses to be paid.

Clear and open communication of key concepts and tasks was seen as a key process for positive involvement. Some workstreams, such as the health economics workstream, were more difficult to understand, and sometimes it was difficult for UAG members to understand the jargon. Having the UAG Chair as the main point of contact with researchers functioned well. Communication with the UAG appeared to be regular and integrated into the existing communication channels set up for the project.

Facilitating and hindering mechanisms
Various mechanisms that helped foster and support the integration and engagement of the UAG within the study were identified, including embedding the UAG within the study as a separate workstream package, involving the UAG in study meetings, developing clear documentation such as terms of reference at an early stage, allocating time and resources to the UAG, and putting in place clear communication channels and feedback mechanisms. The interview data revealed that both the researchers and the UAG members agreed that embedding the UAG into the study through a dedicated workstream for PPI was a particular strength of the approach adopted. A key element of this was involving the UAG in regular study team meetings, which both facilitated involvement and helped to build relationships. By integrating PPI into the study in this way, the UAG were seen and treated by many researchers as equal contributors to the research process. Moreover, the involvement of a group of patient representatives rather than just one or two allowed for consistency and stability in PPI throughout the life of the study. Social activities, such as going for dinner or drinks outside of more formal research activities, fostered relationships between the research teams and members of the UAG and were seen as important to facilitating engagement within the project.
The development of clear documentation such as terms of reference, both internally within the UAG and with regard to the UAG's involvement in the research tasks, was seen as an important facilitator of effective PPI in the study. However, the provision of researcher training on how best to engage patients and the public was seen as a possible area of improvement within the approach. The UAG members reported that training in certain research areas, such as a basic understanding of health economics, may have been useful to enable greater involvement in some areas of the research study. UAG members commented that the priorities of the researchers sometimes lost their patient focus, and it was the role of the UAG to bring this back. Researchers recognised that sometimes there was a mismatch between their focus and the focus of the UAG. However, the UAG were seen as being on a level playing field with the researchers, and they worked together to overcome any differences of opinion. Furthermore, members of the academic research team recognised that their feedback mechanisms to UAG members could have been better. At times, the UAG's understanding of the impact of their contributions appeared to be implicit, as opposed to being the result of specific, formal feedback sessions. When direct feedback, whether formal or informal, was given, it appeared to be highly valued by UAG members.

Impact on research, researchers and service users
Both the UAG and the researchers reported that the greatest impacts of PPI were on the improved survey design and topic guide, enhanced clarity of patient-facing materials, input into the likely stress and burden on participants in the ethics application, ensuring the steering group meetings were grounded in what was important to patients, and assisting with the dissemination of the study results through papers and at conferences. The PPI group had the least impact on data analysis methods, recruitment of participants and dissemination of results to participating hospital trusts.

The interview data revealed that, overall, the approach used to involve PPI in the study (ie, embedded UAG) was seen positively by both members of the UAG and researchers. Members of the UAG and researchers believed that the contributions of the UAG had a real impact on the project, including individual pieces of work (eg, survey development, qualitative analysis) as well as the project as a whole (eg, making sure that the findings of the study had a real impact on men with prostate cancer). The positive experience that UAG members reported motivated them to be involved in future projects, while the positive experience reported by the researchers encouraged them to consider using a similar approach to PPI for future studies. Both researchers and members of the UAG saw the project and process as valuable, identifying personal and wider benefits to the project.

DISCUSSION
Contextual factors that contributed to the beneficial impact of PPI on the LAPCD study included clear aims and roles, time to contribute, confidence in contributing, strong PPI leadership, feeling valued and supported by the researchers, and inclusion in study communications.
Mechanisms that contributed to beneficial impact included incorporating PPI into the study from the start, with a planned programme of work dedicated to user involvement activities embedded in each workstream; the collaborative nature of PPI; having resources available to allow the integration of the UAG into the study; and regular attendance of the UAG members at study research days, teleconferences and social events outside of formal research activities to build relationships.

The LAPCD study embedded PPI into the study with a collaborative approach between the PPI workstream and all other workstreams of the study. The importance of systematic partnership working across all settings has been reported in other studies.13-15 Wilson et al reported that a 'fully intertwined' partnership approach, alongside enabling contexts including resources, research host and organisation of PPI, leads to a greater positive impact.16 Building these reciprocal relationships early is also important, to develop shared goals for PPI at an early stage that fit the needs of the study.14 15 Feeling valued and supported by the researchers and feeling included in communication about the study are important drivers of impact improvement and of motivation to stay involved in research, which highlights the importance of the researchers' attitude to the success of PPI.9 One study concluded that the most important contextual factors that influence the outcome of involvement are the researchers themselves and the skills, assumptions, values and priorities they start with.17 While training is available to prepare patient advisors for their new role as advisor, reviewer or collaborative partner, it is clear that the training of researchers is equally important.18-21 Researchers in this study reported a positive attitude to PPI but admitted that knowledge of how to include the UAG could have been improved. Development of training, and awareness of existing exemplar training, for researchers is needed.19

Hindering contextual factors included the geographical distance between UAG members and a lack of knowledge to contribute in certain areas of the research programme. Mechanisms that hindered the impact of PPI included time limitations and adjustment to different ways of working between researchers and UAG members. These factors have been reported in previous evidence on the impact of PPI.2 Many of the challenges of PPI occur because of colliding worlds, where priorities, motivations and ways of working differ, causing conflict and power struggles between researchers and service users.19 It is therefore vital that clear aims and roles are identified at the start of the study.2 22

Impacts of PPI were evident throughout the LAPCD study. Impacts on patient-facing materials, on the design of research tools such as questionnaires, interview schedules and questions for focus groups, and on recruitment have previously been reported.2 23 Studies have also reported the impact of PPI on analysis of data.2 23 24 This can check the validity of study conclusions, correct misinterpretation of data, identify themes that would otherwise have been missed, identify which findings would be most relevant to patients or the public, and improve the way in which results are described in reports.20
PPI contribution to the write-up of papers, presentation of results at conferences and other dissemination activities has been reported to increase the likelihood of people acting on the findings.2 23

This study also reports the personal impact that PPI has on the patient advisors and the researchers. The UAG members reported a 'sense of helping others', enjoying the camaraderie, gaining confidence and sharing experiential knowledge to inform better services for men with prostate cancer. These personal impacts have previously been identified.11 Other studies have reported the notion of the 'good citizen', with PPI in research being a natural extension of patients' wider civic interests, and how involvement in research helps patient advisors to make sense of living with or recovering from disease, thereby offering space for the reconfiguration of self and identity.25 Researchers in the LAPCD study reported having a greater understanding of and insight into what it is like to have prostate cancer, as service users shared valuable experiential knowledge. This experiential knowledge has been referred to as 'knowledge in context'.17

Evaluating the impact of PPI on a research study is complex, and several authors have explored frameworks that illustrate the factors that influence the difference that PPI makes to a research study.5 7 This study found that context factors and mechanisms similar to those reported in the PiiAF influenced the value added from PPI in the LAPCD study.7 The 'theory of change' model (figure 2) used in this evaluation was informed by realist evaluation.6 This model identifies how the context, environment, processes and mechanisms influence the impact that PPI has on a research study. An adapted version of this model was used to present the results of this evaluation in figure 3 above.

Despite geographical diversity, the UAG for this study could have been more diverse in terms of socioeconomic status, culture and education. It could be argued that there is a need both for those who are confident enough to be part of the research environment and happy to attend meetings, and for those who are less confident in this environment but whose contribution could be valuable in representing a wider population, to help shape more representative research studies and healthcare services. A wider UAG or 'community of interest' group could be established that the central UAG could tap into when needed.

Strengths and limitations
This is one of the first papers to directly trace the impact of PPI, and the strength of this impact, within all aspects of the research cycle. The GRIPP2 guidelines for the reporting of PPI in healthcare research have been populated to provide an example (online supplemental appendix 1). This study was a retrospective evaluation of the LAPCD study and therefore may have been affected by recall bias. While the number of qualitative interviews was sufficient to gain saturation of themes, the sample size was low due to the limited size of the research team and UAG.

SUMMARY/CONCLUSIONS
The LAPCD study introduced a novel approach by integrating PPI into the study as a separate workstream that contributed to each of the other five workstreams. This enabled the UAG to be involved early in the study and to contribute to every stage of the LAPCD study.
It provided facilitating contexts (clear aims and roles of PPI, equitability with the research team, strong relationships between the UAG and research team, and perceived confidence and support around PPI) and facilitating mechanisms (planned time and resources for PPI available from the start; development of documentation for engagement, such as terms of reference; clear communication channels; involvement of the UAG in all team meetings; and social activities to foster strong relationships). Beneficial impacts on the study were reported by both researchers and UAG members. Personal benefits were reported by UAG members and researchers, which may have fostered commitment and influenced future attitudes to PPI. This paper provides an example of reporting of PPI using the GRIPP2 guidelines, to contribute to standardised reporting of PPI in research.

Twitter Jo Brett @BrettDennis

Acknowledgements The work was funded by Prostate Cancer UK and the Movember Foundation. We would like to thank all of the service users and researchers who gave up their time to take part.

Contributors JB and HB: project inception, management, questionnaire design, development of qualitative topic guides, analysis, write up. ZD: development of topic guide, interviews, analysis of qualitative data, write up. FM: development and distribution of questionnaire, analysis, write up. JK and DC: questionnaire design, qualitative topic guides, analysis, write up. EW and PW: input into development of questionnaire and topic guide, analysis, write up. AG and AWG: principal investigators of the LAPCD study, commented on questionnaire and topic guide, input into interpretation of findings, write up. JB is responsible for the overall content as the guarantor. All authors contributed to this manuscript and approved the final draft.

Funding This study was funded by the Movember Foundation, in partnership with Prostate Cancer UK (HO-LAPCD-14-001).

Competing interests None declared.

Patient and public involvement Patients and/or the public were involved in the design, conduct, reporting or dissemination plans of this research. Refer to the Methods section for further details.

Ethics approval This study involves human participants and was approved by Oxford Brookes University Ethics Committee (ValueAddedLapcd 2018). Participants gave informed consent to participate in the study before taking part.

Provenance and peer review Not commissioned; externally peer reviewed.

Data availability statement Data are available on reasonable request.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.