| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
109078779 | pes2o/s2orc | v3-fos-license | Status of the ATLAS Forward Physics (AFP) Project
The ATLAS Forward Physics (AFP) project plans to add a set of detectors — silicon 3D pixel tracking detectors and QUARTIC time of flight detectors — in the forward region of the ATLAS experiment at the LHC. The AFP detectors will be placed around 210 m from the interaction point and are meant to detect protons produced at small angles. The detectors are to be housed in the so called Hamburg beam pipe — a movable beam pipe allowing horizontal movement of the detectors. The AFP is currently under approval with possible installation in 2014/15.
INTRODUCTION
The ATLAS Forward Physics project plans to add a set of detectors to both sides of the ATLAS experiment at the LHC. The final setup will consist of two 3D pixel tracking detectors and one high-resolution time of flight detector on each side of the forward region. These detectors will be placed around 210 m from the interaction point (IP).
The AFP detectors will make it possible to identify forward protons produced at small angles. With this capability, it will be possible to perform standard QCD physics measurements as well as to explore new physics.
Concerning machine conditions, the AFP is designed to operate at high pile-up and will be able to collect data during standard high-luminosity LHC runs.
PHYSICS MOTIVATION AND DETECTOR ACCEPTANCE
Two classes of measurements will be possible with high precision using the AFP [1,2,3]:
• exploratory physics: anomalous couplings between γ and W or Z bosons, exclusive production (magnetic monopoles, Kaluza-Klein resonances or SUSY), etc.;
• standard QCD physics: double Pomeron exchange, exclusive production in the jet channel, single diffraction, γ-γ physics, etc.
These measurements will extend the HERA and Tevatron measurements to the LHC kinematic domain. The AFP is designed to detect protons that have lost energy in diffractive processes. When a proton loses energy in an interaction, it is deflected from the nominal beam by the dipole and quadrupole magnets in the forward region of ATLAS. The acceptance and energy resolution therefore depend on the LHC optics and on the distance of the detectors from the beam. For central production, it will be possible to measure the mass of the produced object independently of the ATLAS detector and with better precision [1,3]. The calculated acceptance for the centrally produced object, as a function of its mass and for different positions of the tracking detector relative to the beam, is shown in Fig. 1 [2].
AFP LAYOUT AND HOUSING
Figure 2 [1] shows a schematic of the long straight section of the ATLAS forward region with the positions of the proposed AFP stations marked. The AFP will consist of two stations:
• AFP1 at 206 m, instrumented with the 3D silicon tracking detector;
• AFP2 at 214 m, instrumented with the 3D silicon tracking detector and the time of flight detector.
The detectors will be housed in the so-called Hamburg beam pipe. The Hamburg beam pipe (Fig. 3) is a detector housing designed to allow detector installation in limited space [1,3]. The beam pipe wall is very thin (less than 300 µm) in the area of the so-called floor and thin window, to minimize both the interaction of beam particles with the wall material and the distance of the detectors to the beam. The Hamburg beam pipe will be used in two lengths: short (with a 100 mm long detector pocket) for the tracking detectors and long (with a 700 mm long pocket) for the timing detectors.
The Hamburg beam pipe will be installed on a moving table to allow horizontal movement of the detectors. To maintain vacuum around the detectors, the whole assembly of the Hamburg beam pipe and detectors will be placed in a secondary vacuum box.
DETECTORS Tracking Detectors
The tracking detectors will consist of six layers of 3D pixel sensors with the FE-I4 as the readout chip [1,3]. These sensors will be encased in cooling plates and housed in the rectangular pocket of the short Hamburg beam pipe.
For Phase-0 (possible installation in 2014/15), the sensors developed for the Insertable B-Layer [4] will be used. The sensors are radiation hard, have a thin edge (less than 100 µm dead zone), and provide a resolution of 10 µm in the horizontal axis and 30 µm in the vertical. The resulting angular resolution, using two detectors separated by 8 m, is about 1 µrad.
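As a rough cross-check of these figures, the short sketch below propagates the quoted 10 µm hit resolution through the 8 m lever arm between the two stations. The error model (two independent hit measurements added in quadrature) is our own illustrative assumption, not a detail taken from the text; it reproduces the O(1) µrad scale quoted above.

```python
# Back-of-envelope estimate of the angular resolution of the AFP tracker pair.
import math

sigma_x = 10e-6    # horizontal hit resolution per station, m (10 um, from the text)
lever_arm = 8.0    # separation between the two tracking stations, m (from the text)

# The proton angle is obtained from the slope between the two stations, so the
# two independent hit errors add in quadrature before dividing by the lever arm.
sigma_theta = math.sqrt(2) * sigma_x / lever_arm
print(f"angular resolution ~ {sigma_theta * 1e6:.1f} urad")  # ~1.8 urad, i.e. O(1) urad
```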
For Phase-I (installation in 2018), edgeless 3D sensors are planned.
Time of Flight Detectors
The time of flight detectors are needed to reduce the background coming from pile-up by determining the primary vertex of the incoming protons [1,3]. From the difference in time of flight, it can be determined whether two protons originate from the same primary vertex.
For Phase-0 (2014/15), QUARTIC detectors with a resolution of 10-20 ps will be used. To achieve the required time resolution, the detectors consist of 4×8 quartz bars. In this configuration, it is possible to perform 8 measurements of the time of flight, each with a resolution of 30-40 ps.
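The combined resolution follows from averaging the independent per-bar measurements; a minimal sketch (assuming the 8 measurements are independent and of equal quality, which is a simplification) is shown below.

```python
# Consistency check of the QUARTIC timing numbers quoted above.
import math

sigma_single = 35e-12   # single-bar time-of-flight resolution, s (30-40 ps, from the text)
n_measurements = 8      # independent measurements along the proton path (from the text)

# Averaging N independent measurements improves the resolution by sqrt(N).
sigma_combined = sigma_single / math.sqrt(n_measurements)
print(f"combined resolution ~ {sigma_combined * 1e12:.0f} ps")  # ~12 ps, within the 10-20 ps target
```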
For Phase-I (2018), better spatial resolution will be needed due to the larger number of incoming protons. Several possibilities are being considered: QUARTIC with quartz fibers instead of bars, a MicroMegas detector, a CVD diamond detector, and an avalanche photodiode and SiPM detector. The SAMPIC chip developed at Saclay is being considered for the readout.
SUMMARY
The AFP is a planned new ATLAS forward detector system, which will add a set of detectors in the ATLAS forward region around 210 m from the IP. The AFP aims to extend the ATLAS capabilities to study diffractive processes.
The detectors are to be fitted in the Hamburg beam pipe and should comprise two sets of 3D pixel sensors and one set of QUARTIC time of flight detectors on each side of the forward region of ATLAS.
The AFP is under approval with possible installation in 2014/15.
FIGURE 1. Calculated acceptance for the centrally produced object as a function of its mass, for different positions of the tracking detector relative to the beam.
FIGURE 2. Schematic of the long straight section of the ATLAS forward region with the positions of the AFP stations at 206 and 214 m marked. D1 and D2 are dipole magnets and Q1-Q7 are quadrupole magnets.
FIGURE 3. Design of the Hamburg beam pipe with the regions of the thin window and floor marked. In these regions, the wall is very thin to minimize the interaction of beam particles with the wall material and the distance of the detectors to the beam. | 2019-04-12T13:55:45.206Z | 2013-04-16T00:00:00.000 | {
"year": 2013,
"sha1": "bd9c7f64cec8596b9a1083160a9d08ddcff515bc",
"oa_license": "CCBY",
"oa_url": "http://cds.cern.ch/record/1495797/files/ATL-LUM-PROC-2012-001.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "3b69f37948478f4c7f784dda373a37b63081b55a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
807628 | pes2o/s2orc | v3-fos-license | Automated External Defibrillators and Emergency Planning for Sudden Cardiac Arrest in Vermont High Schools
Background: Sudden cardiac death (SCD) events are tragic. Secondary prevention of SCD depends on availability of automated external defibrillators (AEDs). High school athletes represent a high-risk group for SCD, and current efforts aim to place AEDs in all high schools. Hypothesis: The prevalence of AEDs and emergency planning for sudden cardiac arrest (SCA) in Vermont high schools is similar to other states. Understanding specific needs and limitations in rural states may prevent SCD in rural high schools. Study Design: Cross-sectional survey. Methods: A survey was distributed to all 74 Vermont high school athletic directors. Outcome measures included AED prevalence, AED location, individuals trained in cardiopulmonary resuscitation (CPR) and AED utilization, funding methods for AED attainment, and the establishment of an emergency action plan (EAP) for response to SCA. Results: All schools (100%, 74 of 74) completed the survey. Of those, 60 (81%) schools have at least 1 AED on school premises, with the most common location for AED placement being the main office or lobby (50%). Larger sized schools were more likely to have an AED on the premises than smaller sized schools (P = 0.00). School nurses (77%) were the most likely individuals to receive formal AED training. Forty-one schools (55%) had an EAP in place for response to SCA, and 71% of schools coordinated AED placement with local emergency medical services (EMS) responders. Conclusion: In Vermont, more than two-thirds of high schools have at least 1 AED on school premises. However, significant improvement in the establishment of EAPs for SCA and training in CPR and AED utilization is essential given the rural demography of the state of Vermont. Clinical Relevance: Rural high schools inherently have longer EMS response times. In addition to obtaining AEDs, high schools must develop a public access to defibrillation program to maximize the chance of survival following cardiac arrest, especially in rural settings.
following out-of-hospital cardiac arrest, with survival rates declining by 7% to 10% with every minute that defibrillation is delayed. 1 Given the favorable outcomes with employment of public access defibrillation and the desire to prevent sudden cardiac death in the school-aged population, there has recently been a widespread movement toward the implementation of AEDs in many high schools across the country. The purpose of this investigation was to determine the prevalence of AEDs and emergency planning for SCA in high schools throughout the state of Vermont.
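To make the time pressure behind this movement concrete, the toy model below projects survival as a function of defibrillation delay. Both the linear per-minute decline and the baseline survival value are simplifying assumptions for illustration, not data from this study.

```python
# Illustrative projection of cardiac-arrest survival vs. defibrillation delay.
def survival_after_delay(minutes, baseline=0.60, decline_per_min=0.10):
    """Estimated survival, assuming it falls by ~7-10 percentage points for each
    minute that defibrillation is delayed (the 0.60 baseline is a placeholder)."""
    return max(0.0, baseline - decline_per_min * minutes)

for delay in (1, 3, 5, 10):  # minutes from collapse to first shock
    print(f"{delay:2d} min delay -> ~{survival_after_delay(delay):.0%} survival")
```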
Methods
A 22-question Web-based survey (see online appendix available at http://sph.sagepub.com/content/suppl) was developed, and invitations to participate were distributed via electronic mail to all high school athletic directors in the state of Vermont (n = 74). The survey was adapted from the Web-based National Registry for AED Use in Sports (http://www.AEDSPORTS.com). Subsequent follow-up with redistribution of the survey took place at 4-, 8-, and 12-week intervals after the initial distribution. After 12 weeks, all remaining nonresponders were contacted by telephone in attempts to complete the survey. The survey was first distributed in May 2011, and all surveys were completed by December 2011. All identifying information was kept confidential, and only the authors conducting the study knew the individual results of each responder.
The survey consisted of questions pertaining to the population of each high school, the number of students participating in athletics, the existence of an emergency action plan (EAP) in response to SCA, the prevalence of AEDs in each high school, those formally trained in cardiopulmonary resuscitation (CPR) and AED utilization, the coordination of AED location in each school with local emergency medical services (EMS), as well as the location, cost, and funding for the AEDs. Categorical data were analyzed utilizing the Cochran-Armitage test for trend (SAS v.9.3 SAS Institute, Cary, North Carolina), and 2-sided P values were reported.
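For readers unfamiliar with the trend test named above, a minimal sketch is given below. The implementation follows the standard Cochran-Armitage formula; the school counts are hypothetical placeholders, not the study's actual data.

```python
# Minimal Cochran-Armitage test for trend across ordered school-size groups.
import numpy as np
from scipy.stats import norm

def cochran_armitage(successes, totals, scores=None):
    """Two-sided Cochran-Armitage trend test for a 2 x k contingency table.
    successes[i] = schools in size group i with an AED; totals[i] = group size."""
    r = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    s = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    p_bar = r.sum() / n.sum()                        # pooled AED prevalence
    t = np.sum(s * r) - p_bar * np.sum(s * n)        # trend statistic
    var_t = p_bar * (1 - p_bar) * (np.sum(s**2 * n) - np.sum(s * n) ** 2 / n.sum())
    z = t / np.sqrt(var_t)
    return z, 2 * norm.sf(abs(z))

# Hypothetical small / medium / large school counts (with AED, total per group):
z, p = cochran_armitage(successes=[14, 24, 22], totals=[24, 27, 23])
print(f"Z = {z:.2f}, two-sided P = {p:.4f}")
```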
An online survey tool (SurveyMonkey.com, LLC, Palo Alto, California) was used to help implement the questionnaire. Using the site, we were able to track responses as well as collect and analyze our results. This study was exempted from institutional review as this was public information, placed no patient population at risk, did not apply any form of intervention toward patients, and was designed to gather only descriptive information about high schools in the state of Vermont.
Results
All high schools in Vermont (100%, 74 of 74) responded to our survey. In all but 1 instance, the survey was completed by the high school athletic director. In the 1 exception, the athletic director forwarded the survey on to the school nurse, as it was felt that the nurse was more capable of providing the information needed to complete the survey. Although all 74 schools responded, 4 schools did not fill out the survey to completion. However, the categorical data that those 4 schools did provide were included within our statistical analysis. Of the 74 schools that responded, 60 (81%) reported having at least 1 AED on site. When comparing size of school to AED prevalence, larger schools (>800 students) were more likely to have an AED on school grounds than smaller schools (<400 students) (Z score = -3.13, P = 0.00) ( Table 1). Of the 60 schools with an AED, 56 completed the survey in its entirety, providing information on the total number of AEDs in place. Of the 56 schools, 23 (41%) had 2 AEDs, 7 (13%) had 3 AEDs, and 4 (7%) had 4 or more devices. The remaining schools (39%) had a single device on site ( Figure 1).
The most common sites in which AEDs could be found within the high schools were the main office or lobby (50%) and the nurse's office (45%). Only 30% of schools kept an AED within the gymnasium, 29% kept an AED within the training room, and 16% kept an AED at athletic fields or arenas (eg, hockey rinks). Of schools with AED(s), 71% reported that they had coordinated the location of their AED(s) with local EMS responders and that those responders were able to access the AED(s) upon arrival.
In schools with AEDs, the school nurse (77%) was most likely to receive formal AED training. Other school personnel trained in the utilization of an AED included teachers (57%), administrators (54%), and athletic trainers (48%). Coaches (45%) were one of the least likely to have training in AED use. CPR training among school personnel followed a similar trend, with 74% of nurses being CPR certified. After school nurses, 47% of coaches, 45% of athletic trainers, and 43% of administrators were trained in CPR.
When asked about the cost for each high school to acquire a single AED, 48% of schools reported that it would cost them between $1001 and $2000 per AED. Of the various means available to finance the purchase of an AED, survey participants reported that 46% of schools purchased their AED(s) with funding from the school budget, while 39% were funded through grants and 27% were given as donations.
Of the 74 high schools in Vermont, 41 (55%) have an EAP in place to respond to SCA. However, of those 41 schools, only 15 (37%) practice and review their EAP at least once annually. There were no reported incidents of AED utilization or sudden cardiac arrest within the past year prior to survey completion.
Discussion
Despite the overall success in cardiac arrest survival rates with the placement of AEDs in public locations, their acquisition by high schools across the country has been slow to materialize. In 2001, only 25% of high schools in the states of Iowa and California had at least 1 AED on school grounds. 11 Perhaps one reason that schools may be hesitant to purchase AEDs is that the incidence of SCA in children and adolescents was initially felt to be much lower than in adults. 12 Previous studies on the prevalence of sudden cardiac death (SCD) in high school athletes in Minnesota estimated the incidence of SCD to be in the range of 1:200,000 per year. 12 Newer studies suggest that the annual incidence of SCA in high school student athletes may be higher, approximately 4.4 in 100,000. 7 Population-based data from the Resuscitation Outcomes Consortium Epistry-Cardiac Arrest suggest that in the pediatric population (age <20 years), the overall incidence of out-of-hospital cardiac arrest was 8.04 per 100,000 pediatric person-years. 2 Comparatively, the frequency with which SCA occurs in adults (age >35 years) appears to be much higher, occurring in roughly 1:1000. 1 Over the past decade, the number of high schools across the country implementing AEDs appears to be on the rise. In 2004, 143 of the 400 public high schools in Wisconsin (35%) volunteered to take part in Project ADAM, 4 a program designed to educate adults as well as students about SCA in children and adolescents. As part of the program, emphasis on CPR training for graduating seniors was encouraged, and participating schools agreed to place AED(s) on school grounds. 3,4 More recently, high schools in Washington and North Carolina have also shown a growing prevalence of AED placement within their schools (54% and 72%, respectively). 13,15 In Vermont, 60 of the 74 high schools (81%) reported having at least 1 AED.
In addition to the low rates of SCA in the adolescent population, there is also the concern that AEDs are expensive to purchase and place additional stress on already reduced school budgets. In the AEDs in the Schools program, which sought to donate AEDs to high schools in Greater Boston, 27 of 29 schools that received a device as a donation reported a lack of funding as the major deterrent to the purchase of additional devices. 9 Yet, after the initial donation, 25 high schools decided to purchase additional AEDs, with most reporting that the only impetus for purchasing them was information they learned regarding the effectiveness of AEDs. 9 In this investigation, the majority of AEDs were funded through the school budget (46%), while only 27% were funded through donation. Further studies are needed to determine whether financial resources are an obstacle for those schools in Vermont currently lacking sufficient AEDs.
The 2010 US Census Bureau reported the population of the state of Vermont as 625,741, making it the second least populous state in the United States. 16 Of those living in Vermont, 414,480 live in rural areas. Fifteen hospitals and 6 long-term care facilities provide health services to residents who are dispersed over an area of 9,250 square miles. Of these institutions, 8 are identified as critical access hospitals, ensuring health care services to those living in rural areas. 10,18 Given the rural nature of Vermont and that many high schools remain isolated from health care professionals, it is crucial that school personnel who serve as first responders be prepared to act in the event that an SCA occurs. An integral part of first-responder preparedness is formal training in both CPR and AED use. 7 Given their close proximity to high school athletes at the time of collapse, coaches are often the first responders at the time of SCA.
While being trained in CPR and AED utilization is important for those responding to SCA events, that alone is not adequate emergency preparedness. To ensure that precious time is not wasted, schools should have in place an EAP for response to SCA, which should include training of any potential first responder (teachers, coaches, athletic trainers, students, etc) in CPR and AED utilization. In 2007, an interassociation task force released a consensus statement regarding recommendations for emergency preparedness and management of SCA in high school and collegiate athletic programs. In that statement, the essential elements of an EAP include: formulation of an effective communication system, training responders in CPR and AED use, access to an AED for prompt defibrillation, coordination and integration of on-site responder and AED programs with local EMS responders, and practice of the response plan. 6 As our study showed, slightly more than half of the schools in Vermont (55%, 41 of 74) have already established an EAP for SCA. However, of the 41 high schools with EAPs, annual practice and review of these plans was limited to only 15 (37%). This highlights a significant deficiency in the ability of Vermont high schools to plan accordingly for emergency situations and must be a point of emphasis for the future.
In its scientific statement on response to cardiac arrest, selected life-threatening emergencies, and the medical emergency response plan, 1 the American Heart Association recommended the implementation of an AED program in schools meeting at least 1 of the following criteria:
1. A reasonable probability of AED use within 5 years of rescuer training and AED placement, or an episode of SCA has occurred within the previous 5 years.
2. There are children or adults at the school who are believed to be at high risk for SCA.
3. An EMS call-to-shock interval of less than 5 minutes cannot be achieved reliably with conventional EMS services, and a collapse-to-shock interval of less than 5 minutes can be achieved reliably (>90% of cases) by training and equipping lay persons to function as first responders. 1
Given the rural nature of Vermont, it is likely that the call-to-shock interval of less than 5 minutes cannot be achieved reliably. Many schools in Vermont are situated a long distance from the nearest hospital and/or emergency response services, making the implementation of AEDs in schools that much more important. EMS involvement is critical given that they can alert the caller or bystander to the location of the AED as well as offer recommendations on optimal locations for AED placement.
The unique rural demographics of the state of Vermont and the high schools within the state create specific challenges if an SCA were to occur. Long distances to health care facilities and potentially longer EMS response times mean that AED implementation programs within high schools may serve as the only way of achieving life-saving, early defibrillation in rural areas. Proper steps toward emergency preparedness must be taken. While there have been other studies looking at the prevalence and utilization of AEDs in high schools, 13,15 response rates have been low and they have been conducted in larger, more urban areas of the country where immediate access to advanced medical care may be more readily available. To our knowledge, this represents the first study looking at the prevalence of AEDs in high schools within a predominantly rural setting and the first that was able to achieve a 100% response rate. This study establishes deficiencies in emergency preparedness for SCA that may be similar in other rural states. Follow-up work is needed to target the 14 schools without AEDs in an effort to achieve AED implementation for all Vermont high schools. The Vermont Principals' Association Sports Medicine Advisory Committee is aware of our findings, and outreach efforts are underway.
The primary limitation of this study is the inherent limitations of a retrospective survey. Although the study achieved a 100% response rate, recall bias or incomplete information may have been provided. The study was also not designed to investigate the incidence or outcomes of SCA in Vermont high schools. Because of the relatively small number of total high schools, a longer study period would be needed to examine the frequency and outcomes of SCA in this rural state.
Conclusion
In Vermont, the majority of high schools have at least 1 AED on school grounds and approximately half have an EAP in place for SCA. However, only one-third of schools practice and review their EAPs on an annual basis, and significant improvement in the establishment of EAPs for sudden cardiac arrest is needed. High schools in Vermont must work to train more individuals in CPR and AED use and to improve coordination of AED placement with local EMS responders. Further investigation into the barriers that exist to AED acquisition for the remaining Vermont high schools is warranted. | 2018-04-03T05:59:00.507Z | 2013-04-10T00:00:00.000 | {
"year": 2013,
"sha1": "a526b164a7b68a31c2d91894b5284f1b20b8b8a1",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc3806176?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "a526b164a7b68a31c2d91894b5284f1b20b8b8a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232423225 | pes2o/s2orc | v3-fos-license | Resistance to Thyroid Hormone Beta: A Focused Review
Resistance to thyroid hormone (RTH) is a clinical syndrome defined by impaired sensitivity to thyroid hormone (TH) and its more common form is caused by mutations in the thyroid hormone receptor beta (THRB) gene, termed RTHβ. The characteristic biochemical profile is that of elevated serum TH levels in absence of thyrotropin suppression. Although most individuals are considered clinically euthyroid, there is variability in phenotypic manifestation among individuals harboring different THRB mutations and among tissue types in the same individual due in part to differential expression of the mutant TRβ protein. As a result, management is tailored to the specific symptoms of TH excess or deprivation encountered in the affected individual as currently there is no available therapy to fully correct the TRβ defect. This focused review aims to provide a concise update on RTHβ, discuss less well recognized associations with other thyroid disorders, such as thyroid dysgenesis and autoimmune thyroid disease, and summarize existing evidence and controversies regarding the phenotypic variability of the syndrome. Review of management addresses goiter, attention deficit disorder and “foggy brain”. Lastly, this work covers emerging areas of interest, such as the relevance of variants of unknown significance and novel data on the epigenetic effect resulting from intrauterine exposure to high TH levels and its transgenerational inheritance.
INTRODUCTION
The term resistance to thyroid hormones (RTH) refers to the clinical syndrome of reduced sensitivity to thyroid hormones (TH) first described in 1967 (1) and until recently it was synonymous with mutations in the thyroid hormone receptor beta (THRB) gene. In the past decade, mutations in the THRA gene, as well as genetic defects involving TH cell transport and metabolism were added to those of defects of TH action, broadening our understanding of impaired TH sensitivity (2)(3)(4).
This mini-review is dedicated to RTH due to mutations in the THRB gene producing RTHb, whose signature is elevated serum free iodothyronine levels with non-suppressed thyrotropin (TSH) in the absence of other conditions that may produce some of the characteristic test abnormalities. It focuses on emerging concepts, unusual associations and controversies involving diagnosis and management, while providing a succinct overview of RTHb covered in most medicine and specialty textbooks (5,6).
OVERVIEW OF RTHb
As most neonatal screening programs are based on TSH measured in dried blood spots, the precise incidence of RTHb is unknown. Surveys of 80,884 and 74,992 newborns using TSH and T4 measurements identified 2 and 4 infants with THRB gene mutations, indicating a prevalence of 1 in 40,000 and 1 in 19,000 live births, respectively (7,8). Frequency among sexes is equal, whereas prevalence may vary somewhat among ethnic groups. The inheritance of RTHb is typically autosomal dominant. This is explained by the formation of dimers between the mutant and normal (wild-type; WT) TH receptor (TR) that interfere with the function of the WT TRb. Since the first description of a THRB gene missense mutation causing RTHb (9), 236 different mutations in 805 families have been identified. They are located in the functional areas of the ligand (T3)-binding domain and adjacent hinge region (10). In 14% of individuals manifesting the RTHb phenotype, no THRB mutations were identified. These cases, rarely familial, may be caused by mosaicism (11), and it has been postulated that mutations in enhancers, repressors or cofactors may be responsible for this subgroup of RTHb (12).
The distinctive biochemical feature of RTHb is a high serum free iodothyronine level (principally free T4) with normal or high TSH concentration. This discrepant correlation has given rise to the term "inappropriate TSH secretion". Its wide use is deplorable as, in fact, the degree of TSH secretion is appropriate for the reduced sensitivity of the hypothalamic-pituitary axis to TH. Individuals with RTHb maintain a nearly euthyroid state, compensated by the high TH level in concert with the tissue expression level of the mutant receptor. Thus, features of TH deficiency and excess may co-exist, producing sinus tachycardia in the heart, which expresses mainly the WT TRa, and goiter by TSH stimulation, as the pituitary expresses mainly TRb, including the mutant form. Visual disorders may also be present due to retinal photoreceptor dysfunction (13). Serum TSH determination remains the most sensitive test to detect reduced sensitivity to TH. In contrast, serum markers of TH action on peripheral tissues, such as cholesterol, creatine kinase, alkaline phosphatase, osteocalcin and sex hormone-binding globulin, are less reliable unless they are measured before and after administration of T3 (14).
After excluding assay interference as a cause of discrepant thyroid function tests (15), the principal other condition to be considered in the differential diagnosis of RTHb is a TSH-secreting pituitary adenoma (TSH-oma), particularly in the absence of family history. Thus, testing of first-degree relatives is helpful and cost effective. Characteristics of a TSH-oma include failure to suppress TSH after the administration of supra-physiologic doses of T3, failure to normally stimulate TSH with thyrotropin-releasing hormone (TRH) (although exceptions of TSH-omas with TSH response to TRH have been reported), elevated sex hormone-binding globulin levels and an increased ratio of the pituitary glycoprotein α-subunit relative to TSH (16). Co-secretion of growth hormone and prolactin and abnormal pituitary imaging on computerized tomography or magnetic resonance imaging are important diagnostic findings. However, incidental pituitary lesions may be found in up to 24% of patients with RTHb (15), thus increasing the complexity of the differential diagnosis and the value of hormonal investigation and dynamic testing. Conditions that increase serum iodothyronine levels in the absence of thyrotoxicosis must be considered, including familial dysalbuminemic hyperthyroxinemia (FDH). In a recent study by Khoo et al., the presence of the albumin mutation R218H in FDH interfered with the measurement of free T4 and T3 by automated immunometric assays, leading to misdiagnosis of FDH as RTHb or a TSH-secreting tumor (17). The diagnosis of RTHb becomes quite challenging in the presence of concomitant thyroid pathology, a subject addressed in greater detail below. Caution should be exercised in the reduction of TH levels with antithyroid medication and ablative therapies (radioactive iodine or surgery), as this leads to difficulty in the subsequent treatment of hypothyroidism.
COMBINED RTHb AND THYROID DYSGENESIS
The diagnosis of RTHb is challenging, and its management complicated, when it co-exists with other disorders, such as congenital hypothyroidism (CH) and thyroid dysgenesis. Children with RTHb commonly have short stature, goiter and learning difficulties (14) and, in association with CH, will present high serum TSH and may exhibit hypothyroid symptoms when treated with standard levothyroxine doses. Five cases of RTHb with CH due to ectopic thyroid tissue have been reported (18)(19)(20)(21)(22). Of note, the case reported by Guo et al. had a lingual thyroid with a typical RTHb phenotype but no detectable mutations in the THRB gene (21).
Persistent serum TSH elevation is frequently encountered during the early treatment of CH despite a serum T4 level at the upper limit of normal. This has been attributed to delayed maturation of the T4-mediated feedback control of TSH (23). Defining the cause of persistent TSH elevation and addressing it appropriately is of paramount importance, as undertreatment may adversely impact growth and mental development. When non-compliance and suboptimal treatment are excluded by measurement of serum T4 and T3, suspicion for coexistence of RTHb should be raised and, when confirmed, treatment with supraphysiologic doses of levothyroxine should aim to bring the serum TSH to near normal while following growth, bone maturation and cognitive development. When RTHb and ectopic thyroid tissue co-exist, another reason to aim at TSH suppression is to prevent thyroid tissue expansion in anatomic locations, such as the base of the tongue, where it may cause dysphonia and hemoptysis.
AUTOIMMUNE THYROID DISEASE AND RTHb
Autoimmune thyroid disease (AITD) is a common thyroid condition affecting the general population and its coexistence with RTHb has been considered incidental (24,25). However, in a study of 330 individuals with RTHb and 92 unaffected firstdegree relatives, subjects with RTHb had an over 2-fold higher frequency of positive thyroid auto-antibodies (26), suggesting that this association is not coincidental. A proposed pathophysiologic mechanism by the group of Gavin et al. invoked chronic stimulation of intrathyroidal lymphocytes by elevated TSH in RTHb leading to pro-inflammatory cytokine production and thyrocyte destruction (27). Yet, in the study of Barkoff et al., the prevalence of AITD by age group was not influenced by the TRb genotype which argues against high TSH being the cause of AITD (26).
Previous studies have shown that TH activates the immune system by acting on thymic epithelial cells and by direct effect on neutrophils, natural killer cells, macrophages and dendritic cells (28,29). TH augments dendritic cell maturation and induces pro-inflammatory and cytotoxic responses. Given that dendritic cells are involved in the pathogenesis of AITD (30,31), this might be a pathway mediating the association between RTHb and AITD.
VARIABILITY IN RTHb MANIFESTATION
RTHb manifestations can be variable in tissue expression and in severity. The terms "generalized", "isolated pituitary" and "peripheral tissue" resistance have been used to describe different clinical manifestations of RTHb, suggesting tissue variability in the resistance to TH. The term generalized resistance to TH (GRTH) was applied to most patients with RTHb who appear to maintain a euthyroid state, whereas pituitary resistance to TH (PRTH) referred to patients with RTHb who have symptoms of thyroid excess in peripheral tissues or demonstrate changes in peripheral tissue markers compatible with TH action without significant suppression of TSH (32). A single patient with presumed isolated peripheral tissue resistance to TH was reported, in whom administration of a high dose of liothyronine (L-T3) suppressed serum TSH but elicited no clinical signs of TH excess (33). Subsequently shown not to have a THRB gene mutation, this case likely represents acquired reduced sensitivity to TH through deiodinase-3-induced hormone inactivation. The clinical spectrum in RTHb is quite broad and overlapping, even among carriers of the same THRB mutation and within the same family, suggesting that the classifications of generalized and pituitary RTHb are rather semantic labels for a varying range of clinical signs and symptoms resulting from altered sensitivity to TH (34)(35)(36).
In some instances, the variability in the severity of the resistance to TH is readily explained on the basis of the character and position of the genetic defect. Homozygous THRB mutations are clinically more severe as they lack a WT TRb and interfere with the function of the WT TRa through heterodimerization (37,38). Frame-shift mutations, producing a nonsense extension of the TRb carboxyl terminus, interfere not only with ligand binding but also with the interaction of cofactors (39). Similarly, mutations with near-normal ligand binding can interfere with function through impaired binding to DNA (R243Q/W) (40,41), and others (L454V and R383H) have altered binding to coactivators or corepressors (32,42,43), leading, as in the case of R429Q (44), to more prominent suppression of TSH through a predominant effect on genes negatively regulated by TH. Alberobello et al. (45) showed that when a single nucleotide polymorphism located in an intronic enhancer was associated with R338W, it produced pituitary-specific over-expression of the mutant TRb2 receptor, illustrating the role of regulatory regions in the tissue-specific manifestation of RTHb.
Differences in the level of expression of the mutant THRB allele relative to the WT in germline transmitted RTHb have been shown in fibroblasts (46), but this was not found in another study (47). However, variable tissue expression of a mutant TRb does occur in de-novo mutations resulting in mosaicism (11). The latter can also explain the failure to identify a THRB gene mutation in individuals with classical presentation of RTHb when the only DNA source was circulating leukocytes. Finally, dramatic differences in phenotype observed among members of a family with the same THRB gene mutation have remained unexplained despite extensive genetic in vivo and in vitro functional studies (48).
CURRENT AND FUTURE TREATMENT APPROACHES
No specific therapy to fully correct the TRb defect is currently available. Based on the mechanism producing the defect, it is clear that developing mutation-specific ligands would abrogate the dominant negative effect of the mutant TRbs, allowing the WT TRb to elicit T3-mediated thyroid hormone action. In 2005, the laboratory of the chemist John Koh synthesized TH analogues able to abrogate the dominant negative effect of the TRb mutants R320C, R320H and R316H when tested in vitro (49). More recently, Yao et al. (50) showed that roxadustat, a drug used to treat the anemia of renal failure, had 3- to 5-fold higher binding to the TRb mutants V264D, H435L and R438H than T3. However, none of these agonists have been tested in vivo. Similarly, the development of cell- and tissue-specific TH antagonists could reduce the cardiotoxic effects of high serum TH levels acting on the WT TRa predominantly expressed in the heart. Therefore, as of this writing, management of RTHb is tailored to the individual's symptoms resulting from either tissue TH excess or deprivation. Goiter, hyperactivity and mental "clouding" are clinical features that benefit from judicious treatment with L-T3 without inducing side effects from TH excess.
Goiter is frequently observed in individuals with RTHb but is usually of little consequence. However, in the case of a larger symptomatic goiter, a surgical approach is usually ineffective, as the goiter tends to recur. Therefore, it is logical to target TSH suppression to inhibit thyroid gland growth (51). An approach of administering supraphysiologic doses of T3 every other day (250 µg in the case of TRb R243Q) was successful in drastically reducing goiter size in a young patient without inducing thyrotoxic symptoms, as serum T3 rapidly declined, reaching levels lower than baseline before the ingestion of the next L-T3 dose (52). The rationale is to deliver a large dose of the short-lived L-T3 to achieve a very high peak serum level, suppressing the TSH below 0.1 mIU/L to inhibit thyrocyte growth, without sustaining elevated TH levels long enough to cause thyrotoxic symptoms (52). Thyroid nodules are quite prevalent in the general population and thus may occasionally co-exist with RTHb. Although the majority of thyroid nodules are benign and do not require surgical management, there are a few reported cases of papillary thyroid carcinoma in patients with RTHb. In these cases, thyroidectomy and radioactive iodine ablation to prevent disease recurrence result in lifelong levothyroxine replacement therapy and, in RTHb, persistently high serum TSH. Although the outcomes in the reported cases were fortunately not unfavorable, levothyroxine therapy is challenging and supraphysiologic doses are often needed to maintain serum TSH at the lowest tolerable level (53). Alternative options to consider include 3,3′,5-triiodothyroacetic acid (Triac), a thyroid hormone analogue with thyromimetic effects on pituitary and liver tissue that may be used to suppress TSH, and combination of levothyroxine with a beta-blocker to alleviate tachycardia, along with calcium and vitamin D supplementation to prevent acceleration of bone loss. Lastly, a surveillance strategy may be considered for occult micro-papillary thyroid carcinomas with low potential for aggressive progression.
Attention deficit hyperactivity disorder (ADHD), reported in 48-83% of individuals with RTHb, is treated using conventional drugs. When such medications are ineffective, treatment with L-T3 was found beneficial in reducing impulsivity in 5 of 8 and hyperactivity in 4 of 7 individuals with RTHb and ADHD, but not in individuals with ADHD only (54). Every-other-day L-T3 therapy was also effective in improving the insomnia and hyperactivity of a young child with a severe RTHb phenotype intolerant to daily L-T4 therapy (55).
The success of treatment with intermittent high-dose L-T3 in improving brain function seems to be linked to the reduction of serum T4, a hormone more readily available to the brain, which expresses predominantly TRa, providing a locally thyrotoxic environment. This is the rationale for considering a block-and-replace strategy, proposed by Dr. Alexandra Dumitrescu and used by the senior author to ameliorate the "foggy brain" and anxiety occasionally reported by RTHb patients, whereas beta blockade may be employed to help with tachycardia.
Lastly, Triac, with higher affinity than T3 for several TRb mutants, may be used to diminish the dominant negative effect of a TRb mutation. Further, despite its short half-life, Triac can effectively reduce TSH with a lesser thyromimetic effect on peripheral tissues (56). Triac therapy has been used in a few RTHb cases and was found beneficial in partially alleviating thyrotoxic symptoms, including tachycardia, excessive perspiration, attention deficit disorders, as well as goiter. This was the case in patients harboring mutations in the ligand-binding domain (residues 310-353 and 429-460), whereas two cases with mutations in the hinge region were refractory to Triac (56,57). Notably, in a pediatric case of a homozygous R243Q mutation with features of thyrotoxicosis and early dilated cardiomyopathy, the combination of Triac with methimazole resulted in reduction of thyroid hormone levels and normal TSH, accompanied by a lower basal metabolic rate and improved growth and cardiac function (58).
A summary of recommendations to guide clinical management of subjects with RTHb is presented in Figure 1.
THE IMPACT OF TRb VARIANTS OF UNKNOWN SIGNIFICANCE
The development of next generation sequencing (NGS) and its increased availability in clinical practice has led to the identification of variants of unknown significance (VUS). These include variants of the THRB gene not previously reported to be associated with RTHb. The interpretation of such genetic reports, particularly missense mutations, poses a problem for practicing physicians: how to explain the findings to the patient and how to proceed with future care. In vitro functional analyses of VUS are not commercially available, and results cannot be deduced with certainty even when they are.
THRB gene mutations are clustered in three regions of the ligand-binding domain of the TRb. Yet a major region devoid of mutations ("cold area") contains CG-dinucleotides, which are mutagenic hot spots. Artificial mutations created in these CGs produced TRbs weak in dominant negative effect, explaining the failure to identify mutations in this region of the receptor (59). This is explained by the fact that the same region is included in the dimerization domain. This region originally encompassed codons 348-437. Later, with the identification of THRB gene mutations causing RTHb, the "cold region" was narrowed down to encompass codons 384-425 (32,60). Within this region, 12 variants (P384L, G385R, L386V, E390D, R391K, D397G, S398G, N408S, H413D, V414M, K420R, and V425L) were reported in the gnomAD database without information regarding clinical phenotype (61). Although most variants are considered benign based on in silico prediction algorithms, conflicting predictions were made for the P384L, D397G and K420R variants, and the G385R variant was considered damaging (62). Recently, a 48-year-old patient with AITD, treated with levothyroxine, was found to have high free T4 with non-suppressed TSH. A mutant TRb G385E was identified and reported as a VUS. Family screening uncovered the same mutation in relatives with normal thyroid function, suggesting that this mutation may not be responsible for the abnormal thyroid pattern (63). Similarly, the G339S variant was identified in a family with AITD after an individual was misdiagnosed with RTHb, but the same variant was then found in several family members with normal thyroid function, making it unlikely that the G339S variant is causally related to a RTHb phenotype (24).
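The positional reasoning above lends itself to a simple annotation step; the sketch below flags whether a reported THRB missense variant falls inside the mutation-sparse cold region (codons 384-425). The variant list comes from the text; the helper function and its parsing format are our own illustrative assumptions.

```python
# Flag THRB missense variants that fall in the "cold region" (codons 384-425).
import re

COLD_REGION = range(384, 426)  # codons 384-425 inclusive, per the text

def codon_of(variant):
    """Extract the codon number from a missense variant string such as 'P384L'."""
    match = re.fullmatch(r"[A-Z](\d+)[A-Z]", variant)
    if not match:
        raise ValueError(f"unrecognised variant format: {variant}")
    return int(match.group(1))

variants = ["P384L", "G385R", "L386V", "E390D", "R391K", "D397G", "S398G",
            "N408S", "H413D", "V414M", "K420R", "V425L", "R243Q"]  # R243Q: known pathogenic control
for v in variants:
    where = "cold region" if codon_of(v) in COLD_REGION else "outside cold region"
    print(f"{v}: {where}")
```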
The above examples illustrate that in silico prediction algorithms may not always be reliable when studying the functional relevance of VUS. Genotype-phenotype co-segregation among family members is useful in characterizing the functional impact of THRB mutations. Computational resources that factor in protein-specific functional domains may offer some prediction of the functional relevance of VUS but should not be the basis guiding clinical decision making.
EPIGENETIC EFFECT OF RTHb AND ITS TRANSGENERATIONAL INHERITANCE
The first body of evidence on fertility and pregnancy outcome in RTHb came from studies in a large Azorean kindred harboring the R243Q mutation. Fertility was not affected and, contrary to women with thyrotoxicosis, RTHb did not produce an increase in premature labor, stillbirth or pre-eclampsia, in agreement with the women's euthyroid state despite elevated TH levels (64). However, a significantly higher rate of early miscarriages was observed in women with RTHb compared to spouses of males with RTHb or unaffected first-degree relatives, independent of maternal age and parity. Furthermore, a tendency was seen for these women to miscarry unaffected fetuses rather than fetuses with RTHb, suggesting that the miscarriages occurred due to fetal exposure to incongruently high TH levels. In addition, unaffected newborns of mothers with RTHb had significantly lower birth weight and suppressed TSH at birth compared to offspring of unaffected mothers, arguing that they were exposed to a hypercatabolic intrauterine environment of high TH concentration, whereas infants with RTHb were protected from the toxic effect of TH excess. Of note, when women with RTHb carrying unaffected fetuses were given antithyroid medication to avoid free T4 levels more than 20% above the upper limit of normal, the birth weight and TSH levels at birth of their offspring were similar to those of infants with RTHb (65).
In a subsequent study, the long-term effect of intrauterine exposure to high TH levels was examined in WT members of the Azorean kindred. Specifically, the study involved unaffected offspring of mothers with RTHb and offspring of unaffected mothers whose fathers had RTHb, as well as mice mimicking the human phenotype. Unaffected humans and WT mice born to mothers with RTHb and exposed to high TH levels in utero developed reduced central sensitivity to thyroid hormone (RSTH) that persisted into adulthood (66) (Figure 2). Increased expression of deiodinase 3, the enzyme that inactivates TH, was found in the pituitaries of the WT mice born to dams with RTHb (66). This effect was found to be transmitted through the male but not the female line, with descendants of exposed males likewise exhibiting RSTH (67). Although the exact mechanism of this transgenerational epigenetic inheritance is not fully characterized, it is thought to involve possible modulation of the imprinted deiodinase 3 gene, which regulates local TH availability at a tissue-specific level. It remains unclear whether prolonged exposure to high TH levels could have similar implications in adult life. This deserves further investigation, as such a finding would have implications for the management of larger populations, such as individuals on long-term TSH-suppressive levothyroxine therapy for differentiated thyroid cancer.
DISCUSSION-CONCLUSIONS
The diagnosis of RTHb is challenging and the main condition in the differential diagnosis is TSH-oma. Diagnosis and management of RTHb are more challenging when other thyroid disorders co-exist, such as CH and ectopic thyroid tissue. More recently, an association has been described between RTHb and AITD. Although the causal relation remains unclear, proposed pathophysiologic mechanisms include TSH- or TH-induced stimulation of pro-inflammatory and cytotoxic responses. The observed variability in the clinical manifestation of RTHb can be explained by the type of genetic defect, e.g. homo- vs heterozygosity, frameshift vs insertion/deletion, mutations with predominantly TRb2-mediated action, mosaicism, and the tissue-specific variability in TRb expression, e.g. heart and brain vs pituitary and liver. Management is tailored to control symptoms arising from tissue-specific excess or lack of TH. In small case series, treatment with every-other-day L-T3 was beneficial in improving goiter and ADHD symptoms. When RTHb co-exists with CH, supraphysiologic doses of L-T4 are needed to achieve normal bone and cognitive development. The advances in NGS have led to an increasing frequency of VUS identification, where there may be limited data on their functional relevance beyond in silico prediction models. Caution should be exercised not to guide clinical decision making on the basis of computational resources alone, and to utilize information from genotype-phenotype co-segregation in family members. Transgenerational studies in humans and mice provide evidence of an epigenetic effect induced by RTHb through in utero exposure of WT fetuses to high TH concentration. The resulting reduced sensitivity to TH shows transgenerational inheritance across the male but not the female line and is thought to be mediated via modulation of deiodinase 3, which regulates local TH availability.
The advances in our knowledge of RTHb raise novel questions about TH action outside the hypothalamus-pituitary-thyroid axis, and the emerging concepts on the epigenetic effect of RTHb need to be explored further, as they may have implications for larger populations, such as patients with thyroid cancer on long-term TSH suppression therapy with TH.
AUTHOR CONTRIBUTIONS
TP and SR designed and wrote this manuscript and both conceptually contributed to this work. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported in part by grant DK15070 from the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Diabetes and Digestive and Kidney Diseases or the National Institutes of Health. TP is supported by the NIH T32 grant 5T32HL007609-33. | 2021-03-31T13:18:39.523Z | 2021-03-31T00:00:00.000 | {
"year": 2021,
"sha1": "a3aa9df62d374f43a51ef206cdd63758abdba648",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fendo.2021.656551",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3aa9df62d374f43a51ef206cdd63758abdba648",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266336349 | pes2o/s2orc | v3-fos-license | Mimicking hypomethylation of FUS requires liquid–liquid phase separation to induce synaptic dysfunctions
The hypomethylation of fused in sarcoma (FUS) in frontotemporal lobar degeneration promotes the formation of irreversible condensates of FUS. However, the mechanisms by which these hypomethylated FUS condensates cause neuronal dysfunction are unknown. Here we report that expression of FUS constructs mimicking hypomethylated FUS causes aberrant dendritic FUS condensates in CA1 neurons. These hypomethylated FUS condensates exhibit spontaneous, and activity-induced, movement within the dendrite. They impair excitatory synaptic transmission, postsynaptic density-95 expression, and dendritic spine plasticity. These neurophysiological defects are dependent upon both the dendritic localisation of the condensates and their ability to undergo liquid–liquid phase separation. These results indicate that irreversible liquid–liquid phase separation is a key component of hypomethylated FUS pathophysiology and can cause synapse dysfunction in sporadic FTLD. Supplementary Information The online version contains supplementary material available at 10.1186/s40478-023-01703-w.
Introduction
Fused in sarcoma (FUS) is a DNA/RNA-binding protein in which mutations and altered post-translation modifications (especially hypomethylation of arginine residues) give rise to pathological condensates that cause FUS-associated frontotemporal lobar degeneration (FTLD-FUS) and familial amyotrophic lateral sclerosis (fALS-FUS) [1][2][3]. Under physiological conditions FUS is mainly located in the nucleus, from where it shuttles to the cytoplasm to perform roles at dendritic and axonal compartments. A key feature of FUS, essential for its function, is its ability to undergo liquid-liquid phase separation (LLPS) to form physiologically reversible biomolecular condensates (hereafter "condensates"). These condensates are thought to underpin the role of FUS ribonucleoprotein granules in supporting regulated, specialised protein synthesis in distal neuronal compartments [2][3][4]. The formation of these condensates is normally a reversible process. However, dysregulation of this process is a common feature across FUS-related pathological conditions. In these conditions, the pathological FUS species form stable fibrillar inclusions [1][2][3]. These pathological condensates are typically mislocalised to the cytoplasm of spinal and hippocampal neurons of ALS and FTLD patients [5][6][7], and are characteristic neuropathological features of fALS-FUS and FTLD-FUS. These aberrant cytoplasmic condensates are thought to play a central role in spinal and hippocampal synaptic dysfunction in these disorders [10,11]. However, two distinct biophysical mechanisms accelerate the formation of these pathological fibrillar condensates. In fALS-FUS the increased propensity of FUS to form irreversible condensates is largely driven by the presence of missense mutations. In contrast, in FTLD-FUS the pathological condensation is driven by hypomethylation of arginine residues in FUS [8,9]. These differences in how the pathological condensates are formed raise the possibility that there may also be distinctions in the molecular mechanism(s) by which they induce neuronal dysfunction. To date, much of the research exploring the mechanism of FUS pathology in neurodegeneration has focused on the missense mutations associated with fALS-FUS [10]. Much less is currently known about the molecular/cellular pathobiology of hypomethylated FUS associated with sporadic FTLD-FUS [9].
In FTLD-FUS, which accounts for approximately 10% of all FTLD cases, FUS inclusions have been observed in multiple brain regions, including the hippocampal pyramidal layer [11,12], indicating that FUS inclusions may contribute to the cognitive deficits observed in FTLD. Previous studies have suggested a role for FUS in synapse regulation and have demonstrated interactions with key synapse-associated proteins (e.g., PSD95, GluA1) [13][14][15][16]. Recently, knock-out of FUS in CA1 hippocampal pyramidal cells was shown to alter excitatory synaptic function in a region-specific manner [13]. Collectively, these findings position FUS, and its dysregulation, as a potential driver of FTLD-associated pathophysiology.
Biochemically, in FTLD-FUS, FUS hypomethylation accelerates pathological condensation by increasing the strength of inter- and intra-molecular arginine:tyrosine cation-pi interactions between protons in the guanidine moiety of the arginine side chains and electrons in the aromatic rings of the tyrosine side chains in FUS [2,17]. However, the enzymatic basis of this arginine hypomethylation is unclear. As a result, suitable models have not been available to investigate how hypomethylated FUS condensates cause neuronal dysfunction. Specifically, it is unknown whether the neuronal dysfunction arises from the dysregulated localisation or the heightened LLPS ability of hypomethylated FUS, or both.
Previously we have shown that the biophysical effects of FUS hypomethylation can be discretely modelled by increasing the number of arginine residues (by 9, 16 or 21 extra arginines) in the poorly conserved, intrinsically disordered, low complexity domain (LCD) of FUS [2]. These constructs produce properly folded FUS proteins whose CD spectra are indistinguishable from wild-type methylated FUS or wild-type hypomethylated FUS [2]. However, they display an arginine-dose-dependent increase in the cation-pi drive and accelerated formation of irreversible fibrillar FUS condensates upon ageing or upon expression in cells. The resulting pathological FUS condensates exhibit aberrant biophysical and functional properties similar to those of hypomethylated FUS purified from human FTLD-FUS brain and from adenosine dialdehyde (AdOx)-treated cells [2,3]. Specifically, these pathophysiological properties include binding to fluorescent dyes like pFTAA, and solubility characteristics similar to FUS condensates in FTLD-FUS [2,3]. In the current experiments we use the FUS construct with 16 extra arginine residues (FUS-16R) because we have previously shown that FUS-16R displays properties, both in recombinant protein and in cell-based experiments, that most closely mimic the modest degrees of arginine hypomethylation observed in FTLD-FUS brain and AdOx-treated cells [2,3].
The conventional approach to investigating the pathobiology of hypomethylated FUS in FTLD-FUS has been to inhibit arginine methylation using small molecules such as AdOx, a global methyltransferase inhibitor. However, AdOx causes broad changes in one-carbon metabolism and results in the hypomethylation of DNA and numerous other proteins in addition to the demethylation of FUS, so its effects on neuronal function are likely to be much broader than hypomethylation of FUS alone. The "hypomethylation-mimicking" FUS constructs (especially FUS-16R) therefore provide a discrete, powerful new tool that circumvents the limitations of AdOx. Specifically, they allow investigation of the effects of the increased cation-pi-driven condensation of FUS that arises from FUS hypomethylation.
Here we capitalise on this tool to mimic hypomethylation of FUS in live neurons and reveal how hypomethylated FUS condensates dysregulate synaptic function. To dissect whether over-condensation or specific properties of the FUS condensates are responsible for pathological progression, we created two modified FUS-16R constructs. One carries a powerful C-terminal SV40 nuclear localisation signal (FUS-16R-NLS) that retains the FUS-16R protein in the nucleus. The other carries 27 tyrosine-to-serine substitutions in the low-complexity domain, thereby reducing the cation-pi drive and impairing LLPS (FUS-16R-LLPS) [18–21]. We then applied these constructs to investigate the dynamics and activity-induced recruitment of FUS in the CA1-Schaffer collateral synapse circuit model of the hippocampus [2,22]. The experiments outlined below demonstrate that stable, fibrillar condensates form in an activity-dependent fashion and cause synaptic dysfunction. This synaptic dysfunction depends both on the ability of FUS to form pathological condensates and on the localisation of these condensates in distal neuronal compartments.
FUS-16R-induced condensates in the soma and dendrite exhibit spontaneous and activity-induced movement
We initially compared the localisation and dynamics of tagged FUS condensates which mimic hypomethylation of FUS (FUS-16R) and wild-type FUS (FUS-WT), expressed via biolistic transfection in CA1 hippocampal neurons (Fig. 1). FUS-WT is present as abundant nucleoplasmic granules, the predominant physiological subcellular location for wild-type FUS when expressed at endogenous levels. Interestingly, in our model we did not observe any FUS-WT at synaptic locations [14–16]. In contrast, FUS-16R forms modest numbers of granules in the soma and nucleus, together with abundant granular assemblies in dendrites (Fig. 1A). Within the apical dendrite, FUS-16R condensates exhibit dynamic movement and translocate to dendritic spine-like structures (Fig. 1B, C). Additionally, the FUS-16R condensates were not affected by incubation with 1,6-hexanediol, a compound known to disassemble biomolecular condensates [20,23], suggesting that FUS-16R formed condensed assemblies in the form of gels and/or fibrillar aggregates (Additional file 1: Fig. S1).
Next, we examined whether neuronal activity alters the dynamics and localisation of the FUS condensates within spines and dendrites. To test this, we utilised the red-shifted channelrhodopsin Chrimson [24] to optically induce dendritic depolarisation. We found that Chrimson-mediated depolarisation (647 nm, 200 ms, 0.4 Hz, 5 min) increased the movement of the FUS-16R condensates in apical dendritic regions of the CA1 neuron (p = 0.009, paired t-test; n = 7 cells per group; Fig. 1D, E). To further explore this phenomenon, we next asked whether single-spine activation with uncaged glutamate (MNI-glutamate) was sufficient to recruit FUS condensates to the activated spines. We found that uncaging glutamate at the spine induced a transient increase in FUS-16R in the spine (F(4,28) = 4.732, p = 0.0048, one-way RM-ANOVA; Fig. 1F).
Validation of NLS and LLPS FUS-16R constructs
We have previously shown that the addition of 16 extra arginine residues to FUS increases cation-pi interactions and drives LLPS and subsequent fibril formation [2]. In the present work, we wished to investigate which properties of FUS-16R condensates could be responsible for driving pathophysiology, and we therefore utilised previously validated mutations to alter key properties of FUS-16R [19–21]. Specifically, we enforced nuclear-only localisation of FUS-16R (FUS-16R-NLS [20]) by the addition of a C-terminal SV40 nuclear localisation signal, or impaired the ability of FUS-16R to undergo LLPS (FUS-16R-LLPS [19]) by the addition of 27 tyrosine-to-serine substitutions in the N-terminal domain (Additional file 1: Fig. S2). The various FUS constructs were expressed in HEK cells and, as expected, FUS-WT, FUS-16R and FUS-16R-NLS formed clear, distinct condensates while FUS-16R-LLPS did not. These findings indicate that the FUS-16R-LLPS mutations impair the ability of FUS-16R to undergo LLPS and form FUS aggregates (Fig. 2A). Subsequently, we biolistically transfected the constructs into CA1 neurons and observed that FUS-WT and FUS-16R-NLS remain nuclear bound, FUS-16R forms condensates throughout the neuron, and FUS-16R-LLPS is observed throughout the neuron but does not form punctate condensates (Fig. 2B). Taken in conjunction with the previously published validation of these mutations, these data confirm the specificity of the mutations introduced into FUS-16R.
FUS-16R condensates impaired single spine plasticity in a manner dependent upon dendritic FUS localisation and pathological condensation
We next wished to discover whether the observed changes in synaptic protein dynamics might affect the induction of activity-dependent dendritic spine plasticity in CA1 neurons. Activity-dependent spine plasticity is associated with both functional and structural modification of dendritic spines [26], and underlies the cellular and molecular mechanisms of learning and memory [26,27]. To accomplish this, we measured changes in single spine head size before and after the uncaging of glutamate (MNI-glutamate) at apical dendritic spines (Fig. 4).
The presence of dendritic FUS-16R condensates can impair the fluorescence recovery of photobleached PSD-95
FUS binds RNA and plays a key role in RNA transport and translation [3,28]. FUS also interacts with key synaptic proteins such as PSD-95 [14]. Consequently, we wished to determine whether the abnormal localisation and condensation of FUS-16R in synapses might cause changes in the dynamic expression of synaptic proteins, which could underpin the observed functional impairments. In this experiment we used PSD-95 as a representative exemplar and applied a fluorescence recovery after photobleaching (FRAP) approach. We found that the FRAP recovery of PSD-95 was significantly reduced in neurons expressing FUS-16R when compared with control neurons (expressing td-Tomato) and FUS-WT neurons (Fig. 5A–C). Interestingly, in control (td-Tomato-transfected) neurons, inhibition of new protein translation, and therefore translation of PSD-95, via preincubation with anisomycin (40 μM; 30 min) reduced the PSD-95 FRAP recovery to a similar extent as in FUS-16R neurons (p = 0.9996, post hoc Tukey). These findings suggest that FUS-16R condensates impair the normal homeostatic regulation of PSD-95. This could occur through a direct interaction of the FUS-16R condensates with PSD-95 protein or through a sequestering of PSD-95 RNA within the condensates. Further investigation will be required to resolve this question. Finally, we determined that the reduction in PSD-95 FRAP recovery was fully rescued by FUS-16R-NLS and FUS-16R-LLPS (FUS-16R vs FUS-16R-NLS, p < 0.0001, post hoc Tukey; FUS-16R vs FUS-16R-LLPS, p = 0.0013, post hoc Tukey). Indeed, both FUS-16R-NLS and FUS-16R-LLPS exhibited similar levels of PSD-95 FRAP recovery to each other (Fig. 5D, E) and to control and FUS-WT neurons (Fig. 5D, E). Collectively, these experiments again support the hypothesis that both the dendritic mis-localisation of FUS-16R and its propensity to form stable fibrillar condensates significantly disrupt the normal activity-dependent changes in the dynamic expression of at least some key synaptic proteins (i.e., PSD-95).

Fig. 3 FUS-16R induced a reduction of AMPA- and NMDA-R evoked EPSCs in a manner which is dependent on its localisation and ability to undergo liquid-liquid phase separation. A, D, G, J Representative widefield images illustrating the FUS expression and localisation for FUS-WT (A), FUS-16R (D), FUS-16R-NLS (G) and FUS-16R-LLPS (J) transfected neurons.
Discussion
Prior work examining the pathobiology of FUS condensates has focused on condensates induced by fALS-FUS mutations. Three hypotheses have been proposed: (1) nuclear loss of function (e.g., impairment of transcription); (2) loss of the normal cytoplasmic role of FUS; (3) toxic gain of function (e.g., sequestration of RNA/proteins inside condensates) [14,29–31]. These hypotheses are not mutually exclusive. However, the latter hypothesis is supported by several studies [4,32–34], some of which show that driving the stable fibrillar condensates from the cytoplasm into the nucleus reduces synaptotoxicity [35–37]. In contrast, the mechanisms by which arginine-hypomethylated FUS condensates cause neuronal dysfunction in sporadic FTLD-FUS are still poorly understood. A major impediment to the field has been the lack of suitable tools, an obstacle arising from the absence of knowledge about the enzymatic processes that promote the accumulation of hypomethylated FUS. As a result, until recently, the only method for investigating the neurobiological impact of arginine-hypomethylated FUS condensates has been the use of small-molecule inhibitors of one-carbon metabolism (e.g., AdOx). However, these compounds have broad effects on the methylation states of both DNA and multiple other proteins, thereby adding many experimental confounds.
Fortunately, recent work by this and other groups has generated powerful new insights into the biophysical mechanism driving the propensity of arginine-hypomethylated FUS to form stable fibrillar condensates [2,17]. This work reveals that the propensity of hypomethylated FUS to over-condense into stable fibrillar assemblies arises from the increased interaction of protons in the demethylated arginine guanidino moiety with pi electrons in the aromatic rings of tyrosines. We previously took advantage of this biophysical insight to generate a series of FUS protein constructs (epitomised here by FUS-16R) that incorporate additional arginine residues [2]. These additional arginine residues increase the cation-pi drive and thereby increase the propensity of hypomethylated FUS to condense into ensembles that mimic the biophysical properties of demethylated FUS. Crucially, the CD spectra of FUS-16R, wild-type physiologically methylated FUS and wild-type hypomethylated FUS are indistinguishable [2], indicating that the increased condensation propensity of FUS-16R is not due to simple misfolding and aggregation [2].
Using this molecular tool, we now demonstrate that FUS-16R causes: aberrant accumulation of pathological FUS-16R condensates in dendrites and dendritic spines, which exhibit both baseline and activity-dependent hypermotility and can be recruited into dendritic spines; impaired dynamic expression of a key postsynaptic protein, PSD-95; impaired postsynaptic AMPA and NMDA receptor-mediated excitatory postsynaptic currents; and impaired synaptic remodelling, with a failure to enlarge synaptic spines following glutamate activation.
We show that these synaptic effects are dependent upon both the formation and the dendritic mis-localisation of FUS-16R condensates. Crucially, these synaptic effects are rescued by forced re-localisation of FUS-16R into the nucleus (using FUS-16R-NLS). They can also be rescued by reducing the number of tyrosine residues and thus attenuating the cation-pi-mediated condensation of FUS-16R (using FUS-16R-LLPS). We also showed that FUS-16R forms more condensed assemblies (most likely gels and/or fibrillar aggregates). As such, we can state that FUS-16R undergoes LLPS and then progresses to form hyper-condensed assemblies, and it is these hyper-condensed assemblies located in the dendritic regions that result in synapse weakening. These rescue effects are fully congruent with the rescue effects observed in similar experiments in neurons expressing fALS-FUS mutants [32]. However, as the FUS-16R condensates are widespread, it is possible that they could also impair other key machinery within the neurons (e.g., transport, mitochondria), with downstream effects on synaptic function. This is outside the scope of the current study and would require further investigation.
We show that one key downstream effect of pathological FUS-16R condensates is to alter the dynamic expression of key synaptic proteins such as PSD-95. PSD-95 is highly expressed in the postsynaptic compartment and plays an important role in the surface presentation of AMPA receptors and in synaptic structure [36,37,41]. Both PSD-95 and AMPA receptors are critical for long-term synaptic plasticity [38–40]. FUS appears to be intimately linked with these processes: it interacts with the mRNAs for key synaptic proteins (including GluA1 mRNA [15]) and directly interacts with some of these key synaptic proteins, including AMPA and NMDA receptors and CaMKII [14]. We hypothesise that mis-localisation and abnormal condensation of FUS-16R, our mimic of arginine-hypomethylated FUS, could disrupt synaptic function by altering the localisation and/or availability of these key synaptic proteins and their cognate mRNAs.
The work reported here directly supports these hypotheses. We show that FUS-16R condensates, which mimic pathological hypomethylated FUS condensates, influence the dynamics of FUS condensates in dendrites and dendritic spines, and that they affect the expression of PSD-95 protein. These protein expression-based changes are coupled with electrophysiological and synaptic morphology changes that are likely downstream consequences of these protein changes.
Collectively, our findings illustrate that FUS-16R condensates can impair the homeostatic function of PSD-95. However, our work does not presently identify the precise molecular mechanism by which aberrantly stable FUS-16R condensates impact the expression of PSD-95 and other synaptic proteins. For PSD-95, we can propose at least two potential mechanisms by which pathologically condensed FUS-16R RNP granules in dendrites and dendritic spines could disrupt RNA processing for synaptic proteins. Firstly, the presence of the FUS-16R condensates in the dendrites and spines could impair the transport of key proteins such as PSD-95. Secondly, FUS is known to bind mRNAs for synaptic RAS GTPase-activating protein 1 (SynGAP), which is essential for maintaining and stabilising the synaptic/surface expression of PSD-95 [41]. Alternatively, prior work has shown that dendritic localisation of PSD-95 mRNA directly regulates PSD-95 translation [42]; consequently, PSD-95 protein expression could be attenuated by mis-localisation and/or sequestration of these key mRNAs within pathological FUS-16R condensates. While this discussion focuses on PSD-95, we anticipate that other key synaptic components might be similarly influenced and thereby contribute to the synaptic dysfunction. Appropriate additional experiments can be envisaged to address these unanswered questions.
In addition, the role of FUS and FUS mutations is relatively well documented in the presynaptic compartment [14,34,43]. However, there is limited research on the impact of FUS hypomethylation at either the pre- or post-synapse. Our study utilised the organotypic hippocampal slice culture model coupled with biolistic transfection, optimised for a low transfection ratio into CA1 hippocampal neurons, where there is a well characterised pre-post synaptic circuit in which the postsynaptic dendritic spine plays an important role in the molecular mechanisms of learning and memory. Because only the postsynaptic neuron expressed FUS-16R, this experimental design eliminates possible presynaptic disturbances and raises the likelihood that the findings presented in this study arise from the postsynaptic expression of FUS-16R. However, based on the known roles of FUS at the presynapse, further investigation into the potential pathophysiological role of hypomethylated FUS condensates at the presynaptic compartment would be beneficial.
We observed that FUS-16R condensates exhibited both spontaneous and activity-induced movement within the dendritic regions of the CA1 neurons. The molecular basis of this finding is not immediately apparent. RNP granules typically do not have motor protein attachments; however, we have shown that RNP granules can be tethered to the surface of lysosomes via annexin 11 and then hitchhike on classical intracellular motors [44]. Future work will be needed to discern whether the hypermobility reflects persistent links of pathological FUS-16R condensates to these RNP granule transport systems, and whether the activity-dependent recruitment of FUS-16R condensates to dendritic spines directly impairs synaptic function. If so, this might further contribute to impaired regulation of local new protein synthesis in synaptic terminals. Furthermore, conditional knockout of FUS in the hippocampus has been shown to alter excitatory synaptic function and lead to behavioural disinhibition [13], a hallmark of FTLD; in our model, therefore, the loss of synapse function caused by the hypomethylation mimic of FUS could explain the underlying pathophysiology that drives behavioural changes.
Here we present data illustrating a pathophysiological role of hypomethylated FUS in the hippocampus, in keeping with research showing that FUS condensates are observed in patient post-mortem hippocampal tissue [12] and with the known role for FUS in mediating excitatory transmission [13,15]. Specifically, a global hippocampal knock-down of FUS was shown to reduce excitatory postsynaptic currents (EPSCs) and reduce mature (mushroom) spine structures in a manner dependent on GluA1 [15]. Similarly, we observed a reduction in AMPAR- and NMDAR-mediated EPSCs and a loss of spines, potentially indicating that FUS-16R (i.e., hypomethylated FUS) induced a greater synaptic dysfunction. However, a recent study illustrated different roles for FUS in specific hippocampal compartments, showing that regional knock-down of FUS decreased excitatory transmission in the intermediate hippocampus and increased excitatory transmission in the ventral hippocampus [13]. As we did not design the current study to examine different hippocampal subregions, we cannot rule out region-specific pathophysiology.
Recent studies have shown that wild-type FUS RNP granules can be located at synaptic compartments; however, many of these studies utilised super-resolution microscopy or subcellular fractionation [14,43,45], suggesting that the level of FUS located at synaptic compartments in 'healthy' neurons is small in comparison to its abundant expression in the nucleus. Furthermore, the accumulation of wild-type FUS at synaptic compartments has been associated with a pathological phenotype of neurodegeneration [46]. It is therefore not surprising that we were unable to observe wild-type FUS at synaptic compartments during live cell imaging of organotypic slice culture.
There remain several important questions surrounding the neurobiology of pathological FUS condensates (whether from fALS-FUS mutations or FTLD-FUS arginine hypomethylation). For instance, FUS is widely expressed in many cell types. Why, then, are FUSopathies predominantly manifested as neurodegeneration? Why does arginine hypomethylation of FUS predominantly target frontotemporal cortical neurons whereas the fALS-FUS mutants predominantly target upper and lower motor neurons?
An early hypothesis regarding the former question was that, compared to smaller cell types, the extremely elongated corticospinal and spinal motor neurons might be more sensitive to the impaired transport of pathologically condensed fibrillar FUS RNP granules. However, while attractive, this hypothesis does not explain the susceptibility of temporal and hippocampal neurons (with shorter axons). The hypothesis also does not explain the relative resilience of equally long ascending sensory neurons.
However, there is an alternate explanation for the phenotypic differences associated with the pathological condensates induced via arginine hypomethylation (FTLD-FUS) versus those associated with missense mutations (fALS-FUS). The difference in the formation of the condensates (e.g., hypomethylated vs. methylated) may result in subtle differences in the RNA and protein interactomes of wild-type FUS, missense mutant FUS, and hypomethylated FUS. Further supporting this hypothesis is the fact that we observe FUS-16R to form more condensed gel and/or fibrillar aggregates, whereas condensates of the ALS-associated mutant FUS-P525L displayed liquid-like properties [20]. It is conceivable that the differing ALS versus FTLD phenotypes reflect the impact of hypomethylated arginine residues or of missense mutations on different, cell-type-specific cargo elements that are misprocessed by the pathological, stable fibrillar FUS condensates; however, these hypotheses require further investigation.
Conclusion
Mimicking the pathological hypomethylation of FUS (e.g., FUS-16R) induced dendritic condensates and impaired synaptic function. Crucially, we have identified that both the dendritic localisation of the condensates and their ability to undergo LLPS and form stable condensates are essential for driving synapse weakening. These results highlight the formation and localisation of the FUS condensates as key components of hypomethylated FUS pathophysiology. Interestingly, hypomethylated FUS condensates have been observed in sporadic FTD cases. Clearly, these important questions will require additional work. Nevertheless, the experiments described here may have practical applications. Specifically, they provide a potential platform through which to screen and preclinically validate compounds that could be used to therapeutically manipulate the abnormal dendritic FUS-16R phase state (e.g., PhaseScan technology) [47–49]. If successful, such compounds could potentially be used in the symptomatic management of patients with FTLD-FUS even in the absence of an understanding of the enzymology of arginine hypomethylation in FUS.
Animals
All procedures involving animals were carried out in accordance with the UK Animals (Scientific Procedures) Act 1986. Male 7-day-old Wistar rats (Charles River, UK) were used to prepare organotypic hippocampal slices. All animal experiments were given ethical approval by the ethics committee of the University of Bristol or King's College London, United Kingdom (protocol reference U214).
Glutamate uncaging assays
Organotypic slices were submerged in a low-Mg2+ HEPES buffer and images acquired at room temperature on a multiphoton system (Scientifica HyperScope with a Coherent Chameleon Discovery; Nikon 16×, 0.8 NA lens or Nikon 25×, 1.1 NA lens). A region of interest containing dendritic spines located on apical secondary dendritic branches (100–200 μm from the soma) was selected and a small z-stack obtained (0.5 μm step size). This initial image was used to identify the target spine. For induction-of-plasticity assays, a time-lapse acquisition captured the same z-stack every 2 min. Two-photon stimulation (2 Hz, 100 pulses, 10 ms, 5 mW, 720 nm) was aimed at the tip of the spines to uncage 5 mM MNI-glutamate (HelloBio, UK). Spine head area was calculated by summating the z-stack and processing via ImageJ. For glutamate-uncaging-induced FUS-16R granule movement assays, a time-lapse acquisition captured the same z-stack every 10 min. Following a baseline period, glutamate was uncaged (as described above) and time-lapse z-stack images were acquired at 10-min intervals for a further 30 min. The change in FUS-16R was calculated from the change in fluorescence intensity at each interval by summating the z-stack in ImageJ and performing a line scan across the spine head.
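The z-projection plus line-scan quantification described above reduces to a few lines of array arithmetic. The sketch below is a minimal illustration, not the authors' ImageJ pipeline: it assumes the time-lapse has been exported as one (Z, Y, X) NumPy array per 10-min interval and that the line-scan pixel coordinates across the spine head have already been chosen; the function name and arguments are hypothetical.

```python
import numpy as np

def spine_intensity_change(stacks, line_rows, line_cols, baseline_idx=0):
    """Quantify FUS-16R signal at a spine head across uncaging time points.

    stacks: sequence of 3D arrays (Z, Y, X), one small z-stack per interval.
    line_rows, line_cols: integer index arrays tracing the line scan across
    the spine head in the summed projection.
    Returns per-time-point intensity normalised to the baseline time point.
    """
    intensities = []
    for zstack in stacks:
        projection = np.asarray(zstack).sum(axis=0)       # summate the z-stack
        intensities.append(projection[line_rows, line_cols].mean())
    intensities = np.asarray(intensities, dtype=float)
    return intensities / intensities[baseline_idx]        # change vs. baseline
```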
Fluorescence recovery after photobleaching (FRAP) assay
PSD-95 puncta were located at regions of interest on apical secondary dendritic branches. A time-lapse acquisition (16×, 0.8 NA lens) was acquired at 2 Hz for 30 s as a baseline. Photobleaching (2 Hz, 40 pulses, 300 ms, 35 mW, 920 nm, 0.45 µm × 0.45 µm log-spiral shape) was aimed at individual PSD-95 puncta. Immediately following photobleaching, a further time-lapse acquisition was obtained at 2 Hz for 5 min. Offline, a group average was applied to average every two images from the time-lapse, and fluorescence intensity was measured at the target PSD-95 punctum via ImageJ. The fluorescence intensity values were normalised to the baseline intensity.
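The averaging and normalisation steps can be illustrated with a short sketch. This is an assumed reconstruction rather than the published macro: it takes a 1D trace of punctum intensity sampled at 2 Hz (baseline frames first), pairwise-averages consecutive frames (the "group average" above) and divides by the mean pre-bleach baseline.

```python
import numpy as np

def frap_recovery(trace, n_baseline_frames=60):
    """Process a 2 Hz FRAP trace from a photobleached PSD-95 punctum.

    trace: 1D array of mean punctum intensity; the first n_baseline_frames
    samples are the 30 s pre-bleach baseline (30 s x 2 Hz = 60 frames).
    Returns the pairwise-averaged trace normalised to the baseline mean.
    """
    trace = np.asarray(trace, dtype=float)
    trace = trace[: (len(trace) // 2) * 2]         # drop a trailing odd frame
    averaged = trace.reshape(-1, 2).mean(axis=1)   # group-average every 2 images
    baseline = averaged[: n_baseline_frames // 2].mean()
    return averaged / baseline
```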
Spinning disk confocal imaging
Transfected neurons were imaged at DIV 9–12 and DAT 5–7. Slices were submerged in a HEPES buffer and images acquired using a Nikon-Yokogawa spinning disk confocal microscope with a Nikon 100×, 1.10 NA lens.
PSD-95 puncta analysis
Organotypic slices were submerged in a HEPES buffer and images acquired at room temperature on a custom spinning disk confocal microscope (Nikon, Japan) with laser point stimulation (Rapp-Opto, Germany). A z-stack of a region of interest containing a secondary apical dendritic branch was acquired (8 averages per z-frame). From this image a smaller ROI (20 μm) was selected and the z-stack averaged. The ROI was post-processed in Fiji (ImageJ) to reduce background noise. For PSD-95 puncta analysis the image was thresholded, and the thresholded puncta were automatically analysed for area and number.
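A minimal Python equivalent of this thresholding workflow, using scikit-image, is sketched below. The Otsu threshold, pixel size and minimum-area filter are illustrative assumptions: the text says only that the image was thresholded and the puncta analysed for area and number.

```python
import numpy as np
from skimage import filters, measure

def count_puncta(roi_image, pixel_size_um=0.1, min_area_um2=0.05):
    """Threshold a background-corrected ROI and count PSD-95 puncta.

    roi_image: 2D array (averaged z-stack of the dendritic ROI).
    Returns the number of puncta and their areas in um^2.
    """
    mask = roi_image > filters.threshold_otsu(roi_image)  # automatic threshold
    labels = measure.label(mask)                          # connected components
    areas = [r.area * pixel_size_um ** 2 for r in measure.regionprops(labels)]
    areas = [a for a in areas if a >= min_area_um2]       # drop sub-punctum specks
    return len(areas), areas
```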
Spontaneous FUS-16R granule movement assay
The spinning disk confocal microscope was used to identify a dendritic region of interest, as described above. A single z-plane image was acquired for 1 min at 2 Hz, capturing the localisation of the FUS-16R condensates. Offline analysis was performed with the ImageJ plugin MosaicSuite. FUS-16R granules 5 pixels or larger were tracked, and only trajectories which persisted for at least 5 frames were analysed further. Total granule trajectory length (μm/min) and average granule movement (μm/s) were calculated.
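The trajectory filtering and the two movement metrics can be expressed as a short sketch, assuming the tracks have been exported from the MosaicSuite tracker as per-granule (x, y) pixel coordinates; the pixel size and frame interval arguments are placeholders for the actual calibration.

```python
import numpy as np

def summarise_tracks(tracks, pixel_size_um, frame_interval_s, min_frames=5):
    """Compute trajectory length (um/min) and mean speed (um/s) per granule.

    tracks: list of (T_i, 2) arrays of (x, y) pixel positions, one per granule.
    Trajectories shorter than min_frames are discarded, as in the text.
    """
    lengths_um_per_min, speeds_um_per_s = [], []
    for xy in tracks:
        xy = np.asarray(xy, dtype=float)
        if len(xy) < min_frames:
            continue                       # discard short-lived trajectories
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1) * pixel_size_um
        duration_s = (len(xy) - 1) * frame_interval_s
        lengths_um_per_min.append(steps.sum() / (duration_s / 60.0))
        speeds_um_per_s.append(steps.sum() / duration_s)
    return lengths_um_per_min, speeds_um_per_s
```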
Chrimson-induced FUS-16R granule movement assay
A secondary apical dendritic branch was identified and an image acquired on the spinning disk confocal microscope. A single z-plane image was acquired for 1 min at 2 Hz to capture the localisation of the FUS-16R condensates. Immediately following the acquisition, Chrimson was activated at the dendritic region of interest (200 ms, 0.4 Hz, 5 min, 647 nm; Rapp-Opto laser stimulation module), following which a further minute was acquired. Analysis was performed as described above for the spontaneous granule movement assay.
Validation of FUS constructs and 1,6-hexanediol treatment in HEK293T cells
HEK293T cells were cultured in DMEM (high glucose, with pyruvate, without glutamine) plus 10% foetal bovine serum, 1× GlutaMAX and 1× Antibiotic-Antimycotic. Once 80% confluent, cells were transfected with the various FUS constructs (1 μg/μL) using Lipofectamine 2000 (Thermo Fisher Scientific, UK) following the manufacturer's guidelines. Twenty-four hours after transfection, images were acquired using a multiphoton system (Scientifica HyperScope with a Coherent Chameleon Discovery; Nikon 25×, 1.1 NA lens). Average z-projections were created using Fiji (ImageJ). HEK293T cells transfected with FUS-16R were imaged at room temperature on the same multiphoton system (Nikon 16×, 0.8 NA lens or Nikon 25×, 1.1 NA lens). Small z-stacks were acquired (0.5 μm step size), following which 10% 1,6-hexanediol was added to the imaging media. The cells were maintained in the 1,6-hexanediol media for 45 min and subsequently reimaged. The area of the FUS-16R condensates was calculated in Fiji (ImageJ) by thresholding to create a binary image and using the Analyze Particles function to determine the area.
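The before/after area comparison can be sketched as follows. As with the puncta analysis above, the Otsu threshold and pixel size are illustrative assumptions standing in for ImageJ's manual threshold plus Analyze Particles.

```python
import numpy as np
from skimage import filters, measure

def total_condensate_area(image, pixel_size_um=0.1):
    """Total FUS-16R condensate area (um^2) in one field, via a binary mask."""
    mask = image > filters.threshold_otsu(image)   # binarise the field
    labels = measure.label(mask)
    return sum(r.area for r in measure.regionprops(labels)) * pixel_size_um ** 2

def hexanediol_area_change(img_before, img_after, pixel_size_um=0.1):
    """Fractional change in condensate area after 45 min of 1,6-hexanediol."""
    a0 = total_condensate_area(img_before, pixel_size_um)
    a1 = total_condensate_area(img_after, pixel_size_um)
    return a0, a1, (a1 - a0) / a0 if a0 else float("nan")
```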
Statistics
Statistical analysis was performed using GraphPad Prism 9.0 software. Details of individual statistical tests are provided in Additional file 1: Table S1. For most analyses, paired two-tailed t-tests were performed with an alpha level of 0.05. For comparisons across multiple groups, one-way ANOVAs were performed followed by post hoc analysis. Statistical analysis of changes in FUS-16R puncta area following addition of 1,6-hexanediol was performed with a nested t-test. Sample sizes are described in the relevant sections and were based upon preliminary findings of the minimum number of samples required to detect a statistically significant difference in the grouped means, given the observed variance, at a power level of 0.8. All grouped data are presented as mean ± SEM.
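For readers reproducing the analysis outside Prism, the two main designs map onto standard SciPy calls (tukey_hsd requires a reasonably recent SciPy). This hypothetical wrapper is a sketch of the test structure, not the authors' code.

```python
from scipy import stats

def paired_test(pre, post):
    """Paired two-tailed t-test for pre/post measurements on the same cells."""
    return stats.ttest_rel(pre, post)        # (statistic, p-value), alpha = 0.05

def multi_group_test(*groups):
    """One-way ANOVA across groups, then Tukey's HSD post hoc comparisons.

    groups: e.g. measurements for control, FUS-WT, FUS-16R and FUS-16R-NLS
    neurons, each as a 1D array of per-cell values.
    """
    f, p = stats.f_oneway(*groups)
    tukey = stats.tukey_hsd(*groups)         # pairwise post hoc p-value matrix
    return (f, p), tukey.pvalue
```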
Fig. 1 FUS-16R causes the formation of dendritic inclusions which exhibit spontaneous movement that is enhanced by neuronal activity. A Representative confocal images illustrating the somatic and dendritic regions of CA1 neurons expressing either FUS-16R-EYFP (FUS-16R) and td-Tomato (top panels; cells 1–4) or FUS-WT-EYFP (FUS-WT) and td-Tomato (bottom panels; cells 5–8). B Representative straightened time-lapse image of an apical dendritic region transfected with FUS-16R: top panel (1 s), middle panel (6 s). Live imaging allowed tracking of individual FUS-16R condensates, as indicated by the pseudo-colour merged time-lapse image (bottom panel; green 1 s; magenta 6 s). C Quantification of the FUS-16R-EYFP granule trajectory length recorded from individual granules. D Representative pseudo-coloured heat map of FUS-16R condensate intensity in a dendritic region of interest. Dendritic FUS-16R condensates are highlighted (ROI 1–3) and their movement trajectories over a 1-min period are plotted for pre-stimulation (baseline; black line) and post-stimulation (Chrimson depolarisation; red line). E Quantification of normalised FUS-16R condensate movement (averaged 5–6 ROIs per cell, n = 6). Average trajectory length was longer post-stimulation (p = 0.009373, paired t-test). F Representative multiphoton time-lapse (10-min interval) heat maps of FUS-16R intensity at a single CA1 dendritic spine prior to and following single-spine glutamate uncaging. A single spine was stimulated (cyan dot) and the FUS-16R intensity was measured by line scan across the spine head. Histogram (below) illustrating FUS-16R condensate signal following stimulation in the presence (red bars) and absence (grey bars) of MNI-glutamate. Stimulation in the presence of MNI-glutamate significantly increased condensate signal at 20 min (p = 0.0485, post hoc Tukey) and 30 min (p = 0.0214, post hoc Tukey) post-stimulation. **p < 0.01, paired t-test.
"year": 2023,
"sha1": "1ac3d9905a4049def240908eabb2f3149d6785ef",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "955e8eb83038468cabc34c8c5be85bce0e0cd559",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Overexpression of Flii during Murine Embryonic Development Increases Symmetrical Division of Epidermal Progenitor Cells
Epidermal progenitor cells divide symmetrically and asymmetrically to form stratified epidermis and hair follicles during late embryonic development. Flightless I (Flii), an actin-remodelling protein, is implicated in Wnt/β-cat and integrin signalling pathways that govern cell division. This study investigated the effect of altering Flii on the divisional orientation of epidermal progenitor cells (EpSCs) in the basal layer during late murine embryonic development and early adolescence. The effect of altering Flii expression on asymmetric vs. symmetric division was assessed in vitro in adult human primary keratinocytes and in vivo at late embryonic development stages (E16, E17 and E19) as well as adolescence (P21) in mice with altered Flii expression (Flii knockdown: Flii+/−; wild type: WT; transgenic Flii-overexpressing: FliiTg/Tg) using Western blot and immunohistochemistry. Flii+/− embryonic skin showed increased asymmetrical cell division of EpSCs, with an increase in epidermal stratification and elevated talin, activated-Itgb1 and Par3 expression. FliiTg/Tg led to increased symmetrical cell division of EpSCs with an increased cell proliferation rate, elevated epidermal SOX9, Flap1 and β-cat expression, and a thinner epidermis, but increased hair follicle number and depth. Flii promotes symmetric division of epidermal progenitor cells during murine embryonic development.
Introduction
Epidermal progenitor cells (EpSCs) divide both asymmetrically and symmetrically during embryonic skin development [1]. These different forms of cell division maintain a pool of progenitor cells that can give rise to various compartments of the skin, including the stratified epidermis and hair follicles [2]. Epidermal cell division orientation is critical to the process of epidermal stratification, an essential morphogenic process required for the epidermis to act as a functional barrier to the external environment [3], and is implicated in cancer progression, where normal asymmetric cell division (ACD) mechanisms are disrupted [4,5]. Prior to epidermal stratification, the majority of mitotic basal cell divisions occur symmetrically, with cells dividing parallel to the underlying basement membrane [6]. Over time, these symmetrically dividing multipotent stem cells develop to become short-lived EpSCs, generating daughters capable of asymmetric division while maintaining slow cycling [7]. The formation of the epidermis begins at E9.5; from E14 onwards, asymmetric basal cell division occurs with cells dividing perpendicular to the basement membrane, and this is maintained until birth when the epidermis is fully mature [6]. Established through this stratification of the developing epidermis, the skin barrier is formed at E16.5 in Balb/c mice [8]. Beginning at E14.5, ACD is also adopted by developing EpSCs in the basal layer to form hair placodes while generating progenies retaining adult stem cell features [7]. These placodes interact with the dermal papilla and develop into hair pegs that continue to mature into hair follicles at E18.5 and become fully functional, with the ability to produce hair fibre, at P21 in rodents [9,10].
During late embryonic stages of development, when morphogenesis peaks, the majority of EpSCs go through ACD [2]. These EpSCs continue to develop into adult epidermal stem cells with diverse heterogeneity [11]. Nonetheless, foetal EpSCs all have a common proliferative character defined by the distinct expression of progenitor marker delta Np63 (∆Np63) and basal marker keratin 15 (K15) [12,13]. Previous studies have also implicated ∆Np63 as a key factor stimulating ACD during epidermal cell divisions [6,14]. Sex-Determining Region Y-Box 9 Protein (SOX9) has been shown to inhibit differentiation of EpSCs into keratinocytes by stimulating symmetric cell division (SCD) during stem cell self-renewal and cancer progression [15,16]. As a direct downstream target of beta-catenin (β-cat), upregulated SOX9 expression through constitutive β-cat activation has been found to enhance the colony-forming capacity of cancer cells from squamous cell carcinoma [17].
Identified as important regulators of mitotic spindle orientation, integrins participate in the establishment of cell adhesion and polarity in epithelia by interacting with the cytoskeleton, focal adhesion proteins and the extracellular matrix [18]. Extrinsic signals are thought to establish intrinsic polarity through integrin signal transduction during ACD. Together with partitioning-defective proteins (PARs) [19], integrins are thought to determine apical-basal polarity and cell fate during cytokinesis, and their expression is required to limit the proliferation and self-renewal of epidermal stem cells. Silencing of PARs results in an increase in the number of undifferentiated cells in the epidermis [20]. Mechanosensing proteins, including integrin-cytoskeleton regulators, focal adhesion (FA) proteins and actin-remodelling proteins, also have an important role in stem cell division [21]. Following ligand-induced or force-dependent integrin activation, FA proteins are recruited to the activated integrin β1 (Itgb1) complex at the cell cortex during mitosis in order to direct signal transduction from activated Itgb1 to the mitotic spindle [18]. The cytoplasmic adaptor protein talin links F-actin polymers to the cytoplasmic tail of Itgb1, resulting in the recruitment of other adaptor proteins, including paxillin and vinculin, and ultimately initiating focal adhesion kinase (FAK) and integrin-linked kinase (ILK) signalling [22].
The actin-remodelling protein Flightless I (Flii) binds to actin structures and localizes to active cellularization regions during embryonic development [23]. Mutations in the Flii homolog, fli-1, disrupt the anterior/posterior polarity, cytokinesis and ACD in germline development of Caenorhabditis elegans [24]. Additionally, the loss of Flii leads to embryonic lethality in both Drosophila and the mouse [25]. Flii expression increases during development and its over-expression has been shown to impair activation of epidermal stem cells and skin barrier development, and to regulate cellular processes by interacting with integrin-binding proteins including paxillin, vinculin and talin [26,27]. Moreover, the interplay of Flii with Flightless I associated protein 1 (Flap1 or LRRFIP1) has been found to regulate β-cat-dependent transcription [28,29]. Flap1 is an important regulator of canonical Wnt signalling (Wnt), and its interaction with β-cat results in increased β-cat/lymphocyte enhancer-binding factor 1/T cell factor (LEF1/TCF)-activated transcription via recruitment of the coactivator CREB-binding protein (CBP)/p300 to the promoter in the nucleus [29,30]. Identified as a negative regulator of wound repair, reduced Flii levels improve wound contraction, epithelial proliferation and cell migration [31]. Overexpression of Flii during late gestation impairs skin barrier formation by decreasing the expression of tight-junction proteins in the epidermis [8]. Reduced Flii also increases laminin and Itgb1 protein levels in adult skin wounds by interacting with talin to activate integrins and promote integrin-mediated signalling pathways [26]. Flii has further been shown to regulate the regeneration of murine skin appendages, with Flii overexpression leading to longer hair fibre lengths in regenerated hair follicles, increased claw regeneration with elevated epidermal β-cat expression, and significantly thicker nails following proximal digit amputation [32,33].
As Flii has been shown to have a critical role in regulating embryonic development and EpSC activation, the objective of this study was to investigate if altering Flii expression impacted EpSC division symmetry during late embryonic development and adolescence.
The aims were to: (1) determine the mitotic division patterns in adult murine primary keratinocytes in vitro; (2) assess the number of EpSCs, the proliferation rate and the symmetry of EpSC division at different embryonic stages in mice with altered Flii gene expression; and (3) investigate epidermal stratification and the morphogenesis of the stratified epidermis and hair follicles in these mice. Overall, we demonstrate that reduced levels of Flii promote asymmetrical cell division of EpSCs while Flii overexpression promotes symmetrical cell division, with impacts on β-cat and SOX9 signalling.
In Vitro Assessment of Cell Division and Associated Protein Expression in Adult Murine Primary Keratinocytes
To investigate if Flii plays a role in the division orientation of proliferative keratinocytes, the division pattern of primary keratinocytes isolated from Flii+/−, WT and FliiTg/Tg epidermis was characterized by a BrdU-cytD assay. The pulse-chased cells were subsequently stained with anti-BrdU antibody to visualize whether the newly synthesized nuclei inherited the BrdU label asymmetrically or symmetrically. Importantly, arrest of cytokinesis by cytD was confirmed by pH3 staining of duplicated yet unseparated chromatins as well as phalloidin staining of the disrupted actin structure, which appeared granular rather than filamentous (Figure 1A,B). Asymmetric division, resulting in only one nucleus inheriting the BrdU label, and symmetric division, resulting in both nuclei inheriting the BrdU label, were observed in all three genotypes of cells (Figure 1A). No significant difference was observed in the percentage of ACD with BrdU labelling between the three genotypes. However, a significantly increased percentage of SCD was found in FliiTg/Tg primary keratinocytes when compared to Flii+/− and WT counterparts (Figure 1C). To further investigate the role of Flii in β-catenin-regulated SOX9 expression at the protein level, the levels of β-catenin and SOX9 in Flii+/−, WT and FliiTg/Tg primary mouse keratinocytes (n = 4) were characterized using Western blotting. An increased level of β-catenin protein was observed in FliiTg/Tg cells, accompanied by increased levels of SOX9, when compared to Flii+/− and WT counterparts (Figure 1D and Figure S2).
Altering Flii Gene Expression Does Not Impact the Numbers of ∆Np63+K15+ EpSCs
EpSCs reside along the basal layer of the developing epidermis and express ∆Np63 and K15 in both interfollicular and follicular regions [13]. Flii homozygous knockout (Flii−/−) mice are embryonic lethal [25]. Therefore, to investigate if the developmental preferences in embryonic epidermis brought about by differential Flii expression affect EpSC numbers, the proportion of ∆Np63+K15+ cells was first assessed within the entire population of epidermal cells in developing skin of Flii+/−, WT and FliiTg/Tg mice. The Flii protein levels have been extensively characterised in these mice in our previous studies, demonstrating a 0.5-fold reduction in Flii+/− mice and a 1.5-fold increase in Flii expression in FliiTg/Tg mice [8,31,34]. Flii expression during development was confirmed to be significantly lower in Flii+/− epidermis at E16 when compared to WT and FliiTg/Tg counterparts, while Flii over-expression was observed in FliiTg/Tg epidermis at E17 and E19 when compared to WT and Flii+/− counterparts, peaking at E19 (Figure 2A,B). ∆Np63+K15+ EpSCs were found in the basal layer and hair follicles of the developing epidermis in all strains. The percentage of EpSCs remained between 36.7 and 38.6% during late embryonic development (E16, E17, E19) in wild-type mice, rising to 55.6% at P21. Altering the level of Flii expression did not significantly change the percentage of EpSCs within the epidermis (Figure 2C).
Overexpression of Flii Results in Proliferating Cells Preferentially Undergoing Symmetrical Division While Reduced Flii Promotes Asymmetrical Division
To further visualise the difference in cell division pattern brought about by differential Flii expression, the mitotic division plane of proliferating cells was assessed in the developing epidermis within the proliferating basal layer where ∆Np63+K15+ EpSCs reside. PCNA+ cells in the skin sections were co-localized with phospho-histone H3 (pH3) and γ-tubulin (γ-tub) to capture the dividing chromosomes and corresponding centrosomes, respectively (Figures 3A–C and S1). The division orientation was determined from an average of 100 cells per group by assessing the direction of the centrosome axis against the basement membrane as asymmetric (perpendicular to the basement membrane), symmetric (parallel to the basement membrane) or uncategorized division (Figure 3D) in both follicular and interfollicular regions (Figure 3A). In WT mice at E16, when the stratification process was commencing, 42.6% of cell divisions were classified as asymmetric while 19.6% of divisions were symmetric (Figure 3E). At E17, even fewer symmetric divisions were observed (7.3%), with the predominant form of cell division being asymmetric. However, by P21, when the epidermis was fully functional, the ratio of asymmetric to symmetric division was close to 1:1 (31.4% and 28.1%, respectively) (Figure 3F). Altering Flii gene expression had minimal effect on cell division orientation at E16, but significant changes were observed at E17 and E19. Overexpression of Flii (FliiTg/Tg) led to a decrease in the numbers of cells undergoing asymmetric division at E17 (31.1% FliiTg/Tg vs. 40.8% WT) while a 3-fold increase in symmetric cell division was observed (21.8% FliiTg/Tg vs. 7.3% WT) (Figure 3E,F). This corresponded with a significantly thinner epidermis at E17 compared to WT (Figure 4A,D). When Flii levels were reduced (Flii+/−), an increase in asymmetric divisions was observed (48.6% Flii+/− vs. 40.8% WT) at E17 (Figure 3E), while the number of symmetric divisions remained similar to that observed in WT mice (7.3%) (Figure 3F). This corresponded with a significantly thicker and more stratified epidermis at E17 compared to WT (Figure 4A,D). At E19, the percentage of symmetric divisions remained low (11.0%) in both WT and Flii+/− epidermis, but higher numbers of symmetric divisions (19.4%) were again observed in FliiTg/Tg epidermis (Figure 3F). Consistently, a thinner and less stratified epidermis was observed in FliiTg/Tg mice at E17 and E19 (Figure 4A,D). No significant differences were observed in asymmetric divisions between the three genotypes at E19 (Figure 3E). By P21, an adolescent stage when epidermal compartments are fully matured, the percentages of asymmetric and symmetric divisions were similar, both reaching about 30%, with no significant differences observed between the three genotypes (Figure 3E,F).
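The orientation call described here, centrosome axis versus local basement membrane, reduces to a single angle computation. The sketch below is a hypothetical implementation; the 30° tolerance is an assumption (a common convention treats 0–30° as parallel/symmetric and 60–90° as perpendicular/asymmetric), as the angular cut-offs are not stated in the text.

```python
import numpy as np

def classify_division(centrosome_a, centrosome_b, membrane_tangent, tol_deg=30.0):
    """Classify a mitotic division relative to the basement membrane.

    centrosome_a, centrosome_b: (x, y) positions of the gamma-tubulin-marked
    centrosomes; membrane_tangent: (x, y) vector along the local basement
    membrane. Returns 'symmetric' (parallel), 'asymmetric' (perpendicular)
    or 'uncategorised'.
    """
    axis = np.asarray(centrosome_b, float) - np.asarray(centrosome_a, float)
    tangent = np.asarray(membrane_tangent, float)
    cos_angle = abs(axis @ tangent) / (np.linalg.norm(axis) * np.linalg.norm(tangent))
    angle = np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))  # 0..90 degrees
    if angle <= tol_deg:
        return "symmetric"        # parallel to the basement membrane
    if angle >= 90.0 - tol_deg:
        return "asymmetric"       # perpendicular to the basement membrane
    return "uncategorised"
```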
Differential Flii Expression Directs Morphogenesis between Stratified Epidermis and Hair Follicles during Late Embryonic Progression
Using skin from the late embryonic and juvenile stages of Flii heterozygous (Flii+/−), wild-type (WT) and Flii-overexpressing (FliiTg/Tg) mice, we investigated the effect of Flii on the development of major epidermal components, including the stratified epidermis and hair follicles. The number of emerging HFs, the depth of the HFs and epidermal thickness were analysed to assess the effect of Flii expression on the developmental preference of epidermal components. The formation of placodes was observed in the embryonic skin from all three genotypes of Flii mice at E16 (Figure 4A). The number of emerging placodes was significantly lower in Flii+/− skin when compared to WT and FliiTg/Tg counterparts, while epidermal thickness was significantly decreased in FliiTg/Tg skin when compared to Flii+/− and WT counterparts (Figure 4B,D). However, no significant difference in the depth of the emerging placodes was observed between the three genotypes at E16 (Figure 4C). These developmental preferences continued at E17, where the number of hair pegs was significantly higher in FliiTg/Tg skin when compared to the Flii+/− counterpart, with the depth of hair pegs increased and epidermal thickness lessened in FliiTg/Tg skin when compared to Flii+/− and WT counterparts (Figure 4B–D). By E19, the number of hair pegs remained significantly lower in Flii+/− skin when compared to WT and FliiTg/Tg counterparts. The depth of hair pegs was lowest in Flii+/− skin, intermediate in WT and highest in FliiTg/Tg skin, while epidermal thickness remained lower in FliiTg/Tg skin when compared to Flii+/− and WT counterparts (Figure 4B–D). By P21, the differences in the number of hair follicles and epidermal thickness were no longer significant, while the depth of hair follicles remained lower in Flii+/− skin when compared to WT and FliiTg/Tg counterparts (Figure 4B–D).
Figure 4. (B) Hair follicle (including its infantile and juvenile forms as placodes and hair pegs) quantity was determined by measuring the number of developing hair follicles per length of skin. (C) Hair follicle (including its infantile and juvenile forms as placodes and hair pegs) depth was determined by measuring the distance between the basement membrane (BM) and dermal papilla (DP). (D) Epidermal thickness was determined by measuring the distance between the BM and stratum granulosum in interfollicular regions across a consistent length of epidermis at E16, E17, E19 and P21. N = 6. Mean ± SEM. * p < 0.05. ** p < 0.005.
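The three read-outs in this caption (follicle count per length of skin, BM-to-DP depth, BM-to-stratum-granulosum thickness) amount to simple summary statistics once the landmarks are annotated. The sketch below, with hypothetical argument names, shows the arithmetic.

```python
import numpy as np

def skin_morphometrics(n_follicles, section_length_um, dp_depths_um, bm_to_sg_um):
    """Summarise the three morphometric read-outs from the Figure 4 caption.

    n_follicles: count of developing follicles in the section.
    section_length_um: length of skin analysed, in micrometres.
    dp_depths_um: per-follicle BM-to-dermal-papilla distances.
    bm_to_sg_um: interfollicular BM-to-stratum-granulosum distances.
    """
    return {
        "follicles_per_mm": n_follicles / (section_length_um / 1000.0),
        "follicle_depth_um": float(np.mean(dp_depths_um)) if len(dp_depths_um) else float("nan"),
        "epidermal_thickness_um": float(np.mean(bm_to_sg_um)),
    }
```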
Overexpression of Flii Results in Increased SOX9 Expression and Elevated Epidermal Cell Proliferation
Given that overexpression of Flii induced more HFs yet the proportion of progenitor cells remained unchanged, we next investigated if the cell division preferences in the proliferating basal layer and HFs, where ∆Np63+K15+ EpSCs reside, exhibited any differences. SOX9 has previously been shown to promote HF development, symmetrical cell division and self-renewal of stem cells during development [7]. To determine if altering Flii gene expression during epidermal development affects this downstream target of β-cat signalling, the number of SOX9+ cells was assessed in mice with differential expression of Flii. SOX9+ cells within the epidermis steadily increased during this developmental period (Figure 5A,B). While the numbers of SOX9+ cells were similar in all three genotypes at E16 (Figure 5A,B), the number of SOX9+ cells significantly increased in FliiTg/Tg skin from E17 onwards when compared to Flii+/− and WT counterparts (Figure 5B). At E19, a significant difference in the number of SOX9+ cells was also observed between Flii+/− and WT mice in the epidermis, with the lowest number of positive cells in Flii+/− epidermis, intermediate in WT and highest in FliiTg/Tg epidermis (Figure 5B). By P21, the difference in SOX9+ cell numbers diminished between Flii+/− and WT epidermis, while the highest number of SOX9+ cells remained in FliiTg/Tg epidermis (Figure 5B). Previous studies have described a role for SOX9 in promoting cell proliferation, so we investigated if proliferating cell nuclear antigen (PCNA) expression differed between Flii mouse genotypes during epidermal development (Figure 5A,C). Similar numbers of proliferating cells were detected between the three genotypes at E16, with the numbers steadily increasing from E16 to E19 (Figure 5C). At E17, a significant increase in the number of proliferating cells was observed in FliiTg/Tg epidermis when compared to Flii+/− and WT counterparts, with most of the increase observed in the follicular epidermis (Figure 5C). The number of proliferating cells peaked in the Flii+/− and WT epidermis at E19, with WT epidermis containing a similar number of proliferating cells to FliiTg/Tg epidermis while Flii+/− epidermis consistently showed the fewest (Figure 5C). By P21, the number of proliferating cells decreased in the epidermis in all three genotypes of mice, with no significant difference observed between the groups (Figure 5C).
Overexpression of Flii Results in Increased Epidermal Flap1 and β-Cat Expression during Late Embryonic Development
Flap1, an antagonist of Flii, is an important enhancer of β-cat stabilization and β-cat/LEF1/TCF-activated transcription [30]. To determine whether Flii affected Flap1 expression and β-cat stabilization in the developing epidermis, the expression of these two proteins was measured as the number of basal cells positive for Flap1 or β-cat, respectively, in the skin of Flii+/−, WT and FliiTg/Tg mice. Consistent with previous findings where Flap1 was detected in keratinocytes and fibroblasts [35], embryonic Flap1 expression was detected throughout the whole skin, with apparent nuclear and cytoplasmic expression in the epidermis including the interfollicular and follicular regions (Figure 6A,B). At E16, the cytoplasmic expression of Flap1 was weak, while the number of cells with basal nuclear Flap1 expression was highest in FliiTg/Tg epidermis, intermediate in WT and lowest in Flii+/− epidermis (Figure 6A–C). From E17 to E19, Flap1 expression was mainly detected in nuclear form, with no apparent difference in the number of positive cells between the three genotypes. By P21, the number of cells with basal nuclear Flap1 expression reached its highest level, at approximately 75% of basal cells, in all three genotypes (Figure 6A,B). β-cat staining revealed predominant expression at cell membrane junctions throughout the epidermis and nuclear expression in basal epidermal cells from E16 (Figure 6A,C). The number of β-cat-positive cells was lowest in Flii+/− epidermis, intermediate in WT and highest in FliiTg/Tg epidermis at both E16 and E17, with similar numbers at each embryonic day within each genotype (Figure 6A,C). β-cat expression was relatively stable throughout epidermal development in both WT and Flii+/− mice until an observed decrease at P21; however, FliiTg/Tg epidermis displayed a significant drop in β-cat expression during development yet still had a significantly increased number of β-cat-positive basal cells at P21 compared to WT and Flii+/− counterparts (Figure 6A,C).
Reduced Flii Levels Lead to Increased Epidermal Cell Differentiation during Late Embryonic Development
∆Np63 expression is an indicator of the proliferative potential of dividing epidermal cells during asymmetric division [6,36]. ∆Np63 expression was significantly elevated in Flii +/− epidermis, which exhibited high numbers of asymmetric divisions at E17 (Figure 7A,B). While no significant difference in ∆Np63 expression was found at E16 or E19 between the genotypes, ∆Np63 expression was significantly higher in Flii +/− epidermis when compared to WT and Flii Tg/Tg counterparts at P21 (Figure 7B). Expression of the differentiation markers keratin 1 (K1) and keratin 14 (K14) distinguishes the mature suprabasal and basal epidermis, respectively. These markers were examined in the epidermis of Flii +/− , WT and Flii Tg/Tg mice (Figure 7A,C,D). Flii overexpression led to decreased suprabasal K1 expression at E17 and E19 (Figure 7A,C); K1 expression in Flii +/− epidermis was consistently higher than that observed in WT and Flii Tg/Tg mice (Figure 7C). Consistent with a previous study in which K14 expression was found to be ubiquitous in embryonic epidermis, indicating the progenitor-like potential of developing epidermal cells [37], epidermal K14 expression was elevated in Flii Tg/Tg mice from E16 to E19 (Figure 7D). By P21, K1 expression was maintained at the same level as during epidermal development, while K14 expression was significantly reduced in all genotypes (Figure 7C,D).
Reduced Flii Results in Increased Talin, Activated-Itgb1 and Par3 Expression in the Epidermis during Late Embryonic Development
Integrin-dependent cell adhesion at the basement membrane plays a positive role in the differentiation of epithelial cells. Acting in concert with the cell polarity regulator Par complex, integrins have been shown to induce ACD and alter the outcome of cell lineage in adult epithelial stem cells [20]. Talin, an important integrin-binding protein and Itgb1 activator, was previously identified as a Flii binding partner [26]. To determine whether Flii affects the integrin-dependent mechanism required for asymmetric division in the developing epidermis, the expression of talin, activated-Itgb1 and Par3 was measured by fluorescence intensity in the skin of Flii +/− , WT and Flii Tg/Tg mice during epidermal development (Figure 8A-D). Talin and Itgb1 expression were detected throughout the whole skin, with clear expression within the epidermis including the interfollicular and follicular regions (Figure 8A). Epidermal talin and activated-Itgb1 expression were maintained at a similar level from E16 to P21 in all three genotypes of mice. While no significant difference in talin or activated-Itgb1 expression was observed at E16 between the three genotypes (Figure 8B,C), Flii +/− epidermis had significantly higher talin and activated-Itgb1 expression at E17 when compared to WT and Flii Tg/Tg counterparts, with activated-Itgb1 expression being lowest in Flii Tg/Tg epidermis (Figure 8A-C), in line with previous findings in adult wounded skin [26]. By E19, talin expression remained highest in Flii +/− epidermis when compared to WT and Flii Tg/Tg counterparts, while the differences in epidermal Itgb1 expression between the three genotypes could no longer be seen. By P21, no significant difference in either talin or Itgb1 expression was found between the three genotypes. Par3 expression was found mainly in the epidermis, with much higher expression during development than in adolescent skin. Unlike for talin and Itgb1, the dose-dependent effect of Flii gene expression on Par3 expression and cell polarity started earlier than E17. At E16, Flii +/− epidermis showed significantly increased Par3 expression when compared to WT and Flii Tg/Tg counterparts, and the same result was found at E17 (Figure 8A,D). By E19, no significant difference in Par3 expression was observed between the three genotypes. At P21, an elevated level of Par3 expression was re-established in Flii +/− epidermis when compared to WT and Flii Tg/Tg counterparts.
Discussion
Previous studies have shown that the level of Flii expression can affect both wound repair and tissue regeneration, both processes which are heavily reliant on the division and differentiation of epidermal progenitor/stem cells from their inactive niches [31,33,38]. Wound repair requires EpSCs differentiation for re-establishment of the barrier function, a process contributed mainly via the ACD of the progenitor cells [39]. Tissue regeneration, however, requires EpSCs to temporarily proliferate without losing their progenitor properties, a process that depends on the SCD of the progenitor cells [39]. The molecular basis for balancing the different division outcomes is yet to be understood completely.
Consistent with our previous findings showing delayed development of epidermal skin barrier function in Flii Tg/Tg embryonic skin at E17 [8], and with reduced Flii decreasing HF regeneration in adult rodents [32], the morphological results presented in this study suggest that the majority of Flii Tg/Tg basal EpSCs follow the symmetrical cell division that drives HF formation rather than the asymmetrical cell division required for epidermal stratification.
Our results indicate that the percentage of ∆Np63 + K15 + progenitor cells does not differ in the embryonic epidermis of the three Flii genotypes, suggesting that Flii expression does not alter the developmental rate of EpSCs. As a downstream effector of β-cat dependent transcription, SOX9 is a positive modulator of SCD [7]. Overexpression of Flii led to increased SCD of proliferating basal cells, shown by elevated SOX9 and PCNA expression during epidermal development and concurrently increased expression of K14 compared to wild-type counterparts during late embryonic development. Interestingly, Flii deficiency resulted in significantly increased ACD and increased K1 expression compared to wild-type and Flii over-expressing counterparts, highlighting the important role that differential Flii levels play during epidermal development.
To better understand the effects of Flii on EpSC cell division, we examined the effects of differential levels of Flii on the expression of its signalling partner, Flap1, during epidermal development. Previous studies have shown that Flap1 directly interacts with Flii [29,30], while administration of recombinant Flap1 to wounds in vivo directly reduced Flii levels [35]. As a naturally occurring antagonist of Flii, it is possible that Flap1 is expressed to neutralize the effects of Flii expression at a specific embryonic stage of epidermal development. Consistent with this, significantly lower nuclear expression of Flap1 in the basal epidermis was found in Flii +/− embryonic skin at E16 when compared to WT mouse skin, while expression of Flap1 was significantly higher in the skin of Flii Tg/Tg mice compared to both Flii +/− and WT mice. Flap1 has been shown to interact with β-cat and lead to increased β-cat-dependent transcription activation in the nucleus, a process that promotes increased canonical Wnt pathway signalling [28]. Indeed, spatial and temporal regulation of Wnt/β-cat and β-cat-independent Wnt signalling during skin development has been shown to regulate cell division, polarity and tissue homeostasis [40].
Our results suggest that the antagonizing effects of Flap1 may have outweighed the negative effect of Flii on Wnt/β-cat signalling, resulting in elevated β-cat stabilization. Consequently, low, intermediate and high levels of nuclear β-cat were found in Flii +/− , WT and Flii Tg/Tg skin, respectively. Although Flap1 levels remained similar in all three genotypes from E17 onwards, β-cat levels continued to differ in proportion to Flap1 until E19, suggesting that β-cat stabilization takes place earlier during development and subsides by E19. Our finding of increased epidermal β-cat expression in Flii Tg/Tg skin and keratinocytes suggests an important role for Flii in the regulation of the Wnt/β-cat signalling pathway. Given that increased SOX9, an ER-regulated gene [41,42], was found in Flii Tg/Tg epidermis when compared to Flii +/− and WT counterparts, and that Flii is an estrogen receptor coactivator, it is possible that elevated nuclear Flii promotes increased SOX9 at the transcriptional level; this may also explain the increased number of PCNA + cells observed in Flii Tg/Tg embryonic mouse skin, secondary to increased SOX9 signalling that drives increased SCD. These findings are opposite to observations in colon epithelial cells, where Flii over-expression was found to inhibit Wnt/β-cat signalling [43], suggesting cell- and tissue-specific responses in Flii regulation of Wnt signalling and cell division during epidermal/epithelial development.
Apart from influencing division pattern through Flap1 and β-cat signalling in the nucleus, Flii may also exert its modulatory effect via integrin-linked mechanisms, as ACD is highly dependent on cell polarity and the relationship with the microenvironment [44]. Increased talin and Itgb1 expression were observed in Flii +/− mouse epidermis, which, together with increased Par3 levels and ACD, suggests that the dividing cells in Flii +/− epidermis adopted apical-basal polarity. Together with the significantly higher ∆Np63 and K1 expression observed in Flii +/− epidermis, these findings suggest that the differentiation potential of EpSCs is elevated when Flii is reduced during morphogenesis. Indeed, elevated levels of talin in Flii +/− epidermis may lead to the activation of Itgb1, which subsequently signals downstream effectors to enhance ACD of the proliferating EpSCs in the developing epidermis. The increased epidermal SCD in Flii over-expressing animals shown here agrees with our previous findings demonstrating slower development of an intact skin barrier during development and post injury, accompanied by slower healing, in Flii Tg/Tg mice. Future studies should explore the distribution of the epidermal growth factor receptor in Flii Tg/Tg mouse skin and investigate whether the increased SCD pattern in Flii Tg/Tg mice is an adaptation to delay the onset of cancers, as increased Flii levels have been suggested to promote skin, colon and breast cancer development [45-47].
Conclusions
Our findings provide insights into the modulatory effect of Flii on EpSCs during late embryonic development, although the exact mechanism underpinning Flii's effects on cell division pattern is yet to be identified. While reduced Flii levels promote a microenvironment supportive of increased cell polarity and subsequent ACD, high Flii levels result in increased SCD mediated via effects on the Wnt/β-cat signalling pathway. These alternative division pathways may influence EpSC differentiation and suggest that Flii may be a key regulator of epidermal cell division in specific cellular environments.
Animal Studies
All experiments and maintenance of mice were conducted according to Australian Standards for Animal Care under protocols approved by the Child Youth and Women's Health Service Animal Ethics Committee (WCHN) and carried out in accordance with the Australian code of practice for the care and use of animals for scientific purposes (AE1019/10/18). All mouse strains were congenic on the BALB/c background. Flii-deficient mice were generated by replacing an endogenous allele (Flii + ) with a null allele (Flii tm1Hdc ), with animals heterozygous for this mutation designated Flii +/− [25]. Animals with one WT copy and one mutant copy of the Flii gene express no more than 50% of normal Flii gene expression [25]. WT (wild-type) littermates of Flii +/− mice were used as WT control animals. Transgenic Flii-overexpressing mice (strain name: (Tg1FLII)2Hdc) were generated by incorporating a 17.8-kb fragment of a human cosmid clone that spans the entire FLII locus, with animals homozygous for the transgene in addition to the endogenous Flii allele designated Flii Tg/Tg . Details regarding the generation of these transgenic mouse strains have been described previously, showing elevated levels of Flii protein in various tissues including skin [34]. Upregulation of Flii protein levels was confirmed using semi-quantitative Western analysis, which showed total (mouse + human) protein levels up to 1.52-fold greater than wild-type levels [34]. Fetal skin was collected at E16, E17 and E19 of gestation and from P21 pups (n = 6). All skin was fixed in 10% formalin overnight, dehydrated through ethanol and xylene, and embedded in paraffin. Secondary antibodies (1:400, Invitrogen, Sydney, Australia) were diluted in phosphate-buffered saline and applied for detection. For detection of actin, directly conjugated Oregon Green 488 Phalloidin (1:400, Thermofisher, Sydney, Australia) was used in combination with secondary antibody. The nuclear counterstain 4′,6-diamidino-2-phenylindole (DAPI) was applied last. Stained samples were imaged, followed by measurement of grey intensity using Olympus CellSens Dimension software.
Protein Isolation and Western Blot
Protein was extracted from Flii +/− , WT and Flii Tg/Tg adult murine primary keratinocytes as described previously [26]. Following brief homogenization in lysis buffer (50 mM Tris pH 7.5, 1 mM EDTA, 50 mM NaCl, 0.5% Triton X-100) containing a protease inhibitor tablet (1 per 10 mL; Complete Mini, Roche, Indianapolis, IN, USA), samples were centrifuged and supernatants collected. In total, 5 mg of protein was run on 10% SDS-PAGE gels at 100 V for 1 h and transferred to nitrocellulose membrane using standard Towbin buffer with 20% methanol at 100 V for 1 h. Following blocking in 15% milk-blocking buffer for 10 min, primary antibodies against Flii (1:200, Santa Cruz Biotechnology, Dallas, TX, USA), β-catenin (1:200, Santa Cruz Biotechnology, Dallas, TX, USA) and SOX9 (1:300, Abcam, Cambridge, UK) were diluted in buffer and applied to the membrane at 4 °C overnight. Species-specific secondary horseradish peroxidase-conjugated antibodies were diluted in 5% milk-blocking buffer and applied to the membrane at room temperature for 1 h. Protein bands were detected using Super Signal West Femto (Pierce Biotechnology, Rockford, IL, USA) and visualized with GeneSys analysis software (Syngene, Frederick, MD, USA).
Data Collection and Statistical Analysis
Grey intensity was measured in epidermal regions including the interfollicular epidermis and HFs, with the background intensity subtracted before statistical analysis. Positive cells were counted in the basal layer of the epidermis and the HF compartments, and the percentage of positive cells was calculated as the number of positive cells over the total number of cells in those regions. In vivo assessment of division direction was based on an average of 100 proliferating cells in 6 individual mice per genotype at each embryonic stage. The BrdU pulse-chase assay was assessed on an average of 40 proliferating cells from 9 random regions per slide per experimental group. Data were analysed using Student's t-test to compare between two groups. A p-value of <0.05 was considered significant.
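As a minimal illustration of the quantification and comparison described above, the following Python sketch computes the percentage of positive cells and compares two groups with Student's t-test. All counts and helper names are hypothetical; the original measurements were made with Olympus CellSens Dimension and analysed with standard statistical software.

```python
from statistics import mean
from scipy import stats  # assumed available; any t-test implementation would do

def corrected_intensity(region_intensities, background_intensity):
    """Mean grey intensity of a region with the background level subtracted."""
    return mean(region_intensities) - background_intensity

def percent_positive(n_positive, n_total):
    """Percentage of marker-positive cells among all counted cells."""
    return 100.0 * n_positive / n_total

# Hypothetical per-mouse counts (positive, total) for two genotypes, 6 mice each.
wt = [percent_positive(p, t) for p, t in
      [(42, 100), (38, 95), (45, 110), (40, 102), (37, 98), (44, 105)]]
tg = [percent_positive(p, t) for p, t in
      [(55, 100), (58, 104), (52, 99), (60, 108), (57, 101), (54, 97)]]

# Two-group comparison as in the study; p < 0.05 considered significant.
t_stat, p_value = stats.ttest_ind(wt, tg)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```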
Institutional Review Board Statement: Animal ethics protocols used in the study were approved (AE1019/10/18) by the Child Youth and Women's Health Service Animal Ethics Committee in Adelaide, South Australia.
Informed Consent Statement: All authors gave consent to publishing the information in this manuscript. All authors will provide copies of signed consent forms to the journal editorial office if requested.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author, AJC, upon reasonable request.
"year": 2021,
"sha1": "06614bf22093025b1d301ae9a957a22e667e4848",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/15/8235/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "49c8d2a61f55c0e484a299dd63315bc9a681b38f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Prolidase activity in chronic plaque psoriasis patients
Introduction: Psoriasis is a chronic, inflammatory, T-cell-mediated and hyperproliferative skin disease characterized by erythematous, squamous, sharply circumscribed and infiltrated plaques. The metabolism of collagen proteins undergoes considerable changes in psoriasis patients because collagen turnover accelerates as prolidase activity increases. Aim: To determine the level of prolidase activity in psoriasis patients and evaluate its relationship with the oxidative system. Material and methods: The serum prolidase enzyme activity, total antioxidant levels and total oxidant levels of 40 psoriasis patients and a control group of 47 healthy individuals were analyzed using serum samples, and their oxidative stress indices were calculated. Results: The prolidase levels (p < 0.01), total oxidant levels (p < 0.01) and oxidative stress index levels (p < 0.001) of the patient group were higher than the corresponding parameters in the control group. The total antioxidant level was low (p < 0.01). Although a positive correlation was found between the prolidase level and the total antioxidant and total oxidant levels, no correlation was found between prolidase and the oxidative stress index. Conclusions: The activity of the prolidase enzyme increases due to the increased collagen turnover in psoriasis patients. Increased serum oxidant levels and oxidative stress index values may play a role in the pathogenesis of psoriasis.
Introduction
Psoriasis is a frequently observed chronic, recurrent inflammatory disease that may affect the joints and the skin. Its frequency varies between 1% and 3%. Despite many etiological studies, its cause remains unknown. One idea that has been accepted in recent years suggests that psoriasis is an autoimmune inflammatory disease characterized by the secondary keratinocyte multiplication of the lymphocytes active in the dermis and epidermis. However, the sequence of the activation relationship between keratinocytes and immune cells has not been determined. Information regarding the important role of T cells in the pathogenesis of psoriasis increases every day. The disease progresses with the development of papules and plaques on an itchy and erythematous pearl-like squamous surface [1,2].
The skin is constantly exposed to ultraviolet (UV) radiation, and thus reactive oxygen species (ROS) production occurs [3]. This production may be endogenous, such as radicals generated by nicotinamide adenine dinucleotide phosphate (NADPH) oxidase, xanthine oxidase, lipoxygenase and nitric oxide synthase as a result of active neutrophil or enzyme activation, or it may be exogenous, such as that caused by UV rays, atmospheric gases, microorganisms, pollution and xenobiotic agents, which are pro-oxidative stimulators [4,5]. Reactive oxygen species created as a result of the body's normal metabolism are neutralized by antioxidants, the body's defense mechanism. These processes maintain the normal oxidant/antioxidant balance, and if this balance is disturbed in favor of the oxidants, oxidative stress results [6-8]. The resultant ROS induce lipid peroxidation, DNA modification and inflammatory cytokine release [6,9].
In psoriasis, reactive oxygen products and lipid peroxidation increase due to a rise in the quantity of leukocytes. Psoriasis is related to a large number of biochemical and immunological disorders. Recently, it has been suggested that increased ROS production and compromised antioxidant system function may play a role in the pathogenesis of psoriasis [9].
Collagen provides the foundation of the connective tissue structure necessary for inflammation, cell movement, wound healing, trophoblast implantation and fetal development. It is believed that prolidase activity is directly related to the collagen turnover rate because prolidase is the only enzyme that breaks the proline-glycine peptide bond [10]. An increase in serum prolidase activity has been seen in liver diseases, malignant conditions and many diseases that progress with chronic inflammation. In psoriasis patients, the metabolism of collagen proteins undergoes a substantial change due to the acceleration of collagen turnover as a result of inflammation. Given the enzyme's extensive tissue distribution, changes in prolidase enzyme activity may be important in the development and outcome of a fairly large number of diseases. In the few studies in which prolidase enzyme activity has been evaluated in diseases characterized by chronic inflammation, it has been observed that this enzyme's activity is high due to collagen deterioration [11-13].
In this study, the possible roles in psoriasis pathogenesis of oxidative stress and of prolidase enzyme activity, which reflects the metabolism of collagen, an important component of the extracellular matrix, were studied by determining the values of the oxidant and antioxidant systems in psoriasis patients and healthy individuals.
Material and methods
Forty patients aged between 18 and 55 years were admitted to the Dermatology Clinic of the Research and Application Hospital of the Faculty of Medicine at Harran University. Work began in January 2010 and lasted 3 months. Patients with mild to severe chronic-plaque-type psoriasis with Psoriasis Area and Severity Index (PASI) values of 10 and above, as well as 47 healthy volunteers who served as the control group, were involved in this study. The criteria used for choosing the patients involved in the study were: being older than 15 years old, not being treated for any purpose, and volunteering to participate in the study. Patients with coexisting diseases, such as diabetes, neoplastic diseases, liver and kidney disorders, psychological diseases and infections; those with immune-suppressing conditions or familial hypercholesterolemia; and those with a history of major surgery were excluded from this study. Additionally, patients using medicines, such as antipsoriatics, antipsychotics, antioxidants, vitamins, diuretics, and hormone replacement treatment; smokers; and imbibers were also excluded. Healthy volunteers did not have any systemic or skin diseases.
The ages, genders, heights and weights of the patients; the course of their psoriasis; their coexisting symptoms; and their and their families' risk factors were recorded. The ages, genders, heights, weights and personal and family histories of the control group were also recorded. The body mass indices (BMI) of all participants were calculated as weight (kg)/height (m)². Blood samples were obtained from all patients and all members of the control group. Parameters such as the prolidase enzyme activity, total antioxidant capacity (TAC) and total oxidant capacity (TOC) of the serum samples were analyzed, and the corresponding oxidative stress indices (OSI) were calculated. Information regarding the study protocol was provided to all of the subjects, and written informed consent was obtained from the participants or their parents. The study was approved by the Ethics Committee of the Faculty of Medicine of Harran University.
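For clarity, the BMI computation reduces to a one-line function; the weight and height below are illustrative values only, not data from the study.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

print(round(bmi(72.0, 1.70), 2))  # 24.91 kg/m2, in the range of the reported group means
```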
Measurement of serum prolidase activity
The measurement of prolidase activity was performed via the modified Chinard method. Serum prolidase activity was measured based on the principle that proline, released through the action of the prolidase enzyme on the substrate glycyl-proline, forms a colored compound with ninhydrin when heated in an acidic environment. The intensity of the color depends on the concentration of proline and is measured spectrophotometrically. Free proline was measured spectrophotometrically via the modified (optimised) Chinard method [14-17].
Measurement of total antioxidant capacity
The TAC level of the serum was measured with an auto-analyzer (Aeroset ® , Abbott ® , IL, USA) using a commercial Rel Assay kit based on the method developed by Erel. In this assay, the Fe 2+ -o-dianisidine complex reacts with hydrogen peroxide to produce OH radicals via a Fenton reaction. These reactive oxygen species oxidize colorless o-dianisidine molecules to yellow-brown dianisidine radicals at low pH, and the dianisidine radicals amplify the color by participating in further oxidation reactions. Antioxidants in the sample suppress these oxidation reactions and thus the color formation. The results were provided after measurement of the reaction at 240 nm with the automatic analyzer. Trolox, a water-soluble analogue of vitamin E, was used as a calibrator, and the results were reported in mmol Trolox equivalent/l [18].
Measurement of total oxidant capacity
The TOC level of the serum was measured with an auto-analyzer (Aeroset ® ) using a commercial Rel Assay kit (Gaziantep, Turkey) based on the method developed by Erel. Oxidants in the sample oxidize the ferrous ion-o-dianisidine complex to ferric ion; glycerol accelerates this reaction threefold. The ferric ions form a colored complex with xylenol orange in an acidic environment. The color intensity of the sample, which depends on the amount of oxidant present, was measured spectrophotometrically. Hydrogen peroxide (H2O2) was used as a standard, and the results were reported in μmol H2O2 equivalent/l [19].
Measurement of oxidative stress index
The TOC/TAC ratio provides the OSI, which is an indicator of the degree of oxidative stress. The TAC, reported in mmol Trolox equivalent/l, was converted to μmol Trolox equivalent/l, and the OSI value was calculated using the following formula: OSI (arbitrary unit) = TOC (μmol H2O2 equivalent/l)/[10 × TAC (mmol Trolox equivalent/l)] [20].
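To make the unit handling in this formula explicit, a minimal sketch of the OSI calculation in Python is shown below; the function name is ours, and the sample inputs are the reported control-group means.

```python
def oxidative_stress_index(toc_umol_h2o2_per_l: float,
                           tac_mmol_trolox_per_l: float) -> float:
    """OSI (arbitrary units) = TOC (umol H2O2 eq/l) / [10 x TAC (mmol Trolox eq/l)].

    Dividing by 10 x TAC in mmol/l is equivalent to converting TAC to
    umol/l and multiplying the TOC/TAC ratio by 100.
    """
    return toc_umol_h2o2_per_l / (10.0 * tac_mmol_trolox_per_l)

# Reported control-group means: TOC = 10.89, TAC = 1.18
print(round(oxidative_stress_index(10.89, 1.18), 2))  # 0.92, close to the reported 0.93
```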
Statistical analysis
All analyses were conducted using the SPSS statistical program (Version 11.5 for Windows; SPSS, Chicago, IL, USA). The normality of the distributions was evaluated via the Kolmogorov-Smirnov test for the data set. The comparison between patients and controls was conducted using the independent t-test for normally distributed data and the Mann-Whitney U test for non-normally distributed data. Results were expressed as means ± standard deviations.
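The test-selection logic described above can be sketched in Python with SciPy; this is an illustration under our own assumptions (the study used SPSS 11.5), and z-scoring before the Kolmogorov-Smirnov test is only an approximation of a normality check on data with estimated parameters.

```python
import numpy as np
from scipy import stats

def compare_groups(patients, controls, alpha=0.05):
    """Use the independent t-test when both samples look normal,
    otherwise fall back to the Mann-Whitney U test."""
    normal = all(
        stats.kstest(stats.zscore(g), "norm").pvalue > alpha
        for g in (patients, controls)
    )
    if normal:
        return stats.ttest_ind(patients, controls)
    return stats.mannwhitneyu(patients, controls)

# Synthetic samples drawn around the reported prolidase means (illustrative only).
rng = np.random.default_rng(0)
patients = rng.normal(699.11, 9.92, 40)
controls = rng.normal(694.03, 8.62, 47)
print(compare_groups(patients, controls))
```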
Results
Forty-five percent (n = 18) of the psoriasis patients (n = 40) were female, and 55% (n = 22) were male. Fifty-one percent (n = 24) of the control group (n = 47) were female, and 49% (n = 23) were male. The average ages of the patient group and the control group were 37.90 ±10.75 and 36.60 ±8.29 years, respectively. The mean BMI was 25.07 ±4.41 kg/m2 for the patient group and 25.21 ±4.00 kg/m2 for the control group, and the mean PASI score of the patient group was 33.95 ±15.26. Statistically, no significant differences were observed between the two groups with respect to their BMIs, ages and genders (Table 1).
The average prolidase level of the psoriasis patients was 699.11 ±9.92, compared with 694.03 ±8.62 in the healthy controls (p < 0.01); a box-plot comparison of the two groups showed a statistically significant increase in the prolidase levels of the patient group (Figure 1). The average total oxidant capacity was 12.05 ±2.66 in the psoriasis patients and 10.89 ±1.49 in the healthy controls (p < 0.01), with the box-plot comparison again showing an increase in the patient group (Figure 2). The average total antioxidant capacity was 1.09 ±0.13 in the psoriasis patients and 1.18 ±0.20 in the healthy controls (p < 0.01), with a corresponding decrease in the patient group on the box-plot comparison (Figure 3). The average oxidative stress index was significantly higher in the psoriasis patients than in the healthy controls (0.93 ±0.17) (p < 0.001) (Table 2), with a significant increase in the patient group evident on the box-plot comparison (Table 3, Figure 4).
Discussion
It is not known to which clinical and pathological events resulting from psoriasis the changes in collagen metabolism are related. To determine the role of prolidase enzyme activity in psoriasis patients, detailed, more extensive studies are required. The high prolidase level observed in the patients in our study is a substantial biochemical parameter: it reflects collagen turnover and the resulting rise in the metabolic rate.
Prolidase is directly related to the collagen turnover rate because prolidase is the only enzyme that breaks the peptide bond between proline and glycine [10]. Additionally, prolidase enzyme activity has been determined to be high due to collagen deterioration in diseases characterized by chronic inflammation [11-13]. The metabolism of collagen proteins undergoes an important change as a result of turnover acceleration due to inflammation in psoriasis patients. The clinical and pathological events resulting from psoriasis to which these changes in collagen metabolism are related are not apparent. In the study conducted by Güven et al., serum prolidase activity was determined to be higher in psoriasis patients than in the control group [40]. The high prolidase levels observed in our study lead us to believe that collagen turnover and, as a result, metabolic rate increase in cases of psoriasis.
It is believed that in addition to genetic predisposition, ROS and ROS-mediated oxidative stress may play a role in the pathogenesis of inflammatory skin diseases, such as psoriasis [41]. It is also believed that the ROS produced by keratinocytes, fibroblasts and endothelial cells cause neutrophil chemotaxis and, therefore, the production of superoxide in phagocytic reactions as a result of neutrophil accumulation in psoriatic lesions [42]. An increase in ROS results in lipid peroxidation [43]. Karababa et al. have indicated that increases in serum TOC and OSI are accompanied by decreases in serum TAC levels in psoriasis patients [44]. Gabr et al. and Hashemi et al. have observed lower TAC levels in psoriasis patients than in control groups. It has been indicated that an increase in the serum oxidant levels in psoriasis patients is accompanied by a decline in serum antioxidants [44-47]. The increased serum TOC levels and OSI values, as well as the decreased serum TAC levels, observed in our study match the information provided in the literature.
Conclusions
The acceleration of collagen turnover (breakdown and de novo synthesis) in psoriasis patients, and therefore the increase in prolidase enzyme activity in cases of psoriasis, has been shown. The convenient measurement of serum prolidase activity and the absence of large variations in this enzyme's activity in adults make it a non-invasive biochemical indicator for the evaluation of collagen tissue damage in psoriasis patients. Prolidase alone may not be able to provide clinicians with information regarding the effects of psoriasis, and it should be evaluated along with other biochemical indicators. In addition, the increased serum oxidant levels and OSI values may play a role in the pathogenesis of psoriasis. However, further exhaustive investigations are required to support these results.

Table 3. Correlations between TAC, TOC, OSI, and prolidase in patients with chronic plaque psoriasis
"year": 2015,
"sha1": "2e755150be9d2e0d3013d6e55f676c99adc9a0dc",
"oa_license": "CCBYNCND",
"oa_url": "https://www.termedia.pl/Journal/-7/pdf-24299-10?filename=prolidase%20activity.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e755150be9d2e0d3013d6e55f676c99adc9a0dc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Learning from disease registries during a pandemic: Moving toward an international federation of patient registries
High-quality dermatology patient registries often require considerable time to develop and produce meaningful data. Development time is influenced by registry complexity and regulatory hurdles that vary significantly nationally and institutionally. The rapid emergence of the coronavirus disease 2019 (COVID-19) global pandemic has challenged health services in an unprecedented manner. Mobilization of the dermatology community in response has included rapid development and deployment of multiple, partially harmonized, international patient registries, reinventing established patient registry timelines. Partnership with patient organizations has demonstrated the critical nature of inclusive patient involvement. This global effort has demonstrated the value, capacity, and necessity for the dermatology community to adopt a more cohesive approach to patient registry development and data sharing that can lead to myriad benefits. These include improved utilization of limited resources, increased data interoperability, improved ability to rapidly collect meaningful data, and shortened response times to generate real-world evidence. We call on the global dermatology community to support the development of an international federation of patient registries to consolidate and operationalize the lessons learned during this pandemic. This will provide an enduring means of applying this knowledge to the maintenance and development of sustainable, coherent, and impactful patient registries of benefit now and in the future.
Introduction
In the hierarchy of evidence-based medicine, randomized controlled clinical trials are accepted as the standard for confirming the safety and efficacy of treatments to guide clinical practice. Although rare events may be encountered serendipitously, the stringent inclusion criteria of clinical trials exclude patients with significant comorbidities, and trials are not powered to detect rare adverse events encountered in the "real world." Although spontaneous reporting schemes, such as the Medicines and Healthcare products Regulatory Agency Yellow Card Scheme in the United Kingdom, can detect adverse reactions to medications post-marketing, patient registries reflect "real-world" evidence more closely. [1-3] With large numbers of participants and long-term follow-up, registries are better suited to detect rare adverse drug events. The "real-world" data that registries collect also describe a wider range of disease severities; off-label use, including combination therapies specifically excluded in randomized controlled trials; and the natural history of diseases as comparators. Registries are also ideally placed to identify cohorts of potential clinical trial candidates and to enable pharmacoeconomic evaluations.
Broad, inclusive projects, such as patient registries, that capture diverse data can be resource intensive. Incrementally increasing data security and privacy regulatory requirements add strain in an age of ever-evolving global connectivity. Patient registries often develop as silos, created to address region-specific nuances and experiences. This pattern of development typically results in poorly harmonized datasets across different countries. [4-7] With high-quality patient registries and time to identify and incorporate diverse datasets, this lack of data interoperability can at times be rectified. When a pandemic strikes, a time when coherence and speed are at a premium, these weaknesses are exposed. Valuable information can be lost that might otherwise have benefited patients and the global medical community.
We briefly review the current state of dermatology patient registries and consider how we can evolve to become pandemic ready and maximize the reach and value of "realworld" data at a time when efficient use of limited resources is particularly important.
Patient registries: international collaboration and data set harmonization
Although patient registries have existed for many years, their definition has evolved and is perhaps most robustly described as: "an organized system that uses observational study methods to collect uniform data (clinical and other) to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serves one or more predetermined scientific, clinical, or policy purposes. A registry database is a file (or files) derived from the registry." 4 The benefit of patient registries is well recognized. The real-world evidence they generate can identify best clinical practice to improve outcomes and health care value. For example, data from the Swedish Hip Arthroplasty Register, when compared with the hip revision burden of the United States between the years 2000 and 2009, were estimated to have resulted in the avoidance of approximately 7,500 hip revisions in Sweden during the same period. 8 Sweden achieved this by using the registry data to identify the best clinical practices and the most suitable implants, resulting in one of the lowest revision rates worldwide. The capacity of patient registries to register large numbers of patients has also been recognized as a critical component of rare disease care and of identifying rare side effects of medications. Efalizumab, a humanized, recombinant, monoclonal IgG1 antibody, demonstrated considerable efficacy in the treatment of psoriasis in what was, at the time, the "longest continuous study using a biologic therapy for psoriasis." 9 Despite following 339 patients for up to 33 months, progressive multifocal leukoencephalopathy was not identified. This rare but serious adverse effect, for which efalizumab was ultimately withdrawn after reporting by the Yellow Card Scheme in the United Kingdom, was only identified after spontaneous reporting of one suspected and three confirmed cases, by which time more than 46,000 patients had been exposed to the medication. 10 Evaluation of the long-term safety of biologic therapies in psoriasis, without reliance on spontaneous reporting and randomized controlled trials alone, was the primary reason for the establishment of a number of national registries. 11,12 Since its origination in 2005, the collaborative network PSONET (European Registry for Psoriasis, http://psonet.eu) has linked such independent registries of patients with psoriasis receiving systemic medications to monitor the long-term safety and effectiveness of therapy. 12 The value of patient registries has been recognized at the governmental level. In the United States, the Department of Health and Human Services, through the Agency for Healthcare Research and Quality (AHRQ), produces comprehensive registry development and maintenance guidelines. 4 In the European Union, registries have been identified as "key instruments for developing rare disease (RD) clinical research, improving patient care and health service (HS) planning," resulting in the funding of the European Platform for Rare Disease Registries (EPIRARE) project "to improve standardization and data comparability among patient registries and to support new registries and data collections." 5 The PAtient Registries iNiTiative (PARENT) joint action also received significant funding to identify best-practice registry development, producing, among other deliverables, "Methodological guidelines and recommendations for efficient and rational governance of patient registries." 7
The European Medicines Agency has also recognized the value of using patient registries and their networks of stakeholders in facilitating Health Technology Assessment. This resulted in the development of a cross-committee task force to facilitate harmonization of data collected in disease registries and to encourage the use of existing patient registries "to measure the safety and efficacy of medicinal products in routine clinical practice." 13,14 The value of patient registries to the dermatology community has become increasingly apparent, generating an ever-expanding volume of real-world evidence. Patient registries, such as the British Association of Dermatologists Biologics and Immunomodulators Register (BADBIR; United Kingdom and Republic of Ireland; http://badbir.org/) and BIOBADADERM (Spain; https://biobadaderm), have emerged on a national level in psoriasis. Reaching across national borders, collaborations across Europe, such as the PSONET initiative for psoriasis registries and the TREatment of ATopic eczema (TREAT) registry taskforce (https://treat-registry-taskforce.org/), which has established atopic dermatitis registries in multiple European countries, aim to facilitate closer harmonization of patient data. 15,16 Additional patient registries are emerging in the rare disease area (eg, ectodermal dysplasias plus mosaic and DNA repair disorders). Patient registries for epidermolysis bullosa and hidradenitis suppurativa have existed for a number of years, [17-19] and rare disease registries are expected to grow significantly in population coverage within the European Union owing to the emerging European Reference Networks (ERNs). These represent virtual networks that connect highly specialized experts in over 900 health care units from more than 300 hospitals across 26 member states in the European Union to provide care for rare diseases. Sites within the United Kingdom, which recently left the European Union, continue to participate in ERNs. Dermatology is represented by ERN-Skin, which is currently developing a generic registry capable of capturing numerous skin conditions at a high level and sharing common data points. In addition to disease-focused registries, treatment-related international registries are in development, such as the Laser Treatments for Dermatology (LEAD) registry. 20
COVID-19 patient registries
In 2020, a novel RNA virus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), causing a disease known as coronavirus disease 2019 (COVID-19), resulted in a global pandemic that, to date, has claimed the lives of an estimated 850,000 people and infected more than 25 million. 21 At a time of unprecedented demands on physicians and health care providers, a number of new dermatology patient registries have been developed to assess the outcomes of dermatology patients with COVID-19. Ten of these registries have recently been established. 22 Many of these registries have a global reach. One is patient-facing (PsoProtectMe, https://psoprotectme.org/), one has both patient and physician entry options (Global Hidradenitis Suppurativa COVID-19 Registry, https://hscovid.ucsf.edu), 23,24 but the others are physician-entered only. A third patient-facing survey, the Surveillance Epidemiology of Coronavirus Under Research Exclusion (SECURE)-AD Patient Survey (https://www.secure-derm.com/secure-pad/), has also emerged. An analysis of datasets demonstrates a remarkable coherence across the COVID-19-related data collected. This contrasts with prior experience of poor patient registry interoperability, the improvement of which was a key principle underlying the PARENT and EPIRARE projects. 4-6,25,26 The coherence of the COVID-19 patient registries is likely to have been aided by each registry using the core concept developed by the COVID-19 Inflammatory Bowel Disease Registry (SECURE-IBD; https://covidibd.org). 27,28 In addition, the creators of these registries met early in the epidemic to establish a collaborative framework, and the American Academy of Dermatology/International League of Dermatological Societies COVID-19 Dermatology Registry, PsoProtect and SECURE-AD have already shared data with one another. 22 An additional contributor is likely to be the registry teams' experience in patient registry development and maintenance.
Anonymity or the de-identification of data in several COVID-19 patient registries has enabled exemption from ethics committee review in most jurisdictions. Despite these exemptions, some academic centers continue to require data use agreements, and full ethical approval has been required in others (eg, in Australia, Ireland, and Canada). The latter requirement hints at the volume of work required to develop a patient registry that adheres to current standards in an era of increasing demands for data protection and security.
Each ethical application requires considerable resources and expertise. A data protection impact assessment, study protocol, ethics application, and evidence confirming insurance coverage and financial sustainability of the registry project are often required. Information technology expertise with experience in registry development to create an appropriate platform is critical. Considerable effort is then necessary to recruit and manage steering and advisory boards to develop a dataset, user-test the registry platform, and establish data analysis strategies. Continuous liaison with multiple physician and patient organizations to mobilize endorsements and drive patient recruitment is then essential.
Traditional compared with emerging pandemic registries
Patient registries, particularly those with international recruitment, have traditionally taken years to develop, even with considerable budgets. For example, in atopic dermatitis and alopecia areata, global eDelphi projects have each taken more than a year to facilitate the development of a common data set. [29-34] Newly emerging COVID-19 patient registries, despite the considerable requirements outlined earlier in this report, have been developed far more rapidly through the considerable collective goodwill, energy, and diligence of the dermatology community.
There is, unfortunately, an increasing likelihood that the current COVID-19 pandemic will persist and possibly cause additional waves. It is also likely that future, unrelated, pandemics will occur. It is essential to reflect on patient registries before and during the current pandemic to consider the lessons learned and to determine how the knowledge gained may benefit the dermatology community now and in the future.
Evolving patient registries
Undoubtedly, chief among the lessons learned regarding patient registries during the COVID-19 pandemic is the need to rapidly deploy new, or adapt existing, patient registries in the event of future pandemics. Existing approval mechanisms are not designed to meet the pressing urgency demanded by a pandemic. Ethics committee meetings, data sharing agreements, and data protection impact assessments are critical elements of patient registry approval. These activities take considerable time and expertise, even when expedited by the COVID-specific national research ethics committees and streamlined pathways that have emerged during the pandemic.
Although the response to the current pandemic has been impressive in some countries, it will need to be even quicker in the future. Otherwise, the benefit of answering clinical questions, such as the safety of the initiation, discontinuation, or continuation of immunosuppression/immunomodulation for such immune-mediated diseases as psoriasis and atopic dermatitis, will be lessened. Greater permeation of registries beyond countries with many resources and expert centers is needed. This requires the availability of pre-existing registry infrastructures, which the current emerging COVID-19 patient registries may provide.
To maximize data utilization, its harmonization will be essential. Even the most seemingly simple variables can be interpreted and recorded differently between countries. Defining standard, understandable, and cohesive reporting variables early on is of paramount importance. This will require broad agreement on standard data sets with clear definition of data terms. It should incorporate the work of relevant groups, such as the Core Outcome Measures in Effectiveness Trials (COMET, http://www.comet-initiative.org/) 34 initiative, which has generated core outcome sets for use in COVID-19 research. Where new data sets need to be generated, a rapid process of term definition and broad agreement to implement them should be established.
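As a purely illustrative sketch of what such a shared variable definition could look like (every field name and code below is invented, not drawn from any existing registry standard), a data element could be published in machine-readable form and validated by each participating registry:

```python
# A hypothetical harmonized data element that participating registries
# could agree on, version, and validate submissions against.
COVID19_OUTCOME_ELEMENT = {
    "name": "covid19_outcome",
    "label": "COVID-19 clinical outcome",
    "version": "1.0",
    "type": "coded",
    "allowed_values": {
        "0": "Not hospitalized",
        "1": "Hospitalized, no oxygen required",
        "2": "Hospitalized, oxygen required",
        "3": "Intensive care admission",
        "4": "Death",
    },
    "required": True,
}

def validate(record: dict, element: dict) -> bool:
    """Check one submitted record against the shared definition."""
    value = record.get(element["name"])
    if value is None:
        return not element["required"]
    return str(value) in element["allowed_values"]

print(validate({"covid19_outcome": "2"}, COVID19_OUTCOME_ELEMENT))  # True
```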
For those who intend to construct new patient registries, visibility of standard data sets must be prioritized. The reusable building blocks of patient registry development, such as standardized ethics templates, patient information leaflets, committee membership, and authorship agreements, as well as expertise regarding data protection, security, governance, software development, and implementation, must be readily available. Ethics applications will need to be considered in advance, particularly to facilitate non-anonymized patient registries, which are needed to avoid the data double-entry problems that arise from removing patient-identifiable data. There should be mechanisms to facilitate easier collaboration of patient registry groups across time zones, languages, cultures, and physician-patient boundaries. Considerable work will need to be undertaken to ensure that patient registries can integrate with existing information systems.
Electronic health records (EHRs), for example, contain valuable patient-level data, the export of which could reduce some of the data entry burden of patient registries. Unfortunately, EHRs have traditionally connected inefficiently and expensively with patient registries, or contain data that require significant processing before they can be incorporated within a registry. 35 Inter-registry interoperability will also be important, enabling the use of existing pharmacovigilance registry data that can act as denominators or even identify patients who might require recall upon identification of risk modifiers. Such connectivity is likely to rely heavily on ensuring that registries embrace open standard data models, such as openEHR, that encourage recording of data in a similar manner from system to system, and on utilizing messaging standards, such as HL7® FHIR®, that enable structured data exchange between them. [36-38] Beyond dermatology, harmonization and shared data infrastructure across specialties will be an important driver of research efficiency and effectiveness. For example, in the early stages of the COVID-19 pandemic, the SECURE-IBD registry shared its data dictionary, institutional review board (IRB) templates, communication tools, and other components of its blueprint with multiple autoimmune-focused groups, including several international dermatology and rheumatology efforts. 27 Because patients across immune-mediated conditions share similar medication exposures, harmonized data collection will facilitate studies of the effect of various immune suppressant medications on COVID-19-related outcomes across conditions. Ultimately, pooling data across conditions will provide important answers to emerging safety concerns much faster than single disease or specialty registries working independently.
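To give a concrete flavor of the messaging-standard approach mentioned above, the sketch below assembles a minimal HL7 FHIR-style Observation resource as a plain Python dictionary. The overall shape follows the FHIR Observation resource, but the patient identifier and the free-text code are placeholders; a real exchange would use agreed terminology codes (eg, SNOMED CT or LOINC) and a validated FHIR library.

```python
import json

def registry_observation(patient_id: str, code_text: str, value: str) -> dict:
    """Assemble a minimal FHIR-style Observation payload for registry exchange.

    Field names follow the FHIR Observation resource; the subject reference
    and free-text code here are illustrative placeholders only.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": code_text},  # a coded concept in a real implementation
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueString": value,
    }

obs = registry_observation("example-123", "COVID-19 clinical outcome", "Not hospitalized")
print(json.dumps(obs, indent=2))  # structured payload a receiving system could parse
```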
Patient involvement is a critical component of success. A feature of COVID-19 patient registries has been patient involvement at the steering committee level and the establishment of robust communication with patient organizations. This has reconfirmed the immense value of a patient-centric approach, evidenced through considerable benefits in all aspects of patient registry development and deployment, including improved communication, dataset generation, advocacy, visibility, and endorsement. A notable feature of the self-reporting COVID-19 patient surveys for psoriasis (PsoProtectMe), atopic dermatitis (SECURE-AD Patient Survey), and hidradenitis suppurativa (Global Hidradenitis Suppurativa COVID-19 Registry) is the considerably greater speed of recruitment reported compared with the corresponding physician-reported patient registries (https://www.psoprotect.org, 23 https://www.secure-derm.com/secure-ad-physician 39 and https://www.hscovid.ucsf.edu 24). Although PsoProtectMe and the SECURE-AD Patient Survey enable registration of patients who have not experienced COVID-19, and questions typically arise regarding privacy, security, and data validity, it is clear that patient-centric registries are key to better patient engagement and registration.
Future direction
COVID-19 has generated seismic ripples that continue to disrupt the fabric of our societies and the manner in which we practice medicine. With great challenges, however, come opportunities to evolve. We suggest an international federation of dermatology registries as a means to harness the foundations of registry collaboration among new and pre-COVID registry communities. Such a collaboration would use and build on the experience gained during this challenging time. This will aim to address many of the challenges identified earlier in this report and provide an entity capable of catalyzing rapid, international deployment if and when future pandemics emerge.
Such a federation would aim to develop the reusable blueprints of registry creation, standardized data sets, and definitions to better align existing and future patient registries. As an independent organization, the federation would aim to impartially facilitate cohesion rather than act as a regulator. While promoting interoperability, the federation would not seek to host patient data, which might compromise data sovereignty, but would still facilitate data merging where consent to data sharing exists.
Such a federation could enable greater visibility of registries and their characteristics through the development and maintenance of a registry of registries, a concept described by PARENT and the AHRQ. 6,40 Orphanet is a resource that gathers and improves knowledge on rare diseases. Initially established by the French National Institute for Health and Medical Research in 1997, it has evolved to become a global consortium of 41 countries. Although Orphanet lists a number of dermatology-relevant patient registries, these sit within a large directory that covers all rare diseases. 41,42 An inventory of disease registries already exists, supported by the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP) resource database of data sources, although it is incomplete with respect to dermatology patient registries. 13,43-46 The AHRQ developed a similar concept to act as a patient registry equivalent of ClinicalTrials.gov, that is, "a database of privately and publicly funded patient registry studies conducted around the world;" however, its funding ended in 2019. 40,47,48 This is a timely reminder that such valuable resources may benefit from being located within the care of the networks that will most benefit from them, such as a federation of dermatology registries, to facilitate awareness, utilization, and sustainability. A simplified example of such a registry of registries (Table 1) is presented, although we envision a more detailed, live registry maintained by the proposed federation. Initially published in 2016, after a literature review of dermatology patient registries, Table 1 has been expanded to incorporate a number of omitted registries and those that have emerged during the COVID-19 era. 49 The proposed federation would provide a hub capable of fostering the continued connectivity of patient registries with relevant stakeholders, including the patient and physician organizations that have been so impressive during the COVID-19 era. This may increase the capacity for patient organizations to advocate for physicians to engage more broadly with relevant patient registries. It would facilitate fast-tracking of applications to regulatory authorities and ethics boards through the provision of reusable templates and group experience to guide steering committees committed to swift registry development. Ultimately, streamlining and collaborating on registry development in this manner could translate into speedier provision of real-world information. This, in turn, might reduce the time taken to address clinical hypotheses, for example, the effectiveness of hydroxychloroquine in patients exposed to COVID-19 and the impact of systemic medications on prognosis.
To develop a federation of dermatology registries, we envision additional work, although perhaps less than would have been envisaged before COVID-19, given the significant effort already undertaken by registry groups. The blueprint of such an organization has been outlined by the structures created for each of the patient registries. In the first instance, a steering committee would be required, with global representation from existing stakeholders; nominated experts in pharmacoeconomics, epidemiology, health informatics, and data protection; and patient representatives. A larger scientific advisory board, which could be expanded to ensure democratic representation as new patient registries emerge, would also be convened. The time expenditure of committee members is likely to be significantly rewarded by the outputs the federation would generate in terms of simplifying registry development and maintenance.
Although funding for sustainability would be required, much of the large infrastructure cost has already been borne by the development of the registries the federation seeks to support. Such a federation would also provide a valuable conduit for generating patient registries capable of providing data to the postmarketing surveillance studies mandated by the European Medicines Agency and the US Food and Drug Administration. Supporting such a project would be of notable value to the pharmaceutical industry.
It is important to note that the federation would require broad endorsement. Given the wide-ranging support by international patient and physician groups that have already endorsed a number of the newly developed COVID-19 patient registries, this should not be a significant hurdle. Undoubtedly, an international federation of patient registries will require considerable debate and more formalized structures; however, it is critical that the opportunity not be lost.
Conclusions
COVID-19 has placed exceptional demands on societies and economies globally, but it has provoked a coherent response from the international dermatology community. One encouraging outcome has been the rapid harmonization and development of international patient registries to collect relevant COVID-19 data from cohorts of dermatology patients. We urge the international community to build on this work and suggest the establishment of an international federation of dermatology registries to generate new standards and practices. Such a cohesive approach may also establish more rapid and sustainable avenues for funding these registries and provide more affordable solutions at times when economic capabilities are under strain.
Although such an undertaking would be of particular significance during pandemics, its value in facilitating harmonization and improving the quality of existing and future non-pandemic registries would also be considerable. Such an undertaking may be viewed as resource-hungry and as demanding substantial innovation and input, but much of the groundwork has already been done. The rapidly increasing human toll of COVID-19 and the continued, pressing need for outcomes data are a powerful incentive to collaborate on and adopt such pioneering solutions.
"year": 2021,
"sha1": "91d7103d54df9b61f02c273c7f4ed60e58117429",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.clindermatol.2021.01.018",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "d38fe1b4edc219bc6b0ba1fe3391e07484fdd6c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Business"
]
} |
Differences in Wildlife Roadkill Related to Landscape Fragmentation in Central Brazil
The interaction between animal movement and roads is pervasive, but little is known about the effects of land-use patterns in roadside landscapes on roadkill events. Here, we compared wildlife roadkill along two road stretches that cross landscapes with different land-use patterns, including the presence of protected areas, in Central Brazil. Sampling was conducted in 2017 and 2018 in two seasons (dry and rainy). We expected roadkill events to be more frequent bordering the protected area. Roadkill occurred more frequently in the rainy season in the unprotected landscape. Birds were recorded more frequently in the unprotected (44%, n = 76) than in the protected landscape (37%, n = 48). The least recorded group in the unprotected landscape was Squamata (11%, n = 18), while mammals were the least detected group in the protected landscape (14%, n = 18). The classes 'agriculture' and 'savanna' were related to amphibian roadkill numbers. For Squamata, we observed an effect of the presence of forests in the protected landscape. Bird roadkill was affected by protection level, while the presence of pasture and the level of protection explained mammal roadkill. Differences in roadkill patterns reinforce the need for long-term management of this source of mortality for the Cerrado fauna.
INTRODUCTION
Road infrastructure is a ubiquitous and transforming element in a landscape, causing considerable impacts on the environment and wildlife (Neumann et al. 2012, Rosa et al. 2018). Besides causing roadkill directly, roads and highways are an anthropogenic source of spatial heterogeneity (Laurance et al. 2009, Munro et al. 2018), causing habitat loss, fragmentation, and changes in ecosystem water flux (Jaarsma & Willems 2002, Coffin 2007, Strevens et al. 2008, Ascensão et al. 2013, Walker et al. 2013), and altering the relief configuration of the landscapes (Trombulak & Frissell 2000). Therefore, the main objective of Road Ecology is to understand the environmental impacts related to road infrastructure, while seeking to mitigate these effects (Forman & Alexander 1998, Coffin 2007). The extent and frequency of these impacts are often related to landscape patterns, as well as species traits (Laurance et al. 2009, Simmons et al. 2010, Ascensão et al. 2013, Galetti et al. 2013).
The resulting degraded environments along roads can attract species with higher environmental plasticity while undermining landscape use by more sensitive and habitat-specialist species (Beisiegel et al. 2013, Rosa et al. 2018). Generalist species are attracted to roads and highways because these serve as high-mobility connectors (paths without obstacles), as refuges (drains, bridges, tunnels, etc.), as sources of dietary resources (such as seeds, grasses, and carcasses) (Harris & Scheck 1991, Forman & Alexander 1998, Le Viol et al. 2012), and as sites for thermoregulation (Colino-Rabanal & Lizana 2012, Camacho 2013, Hill et al. 2021).
These attractive elements, located on or by the side of roads, can function as ecological traps due to the risk of vehicle collision (Harris & Scheck 1991, Coffin 2007). Therefore, roads may have deep impacts on wildlife mortality, and serve as population sinks in the landscape (Clevenger et al. 2001, Gunson et al. 2012, Abra et al. 2021). These impacts can make roads a severe threat to wildlife worldwide, with the potential to modify the structure and composition of biological communities (Gaddy & Kohlsaat 1987, Forman & Alexander 1998, Munro et al. 2018).
The permanence of dead animals on roads or road shoulders after a vehicle-animal collision allows the direct observation and measurement of roadkill events, and subsequently the analysis of roadkill patterns and their underlying mechanisms (Clevenger et al. 2001, Ascensão et al. 2013, Galetti et al. 2013). Roadkill patterns and their impacts on animal populations are likely affected by land cover in the surrounding landscapes (Forman & Alexander 1998, Coffin 2007, Benítez-López et al. 2010). Protected areas and their surroundings, for example, are critical environments that need conservation actions related to roadkill, and several studies in Road Ecology take place along roads crossing or bordering protected areas (Garriga et al. 2012, D'Amico et al. 2015, Braz & França 2016). Animal abundance and richness are expected to be higher within protected areas than in human-altered landscapes, so that protected areas should act as a source of animals dispersing through the landscape (Carranza et al. 2014, Gray et al. 2016). Some studies have shown that higher frequencies of roadkill events are observed around nature reserves with higher levels of protection (Garriga et al. 2012, Kioko et al. 2015). Protected areas, however, are just one element in complex human-dominated landscapes, which present other land-use classes, circumstantially crossed by roads and highways. In fragmented landscapes, the effects of roads can act synergistically with other anthropogenic impacts, such as habitat loss. Therefore, the interaction between animal movements and roadkill events in fragmented landscapes is largely pervasive for animal populations (Van der Ree et al. 2011, Magioli et al. 2016, Rosa et al. 2018), but little is known of the effects of protected areas and unprotected landscapes on roadkill patterns.
The Brazilian savanna (Cerrado biome), located in Central Brazil, is the largest open vegetation domain in South America (ca. 2 million Km²) and the second-largest biome in Brazil (covering approximately 24% of the country), after the Amazon forest (Klink & Machado 2005, Werneck 2011). The biome is a world biodiversity hotspot, harboring a high biological diversity that is severely threatened by natural habitat loss driven by agricultural activities, in addition to the introduction of exotic species (Myers et al. 2000, Klink & Machado 2005, Moro et al. 2012). Currently, the Cerrado is experiencing an unprecedented expansion of its road infrastructure (Klink & Machado 2005, Carvalho et al. 2009, Miranda et al. 2017), specially designed to allow the outflow of agricultural production (especially soybean) for exportation (Klink & Machado 2005, da Cunha et al. 2010, Souza et al. 2015). The corollary is that vehicle-animal collisions on roads in these human-dominated landscapes are becoming increasingly common (Carvalho et al. 2009, Souza et al. 2015, de Freitas et al. 2015, Braz & França 2016, Miranda et al. 2020). Herein, we compared the rates of wildlife roadkill events along roads crossing landscapes with different land cover patterns in Central Brazil. We compared roadkill patterns in terms of the absolute number of events, the number of species affected, and the taxonomic group involved (amphibians, reptiles, birds, or mammals). We predicted that roadkill events would be more frequent and affect more species of different taxonomic groups in the road stretch bordering a protected area in an iconic Cerrado region (Chapada dos Veadeiros), where higher amounts of natural vegetation (forest and savanna) are found along the roads. We also investigated the effect of seasonality on roadkill events for different species and taxonomic groups.
Study area
We conducted the study in the northeastern region of the State of Goiás (GO), Central Brazil. The Cerrado biome is formed by a mosaic of vegetation formations, from natural grasslands to woodland savannas and dense forests (Ribeiro & Walter 1998). The climate in the region is Köppen's Aw (rainy tropical), with a marked seasonality between the dry and rainy seasons (Alvares et al. 2013, Cardoso et al. 2015). The annual average precipitation is 1500-1750 mm, with mean temperatures varying between 20°C and 26°C (Nimer 1989). The rainy season is typically concentrated between October and March, while the dry season spreads from April to September (Ribeiro & Walter 1998), with small variations according to region and year. In the present study, we defined the rainy season as November to April and the dry season as May to October, following the cumulative daily rainfall obtained from the Alto Paraíso de Goiás municipality weather station (INMET 2018).
We monitored two road stretches located in the Pouso Alto Environmental Protection Area (Pouso Alto APA) (Figure 1). Despite the name, APAs (a protected area category in Brazilian legislation, similar to IUCN protected area category VI) are not strictly directed at environmental conservation, since they allow several types of land use and are not effective in avoiding deforestation (Françoso et al. 2015). The Pouso Alto APA encompasses a wide area (8,720 Km²) comprising highly fragmented and deforested land cover classes, as well as the Chapada dos Veadeiros National Park (PNCV), the latter being the only area intended for strict environmental protection in the studied landscape.
The first stretch extended along 33.6 Km of the highway BR-010, from the town of Alto Paraíso de Goiás (14°08.533'S and 47°31.300'W) to the APA's southern border (14°25.742'S and 47°30.444'W). We refer to this road stretch as the 'unprotected landscape' (Figure 2), where the predominant surrounding classes include extensive areas converted to human economic activities (pasture, agriculture, and forestry). In this area, there is currently considerable ongoing expansion of cropland by mechanized industrial agriculture, as well as urban encroachment in the town of Alto Paraíso de Goiás. Moreover, in recent years the highway system has been expanded to facilitate the outflow of crop products and the promotion of mass tourism.
The second stretch comprised two roads bordering, and at some points crossing, the PNCV. It runs 31 km along the BR-010 road from Alto Paraíso de Goiás (14°10.725'S and 47°48.517'W) toward the municipality of Teresina de Goiás (13°54.229'S and 47°22.704'W), and 39 km along the GO-239 road, from Alto Paraíso de Goiás (14°08.550'S and 47°31.323'W) toward the municipality of Colinas do Sul (14°10.771'S and 47°48.971'W). This stretch was termed the 'protected landscape' in our study, due to its being adjacent to the PNCV; the predominant landscape classes surrounding these roads are native vegetation (forests, savannas, and grasslands), with few and sparse areas of pasture farms (Figure 2).
Roadkill sampling
Sampling was conducted over twelve months between 2017 and 2018, covering both seasons in the Cerrado. The road stretch in the unprotected landscape (BR-010) was monitored four times each month, which resulted in 48 independent samples (sampling campaigns). The protected landscape road stretch (BR-010 and GO-239) was monitored twice a month, resulting in 24 sampling campaigns.
Both the BR-010 and the GO-239 highways are single-lane roads, 7 m wide, with a single asphalt surface shared by both directions of traffic. At the time of sampling, only a portion of the GO-239 stretch (in the protected landscape) presented speed reducers intended to protect human lives and wildlife. The BR-010 (with stretches in both the protected and unprotected landscapes) presented a few traffic signs indicating animal crossings. These roads are busier during school vacations and long holidays.
Monitoring was performed by car, by two observers, at a speed between 40 and 50 Km/h (following the minimum speed limits imposed by Brazilian legislation on highways). In the unprotected landscape, monitoring took place between 06:00 and 08:00 h from south to north, and between 16:00 and 18:00 h in the opposite direction. In the protected landscape, since the monitored stretch was longer, sampling hours were randomly assigned between 06:00 and 18:00 h, both from south to north (along BR-010) and from east to west (along GO-239).
For every roadkill event, we recorded the place and date where the carcass was found (on the road or on its shoulders), photographed and identified it, and took local coordinates. Subsequently, every carcass was removed from the road to avoid re-counting. Animals were identified to the lowest taxonomic level possible, within four groups (Amphibia, Aves, Squamata, and Mammalia). Our response variable is therefore the number of roadkill events recorded in each taxonomic group.
Landscape map
The classified land use and land cover in a buffer along the monitored roads were obtained from the Mapbiomas platform version 2.1 (www.mapbiomas.org). Mapbiomas is a national-scale classification using historical and current Landsat images, with 30-m resolution. We used the land-use classification from 2016, the latest available date. We observed a few discrepancies in the classification concerning gallery forests (included in the 'forest' landscape class). For that reason, we corrected the map by manually drawing the forest polygons on Google Earth based on high-resolution images (1-m Ikonos images available on Google Earth), and then updating the original Mapbiomas map using the new forest polygons. These forests, sometimes narrower than the spatial resolution of Landsat pixels, are important landscape elements, potentially functioning as landscape connectors (Johnson et al. 1999). Because of this, the manual correction was vital for a realistic representation of the available landscape elements. Our landscape classification therefore has a 30-m resolution, except for gallery forests, which have a resolution of 1 m. Other small discrepancies were observed in the map, such as the classification of native grasslands as pasture, a common problem in remote sensing of the Cerrado biome (Ferreira et al. 2013). These issues were manually corrected based on our experience of the landscape.

Figure 2. Land use classes around each monitored road stretch in the Pouso Alto APA, which were subsequently related to roadkill patterns. The 'unprotected landscape' stretch (BR-010) was the road within the black buffer, where most anthropogenic land uses are found, and the 'protected landscape' stretch (BR-010 and GO-239) was the one within the red buffer, bordering the limits of the Chapada dos Veadeiros National Park (PNCV). The PNCV limits were updated on June 5, 2017.
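As an illustration of the map-correction step, the following sketch burns manually digitized gallery-forest polygons into a land-cover raster using geopandas and rasterio. The file names and the numeric code for the 'forest' class are hypothetical, and the original correction was performed with Google Earth and ArcGIS rather than with this code.

```python
import geopandas as gpd
import rasterio
from rasterio import features

FOREST_CLASS = 3  # hypothetical numeric code of the 'forest' class

# Manually digitized gallery-forest polygons (hypothetical file name).
forests = gpd.read_file("gallery_forests.shp")

with rasterio.open("mapbiomas_2016.tif") as src:  # hypothetical file name
    landcover = src.read(1)
    meta = src.meta.copy()
    forests = forests.to_crs(src.crs)  # reproject polygons to the raster CRS
    # Burn the forest polygons into the existing classification; cells
    # outside the polygons keep their original Mapbiomas class.
    features.rasterize(
        ((geom, FOREST_CLASS) for geom in forests.geometry),
        out=landcover,
        transform=src.transform,
    )

with rasterio.open("mapbiomas_2016_corrected.tif", "w", **meta) as dst:
    dst.write(landcover, 1)
```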
We evaluated land use in a 5-Km buffer around the road stretches to provide a general context of the landscape and to allow the identification of classes for manual correction when necessary (Figure 2). However, the landscape predictors used in our analysis (see below) were quantified within a 1-Km buffer along each monitored road stretch. This distance was selected to match the cluster analysis done with the roadkill records (see below), which divided each road stretch into 1-Km segments. This buffer width also seems sufficient to encompass short-term movements of all taxonomic groups in our study (Tozetti & Toledo 2005, Tozetti et al. 2009, Brandão et al. 2018, Henrique & Grant 2019). The landscape predictors generated for each segment were: (1) proportion of savanna; (2) proportion of forest; (3) proportion of native grassland; (4) proportion of cropland (including areas of agriculture and forestry); (5) proportion of pasture; and (6) distance from the segment's center to the nearest gallery forest. These predictors presented no multicollinearity.
All landscape analyses were done in ArcGIS 10.4 (ESRI 2016), except for the gallery forest delineation, which was done manually in Google Earth. Multicollinearity was tested using the 'stats' package in R 3.4.3 (R Core Team 2017).
Analyses
We used rarefaction curves based on record abundance for each taxonomic group (Gotelli & Colwell 2001) to assess sampling sufficiency.
Roadkill rates in each monitored stretch were compared between landscapes (protected and unprotected) for each taxonomic group and for the most recorded species. The rate was defined as the total number of individuals divided by the total extent of road sampled per day; daily rates are thus presented as the number of individuals/Km/day. To evaluate seasonal variation in daily roadkill rates, we compared seasons (rainy and dry) and study areas using an analysis of covariance for each of the four taxonomic groups and for the three most recorded species.
Roadkill records were submitted to a cluster analysis, aiming to evaluate the most suitable scale at which to relate the roadkill patterns of each taxonomic group to the landscape around the road stretches. The cluster analysis was conducted using Ripley's K tool. In this procedure, a density function L(d) is tested over varying grouping radii, to verify whether records are grouped (L_observed > L_expected), dispersed (L_observed < L_expected), or randomly distributed (L_observed = L_expected). Based on this result, we obtained an ideal grouping distance of 1 Km, which defined the length of the segments into which the road stretches were divided. Therefore, we quantified landscape predictors (land cover) for each of these 1-Km segments.
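The snippet below is a minimal, assumption-laden illustration of the L(d) logic, treating roadkill locations along a single road as a one-dimensional point pattern. It applies no edge correction or simulation envelope, unlike the ArcGIS tool used in the study, so it only conveys how observed and expected values are compared; the point coordinates are simulated.

```python
import numpy as np

def ripley_k_1d(positions, length, radii):
    """Naive one-dimensional Ripley's K for points along a road.

    positions : roadkill locations, in Km from the start of the stretch
    length    : total length of the stretch (Km)
    radii     : search distances d at which K is evaluated

    Under complete spatial randomness E[K(d)] is about 2*d, so values of
    L(d) = K(d)/2 exceeding d indicate clustering (no edge correction).
    """
    x = np.asarray(positions, dtype=float)
    n = len(x)
    diff = np.abs(x[:, None] - x[None, :])   # pairwise distances
    np.fill_diagonal(diff, np.inf)           # exclude self-pairs
    d = np.asarray(radii, dtype=float)
    counts = (diff[None, :, :] <= d[:, None, None]).sum(axis=(1, 2))
    return length * counts / n**2

# Toy example: a cluster of records near Km 12 of a 33.6-Km stretch.
rng = np.random.default_rng(0)
pts = np.concatenate([rng.uniform(0, 33.6, 30), rng.normal(12, 0.5, 30)])
for d, L in zip([0.5, 1.0, 2.0],
                ripley_k_1d(pts, 33.6, [0.5, 1.0, 2.0]) / 2):
    print(f"d = {d:.1f} Km, L(d) = {L:.2f}")  # L(d) > d suggests clustering
```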
We used a model ranking approach to compare generalized linear models (GLMs) (McCullagh & Nelder 1989) and evaluate the effect of the landscape predictors on the number of roadkill events. Models were fitted using Poisson family error terms. The complete model (GLM, family = Poisson) included the following independent variables: level of protection (binary), landscape classes related to human activities (proportions of pasture and cropland), and landscape classes related to natural vegetation (proportions of forest, savanna, and grassland). For the model ranking procedure, we followed Zuur et al. (2009): a stepwise removal of independent predictors was performed, based on the results of a likelihood ratio test. At each step, the predictor with the highest p-value was removed, until the removal of any remaining variable significantly affected the model. The final model was visually assessed based on the normality and homoscedasticity of residuals.
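The study's own analyses were run in R and ArcGIS, so the following Python sketch with statsmodels is only an illustration of the backward-elimination procedure described above; the data frame layout (one row per 1-Km segment) and the column names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def backward_lrt(df, response, predictors, alpha=0.05):
    """Backward stepwise selection for a Poisson GLM via likelihood-ratio
    tests: refit without each remaining predictor, compute the LRT
    p-value, and drop the least significant predictor until every
    remaining one matters."""
    kept = list(predictors)
    while kept:
        full = sm.GLM(df[response], sm.add_constant(df[kept]),
                      family=sm.families.Poisson()).fit()
        pvals = {}
        for p in kept:
            rest = [q for q in kept if q != p]
            X = sm.add_constant(df[rest]) if rest else np.ones((len(df), 1))
            reduced = sm.GLM(df[response], X,
                             family=sm.families.Poisson()).fit()
            lr = 2 * (full.llf - reduced.llf)   # likelihood-ratio statistic
            pvals[p] = stats.chi2.sf(lr, df=1)  # one parameter removed
        worst, worst_p = max(pvals.items(), key=lambda kv: kv[1])
        if worst_p < alpha:   # all remaining predictors are significant
            break
        kept.remove(worst)
    return kept

# Example call with hypothetical predictor columns:
# kept = backward_lrt(segments, "n_roadkill",
#                     ["protection", "pasture", "cropland",
#                      "forest", "savanna", "grassland"])
```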
RESULTS
We recorded 301 roadkill events of 75 taxa of wild vertebrates (Table I). In the unprotected landscape, we sampled 1,615 Km and recorded 172 roadkill individuals, of which 124 were identified to the species level (54 species, distributed in 50 genera, 33 families, and 21 orders). Due to carcass condition, 49 individuals could not be identified to the species level. In the protected landscape we sampled 1,680 Km and recorded 129 roadkill events, of which 105 were identified to the species level (41 species belonging to 40 genera, 23 families, and 14 orders).
Rarefaction curves did not reach an asymptote for any of the taxonomic groups, either in the unprotected or in the protected landscape (Figure 3), indicating that the number of species affected by the roads is likely much higher. The patterns observed in the rarefaction analysis did not indicate a significant difference in the number of roadkill events between landscapes for any taxon, except for amphibians, which were more abundant in the unprotected landscape. On the other hand, the rarefaction curve for this group was far from presenting any asymptotic pattern, rendering the comparison inconclusive.
Roadkill rates differed between landscape categories and among groups (Table II). Roadkill occurred more frequently in the rainy season (224 records: 130 in the unprotected landscape, and 94 in the protected landscape).
The number of roadkill events in the rainy season was higher for all taxonomic groups (Figure 4). However, seasonal differences in the unprotected landscape were significant only for Amphibia (F = 12.910, p < 0.001) and Aves (W = 6.632, p = 0.018), and did not differ for Squamata (F = 3.808, p = 0.065) or Mammalia (F = 0.099, p = 0.756). Between the studied landscapes, the difference in the number of roadkill events was significant for Mammalia only (F = 9.430, p = 0.006), and did not differ for Amphibia (F = 1.235, p = 0.279), Squamata (F = 0.588, p = 0.452), or Aves (F = 2.246, p = 0.149). Two of the three

As a function of landscape structure, the proportions of the classes 'agriculture' and 'savanna' were related to the number of amphibian roadkill events (Table III). The number of Squamata events was assessed only for the protected landscape, where we observed a significant influence of the proportion of forests (Figure 5 and Table IV). Bird roadkill events were affected by the level of protection only (Figure 5 and Table V), while for mammals, the class 'grassland' and the level of protection explained roadkill events (Figure 5 and Table VI).
DISCUSSION
We had expected higher roadkill rates in the protected landscape, but only Squamata corroborated our initial expectations. Overall, roadkill rates were higher in the unprotected landscape for all taxa, and richness and abundance patterns corroborated those results. However, Squamata presented higher roadkill rates in the protected landscape only during the dry season, probably reflecting their environmental requirements and natural history aspects (Garriga et al. 2012).

In general, the vertebrates most frequently found as roadkill, both in the unprotected and in the protected landscapes, were generalist species presenting high plasticity in habitat use. The observed patterns can thus be related to the greater dispersal of generalist species through degraded areas and to the more flexible habitat use of these species (Bernardino & Dalrymple 1992, Forman et al. 2003, Barrientos & Bolonio 2009), which can also use areas surrounding the road even in the less altered landscape. The landscape changes in the Pouso Alto APA region (except inside the PNCV) may be favoring opportunistic species with higher plasticity in habitat use, eventually causing regional biotic homogenization (Beisiegel et al. 2013, Gámez-Virués et al. 2015).
In the Cerrado, human-modified and degraded areas adjacent to roads present a higher incidence of exotic grasses, one of the main drivers of environmental change in the biome (Hoffmann et al. 2004, Klink & Machado 2005, Moro et al. 2012). Vehicle traffic also contributes to the dispersal of exotic species, mainly through grain spilling along the roads (Forman 2000, Hansen & Clevenger 2005). Granivorous birds (such as Volatinia jacarina and Sicalis flaveola) are attracted by exotic grass seeds (Forman & Alexander 1998, Hansen & Clevenger 2005, Carvalho et al. 2007). Scavengers (such as Cerdocyon thous) are also attracted to the roads by the availability of carcasses (Forman & Alexander 1998, Alves et al. 2018). The availability of these resources can explain the higher frequency of roadkill of these opportunistic species in our findings. In our study, birds were the most affected taxon in both landscapes; among the birds, passerines were the most frequently observed.

Amphibia was the second taxon most affected by vehicle collision in our study, both in the protected and in the unprotected landscapes. However, almost all events corresponded to the bufonid Rhinella diptycha. Bufonids are common victims of roadkill, as has been observed in African savannas (Kioko et al. 2015), in the Iberian Peninsula (Garriga et al. 2012), in Australia (Beckmann & Shine 2012), and in the Brazilian Cerrado (Melo & Santos-Filho 2007, Braz & França 2016, Miranda et al. 2017). The killing of toads by road collisions can be related to their seasonal migration habits (Lemckert 2004, Vimercati et al. 2017), to the slow pace at which they disperse (Cunnington et al. 2014), or even to their use of roads as dispersal corridors (Brown et al. 2006). On the other hand, toad carcasses, due to the presence of toxic substances in their skin glands, are likely to be avoided by scavengers, persisting longer on roads than other similar-sized frogs.
Small-sized vertebrates can easily be underestimated in studies that use cars for roadkill counting (Antworth et al. 2005, Langen et al. 2007, Santos et al. 2016). Rain, scavenger activity, or even other vehicles can easily remove small, lightweight carcasses from the roads (Teixeira et al. 2013, Ratton et al. 2014, Santos et al. 2016), affecting their detectability. The very low richness of amphibians observed in our study, when compared with the local species pool (e.g. Santoro & Brandão 2014), may be an indication of this effect.
In the rainy season, we recorded higher roadkill rates for amphibians, Squamata, and birds; amphibians and birds were also more often killed in the protected landscape during this season. Higher roadkill rates during the rainy season in the Cerrado were also reported in previous studies (Coelho et al. 2008, Braz & França 2016, Miranda et al. 2017), as well as in other countries (e.g. Forman & Alexander 1998, Smith & Dodd 2003, Pinowski 2005, Garriga et al. 2017). This finding probably relates to higher species activity due to the higher availability of dietary resources during the rainy season (Dalponte & Lima 1999, Batalha & Martins 2004, Machado & Silveira 2010). Moreover, the rainy season is the breeding season for several taxa in the Cerrado, and animals are expected to be more active in the search for resources and mates (Gascon 1991, Oliveira & Gibbs 2002, Oliveira & Marquis 2002, Oda et al. 2009). The temporal and spatial occurrence of water in the landscape constrains amphibian activity, making these animals more active and conspicuous during the rainy season (Goosem 2004, Kioko et al. 2015), especially in seasonal biomes such as the Brazilian Cerrado (Santoro & Brandão 2014). Seasonal differences in herpetofauna roadkill were also recorded in other regions of the Cerrado biome (Melo & Santos-Filho 2007, Miranda et al. 2017). Amphibians were more abundant in the unprotected landscape, which might suggest that they tend to cross the road more often in landscapes with fewer available reproductive habitats (Lemckert 2004, Brown et al. 2006). In the unprotected landscape, the proportion of forest explained the observed roadkill rates for amphibians. Most of the forest cover in this landscape corresponds to gallery forests (Ribeiro & Walter 1998), narrow strips of riparian forest that run along rivers and streams in the Cerrado. It is interesting to note that wet grasslands, the habitats most often used by Cerrado amphibians (Santoro & Brandão 2014), are commonly located adjacent to gallery forests. Although these riparian forests are protected by Brazilian environmental legislation, the associated grasslands are not, and are often removed for the establishment of pastures and agricultural fields (Becker et al. 2010, Toledo et al. 2010). Gallery forests are mesic habitats during the dry season and ombrophilous habitats effectively used by several animals as dispersal corridors and refuges (Johnson et al. 1999).
Wild canids were the mammals most affected in both landscapes. The crab-eating fox (Cerdocyon thous) is one of the mammals most affected by vehicle collision in the Cerrado (Vieira 1996, Melo & Santos-Filho 2007, da Cunha et al. 2010, de Freitas et al. 2015). Cerdocyon thous is a very common and opportunistic species with large home ranges, over which it intensively forages both in preserved and in altered habitats, such as road margins (Clarke et al. 1998, Juarez & Marinho-Filho 2002, Beisiegel et al. 2013). In addition, the maned wolf (Chrysocyon brachyurus) is a near-threatened canid according to the IUCN Red List (Paula & DeMatteo 2015), and the hoary fox (Lycalopex vetulus) is considered a vulnerable species (Beisiegel et al. 2013); both were found as roadkill in the unprotected and the protected landscapes. Overall, all Cerrado wild canids are severely threatened by roads and are experiencing fast declines in the biome due to a myriad of factors (Beisiegel et al. 2013, Paula et al. 2013, de Freitas et al. 2015, Abra et al. 2021).
Although we did not find any relationship between land use and mammal roadkill rate in the unprotected landscape, 77% of Cerdocyon thous carcasses were found in this landscape, showing that rural landscapes are frequently used by this species. Interestingly, mammal roadkill events in the protected landscape were negatively related to the proportion of farming activities. Although medium-sized and large mammals (about 50% of our records for this taxon) can disperse through different landscape classes, including crops, pastures, and forestry areas (Oliveira et al. 2009, Bocchiglieri et al. 2010, Martin et al. 2012, de Freitas et al. 2015, Magioli et al. 2016), the highest mammal richness and abundance are found in natural remnants (Trolle et al. 2007, Bocchiglieri et al. 2010, Martin et al. 2012, Magioli et al. 2016). Similarly, other studies have shown that mammal roadkill patterns are indeed related to the presence of natural remnants (e.g. Freitas et al. 2013, Braz & França 2016, Brum et al. 2016). The distribution of resources in the environment is one of the main factors regulating habitat use by animals (Law & Dickman 1998), and the presence and proportion of different land use classes affect the distribution of resources. It is also noteworthy that protected areas are a source of individuals for other natural remnants (Naranjo & Bodmer 2007), thus preventing the collapse of animal populations in more fragmented areas, even for opportunistic species (Beisiegel et al. 2013, Paula et al. 2013, de Freitas et al. 2015). It is expected that in deeply altered habitats, such as soybean croplands in the Cerrado, mammal roadkill rates will tend to decrease over time, both in richness and in abundance. This can be explained mainly by local extinctions (but see Colino-Rabanal et al. 2012 for the case of invasive mammals) and is an interesting question for future studies.
Along with other factors (such as seasonality, traffic flow, and spatial location), landscape structure effectively affects animal roadkill patterns (Seo et al. 2015), and the management of the impacts of roads on animal populations should include both larger- and smaller-scale landscape analyses as well as long-term monitoring (Andrews 1990, Van der Ree et al. 2011). Special attention should be given to particular landscape features, such as the presence of humid or mesic habitats (Seo et al. 2015), the degree of fragmentation, and the ecological requirements of the studied groups (Forman 2000, Laurance et al. 2009, Galetti et al. 2013). Seasonal differences in roadkill rates related to landscapes and taxa reinforce the need for long-term management of this relevant source of mortality for the Cerrado fauna in both protected and unprotected landscapes.
"year": 2022,
"sha1": "42426f4dc27935e4c7ba7d69e86a3551f737a358",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/aabc/a/rG4pxZnnW8yhytfMCydYYvb/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2987885bbe7ee031cbd6f3b01d633547316825da",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
On subcopula estimation for discrete models
Purpose – To discuss subcopula estimation for discrete models. Design/methodology/approach – The convergence of estimators is considered under the weak convergence of distribution functions and its equivalent properties known in prior works. Findings – The domain of the true subcopula associated with discrete random variables is found to be discrete on the interior of the unit hypercube. The construction of an estimator whose domain has the same form as that of the true subcopula is provided in the case where the marginal distributions are binomial. Originality/value – To the best of our knowledge, this is the first time such an estimator is defined and proved to converge to the true subcopula.
Introduction
In the last decade, copulas have been successfully used in many multivariate models appearing in various fields such as economics, finance, agriculture, and hydrology. One reason is that copula models allow us to investigate the behavior of each random variable separately before combining their behavior via Sklar's theorem.
According to Sklar's theorem, any joint distribution function H with marginals H_1, ..., H_k can be written as

H(x⃗) = C(H_1(x_1), ..., H_k(x_k))    (1)

for all x⃗ ∈ ℝ^k, where C is a copula. Moreover, the converse is also true: any function H defined as in equation (1) for some copula C will always be a joint distribution function with marginals H_1, ..., H_k. As a result, copulas can be seen as functions that link marginal distribution functions together ('copula' means link or connection in Latin). They are functions that contain the dependence structures among random variables. This leads to, for example, the requirement that a measure of association should be expressible in terms of copulas in order to remove the effect of marginal distributions, a requirement also known as the scale-free property. This idea has been carried through several measures of association such as Spearman's rank correlation coefficient, Hoeffding's Phi-square, and several measures of functional dependence (Siburg and Stoimenov, 2010; Dette et al., 2013; Dhompongsa, 2013, 2016; Boonmee and Tasena, 2016). (See also Tasena (2020) for a recent survey.) At first glance, the above arguments are reasonable. If we carefully investigate equation (1), however, we see that the only values of the copula C that affect H are those on the set ∏_{i=1}^k Range(H_i). Therefore, any two copulas that agree on the set ∏_{i=1}^k Range(H_i) define the same joint distribution function H. This might not pose a problem if a copula were only used to define H, but this is not the case if we need to extract the dependence structure. Consider the problem of defining a measure of association again. If there is more than one copula that can be used to define the distribution function H, which one should be used to compute the level of association among the random variables? After all, different choices might lead to different values, yielding inconsistent results. This is referred to as an identification problem: if we need to identify the copula responsible for the dependence structure among random variables, which one should be used?
In the past, identification was never an issue since copulas were only used in continuous models. If the joint distribution function H is continuous, so are its associated marginal distribution functions H_1, ..., H_k. Thus, the range of H_i always contains the open unit interval (0, 1) for all i = 1, ..., k. Therefore, all copulas associated with H must at least agree on (0, 1)^k. Since any copula is continuous, we can extend this agreement to the whole unit hypercube. Therefore, we can conclude that there is only one such copula.
The situation is different when one or more random variables are discrete. Say, for example, the first marginal H_1 is a Bernoulli distribution function. Then Range(H_1) is a three-point set, which is rather small compared with the unit interval. As a result, there are usually infinitely many copulas that can be used to define the joint distribution function H. See de Amo et al. (2017) for a characterization of such copulas. This situation appears in all discrete models. We do not suggest that using copula extension is always a bad idea. We simply state that there is usually no justification for using one particular form of extension over another. Even if there is, it will only apply to a very specific situation. Also, several discrete models include the form of the marginal distribution functions. Therefore, the form of the domains of subcopulas is also known. For example, if we know that all marginals are Bernoulli, then the domain must be a product of three-point sets. If all marginals are Binomial, then the domain must be a product of sets of the form

{0} ∪ { ∑_{i=0}^{j} C(m, i) p^i (1 − p)^{m−i} : j = 0, ..., m }

with the usual parameters p and m, where C(m, i) denotes the binomial coefficient. The same applies to Poisson distribution functions, etc.
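For instance, the axis contributed by one Binomial marginal can be computed directly from its distribution function. The following short check is a sketch using scipy; the function name is ours, and it simply lists the grid points for m = 3 and p = 0.4.

```python
import numpy as np
from scipy.stats import binom

def binomial_subcopula_axis(m, p):
    """Grid points contributed by a Binomial(m, p) marginal H:
    {0} together with the CDF values F(0), ..., F(m) = 1."""
    return np.concatenate(([0.0], binom.cdf(np.arange(m + 1), m, p)))

print(binomial_subcopula_axis(3, 0.4))
# [0.    0.216 0.648 0.936 1.   ]  -- the subcopula domain is the
# product of one such axis per marginal.
```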
We should not ignore this information when constructing an estimator, should we? So instead of focusing on the whole unit hypercube [0, 1]^k, we suggest focusing only on the set ∏_{i=1}^k Range(H_i), since this is all the information we can infer about H in equation (1). In other words, we should simply focus on the subcopula obtained by restricting the copula C to ∏_{i=1}^k Range(H_i) instead of on the copula C itself. Mathematically, a subcopula is simply the restriction of a copula to a closed set of the form ∏_{i=1}^k A_i, where 0, 1 ∈ A_i for all i = 1, ..., k. In the case of equation (1), the subcopula associated with the joint distribution function H is obtained by restricting the domain of C to ∏_{i=1}^k Range(H_i). Since all copulas are continuous, it can be proved that two copulas agree on ∏_{i=1}^k Range(H_i) if and only if they agree on ∏_{i=1}^k cl(Range(H_i)), where cl denotes the closure. Therefore, the subcopula associated with a joint distribution function is unique, and the identification issue is resolved.
It should be mentioned that using subcopulas also brings another complication to model estimation. As we already know, the true joint distribution function is always unknown and has to be estimated from the data. Therefore, we will also have to estimate the true subcopula from data. Since the domain of the true subcopula depends on the marginal distribution functions, it is also unknown and has to be estimated as well. (This situation never appears in continuous models, since the domains of copulas are always known.) Using plug-in estimators will only partially solve the problem. Say, for example, we estimate each marginal distribution function H_i with its empirical version H_{i,n}. Then we may estimate ∏_{i=1}^k Range(H_i) by ∏_{i=1}^k Range(H_{i,n}). Similarly, we may construct the empirical subcopula S_n from the empirical distribution function H_n of H. How can we justify whether S_n is a good estimator of the true subcopula S? Recall that the domain of S is ∏_{i=1}^k cl(Range(H_i)), while the domain of S_n is ∏_{i=1}^k cl(Range(H_{i,n})), which clearly varies with n. So we will have to compare functions with different domains, something that cannot be done directly. This is probably one of the reasons why subcopula estimation is harder than copula estimation.
Recently, a little work has been done to resolve this issue. The basic idea is to embed the set of subcopulas into another set in which a concept of convergence is defined. In other words, we need to replace a subcopula S with a representation, say r(S), so that we may define S_n → S as r(S_n) → r(S). de Amo et al. (2017) were the first to work in this direction: they represent a subcopula S by its graph, so that S_n → S if and only if the corresponding graphs converge, under the Hausdorff distance in this case. Rachasingho and Tasena (2020), on the other hand, identify a subcopula with the class of its copula extensions. This provides a relationship between the convergence of subcopulas and that of their corresponding copula extensions. In order to resolve the identification problem, they also suggested that the distribution forms of subcopulas be used instead (Rachasingho and Tasena, 2018; Tasena, 2021a, b). A nice property of distribution forms is that they do not change the support of the underlying measures. Hence, they will never affect the dependence structure contained in the subcopulas.
In the next section, we will summarize the results of these findings. In Section 3, we will focus on the issue of the domain of subcopulas in discrete models. A discussion will also be provided at the end of this work.
Concepts and terminologies
In this section, we provide an overview of the concepts and terminologies used throughout this work, focusing on subcopula estimation. First, recall that a copula is simply the restriction to the unit hypercube [0, 1]^k of a distribution function with uniform marginals. Since the support of such a distribution lies in [0, 1]^k, the copula still contains all the essential information of that distribution function. Hence, it can also be thought of as a distribution function, by abusing notation. A subcopula is simply the restriction of a copula to a domain of the form ∏_{i=1}^k A_i, where 0, 1 ∈ A_i for all i = 1, ..., k. Since a copula is continuous, any subcopula with domain ∏_{i=1}^k A_i can be uniquely extended to a subcopula with domain ∏_{i=1}^k cl(A_i). Therefore, it is usually required that the domain of a subcopula be closed. It is also possible for a subcopula to be the restriction of distinct copulas. In other words, copula extensions of subcopulas are not unique in general. For a characterization of such extensions, see, for example, de Amo et al. (2017).
For convenience, we define the vector-valued functions H⃗ and H⃗⁻ associated with a joint distribution function H by letting H⃗(x⃗) = (H_1(x_1), ..., H_k(x_k)), with H⃗⁻ defined analogously via the left limits H_i(x_i⁻).

Similar to the true joint distribution function, the true subcopula is unknown and has to be estimated from data. In order to do that, we need a notion of convergence in the space of subcopulas or, equivalently, a notion of distance between two subcopulas. Since the domains of subcopulas vary, we need to consider a representation of subcopulas under which their distance can be computed. In other words, we need to embed the set of subcopulas into a metric space. Several works have been done in this direction.
First, de Amo et al. (2017) identified a subcopula S with its graph G(S) = {(x⃗, S(x⃗)) : x⃗ ∈ Dom(S)}. They then define the distance ξ via ξ(S, T) = h_{d_∞}(G(S), G(T)) for any subcopulas S and T, where d_∞ is the Chebyshev distance and h_d denotes the Hausdorff distance between closed subsets of a metric space with distance function d. The Hausdorff distance has also been used by Rachasingho and Tasena (2020) to define a distance between bivariate subcopulas; the idea has been extended to the multivariate case in Tasena (2021b). Denote the class of copulas extending a subcopula S by [S]. Rachasingho and Tasena define η(S, T) = h_{d_∞}([S], [T]) for all subcopulas S and T. It is proved that η and ξ induce the same topology, that is, convergence in η is the same as convergence in ξ (Tasena, 2021b, Theorem 3.3, Theorem 3.6). Rachasingho and Tasena (2018) and Tasena (2021a) also consider another representation of subcopulas. Recall that in the continuous case, the random variable F(X) is uniform whenever a random variable X has continuous distribution F. Applying this fact in the multivariate setting to a random vector X⃗ with a continuous joint distribution function H, we can conclude that the random vector U⃗ = H⃗ ∘ X⃗ has uniform marginals. In fact, the joint distribution function of U⃗ is the copula associated with H. So we could argue that the joint distribution function of U⃗ is the dependence structure of H, and continue to do so even in the noncontinuous case. In this latter case, the joint distribution function of U⃗ is actually the distribution form of the subcopula associated with the joint distribution H. Here, the distribution form S^D of a subcopula S is defined by S^D(x⃗) = P(U⃗ ≤ x⃗) for all x⃗ ∈ [0, 1]^k (Tasena, 2021a). Notice that S^D = S when S is a copula, and S is the restriction of S^D to Dom(S) in general. Therefore, S^D can be treated as a (faithful) representation of S. Since the probability that U⃗ = H⃗ ∘ X⃗ belongs to the domain of the subcopula S is one, the extension part of S^D does not really contain any information in the probabilistic sense. Therefore, S^D does not change the dependence structure of the joint distribution function H. The fact that S^D is a distribution function also implies that well-studied modes of convergence for distribution functions can be applied to distribution forms of subcopulas as well. In fact, the Chebyshev distance for distribution forms of subcopulas has been studied in Rachasingho and Tasena (2018), while the Levy distance, which metrizes weak convergence, has been studied in Tasena (2021a). The latter has also been proved to be metrically equivalent to ξ (Tasena, 2021a, p. 8). For subcopulas S and T, their Levy distance l(S, T) can be written as l(S, T) = inf{ε > 0 : S^D(x⃗ − ε1⃗) − ε ≤ T^D(x⃗) ≤ S^D(x⃗ + ε1⃗) + ε for all x⃗}, where 1⃗ = (1, ..., 1).
Theorem 2.1. Let S_n be a sequence of subcopulas and S another subcopula of the same dimension. Then the following statements are equivalent.

(1) The graph of S_n converges to the graph of S under the Hausdorff distance.

(2) The domain of S_n converges to the domain of S under the Hausdorff distance, and the class of copula extensions of S_n converges to the class of copula extensions of S. The latter is equivalent to the following two conditions: if a sequence of copulas C_{n_k} extending S_{n_k} converges to a copula C as n_k → ∞, then C must be a copula extension of S; and for any copula C extending S, there must be a sequence of copulas C_n extending S_n such that C_n converges to C.

(3) S^D_n converges weakly to S^D, that is, S^D_n(x⃗) → S^D(x⃗) at every continuity point x⃗ of S^D or, equivalently, l(S_n, S) → 0.

Henceforth, we will write S_n → S if a sequence of subcopulas S_n converges to a subcopula S in the sense of the above theorem.
Empirical subcopulas in discrete models
In the previous section, we focused on the convergence of subcopulas, which lays the groundwork for subcopula estimation. In this section, we discuss empirical subcopulas in discrete models and show that it is possible to construct an estimator with a specific form of domain according to the marginal distributions. First, recall the definition of empirical distribution functions. Let X⃗_1, ..., X⃗_n be an i.i.d. sample from a k-dimensional distribution function H. Then the empirical distribution function H_n associated with this sample is defined by H_n(x⃗) = (1/n) ∑_{i=1}^n 1(X⃗_i ≤ x⃗), where the inequality is understood componentwise.
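The following sketch implements this plug-in construction for a small sample: it computes the empirical marginal ranges (the estimated subcopula domain) and evaluates the empirical joint distribution function on the resulting grid. The function and variable names are ours, not the paper's, and no attempt is made here to force the domain into a parametric (e.g. Binomial) form, which is the refinement the paper proposes.

```python
import numpy as np

def empirical_subcopula(sample):
    """Empirical subcopula of an (n, k) array of observations.

    Returns the axes A_i = {0} ∪ Range(H_{i,n}) of the estimated domain
    and an array S with S[j_1, ..., j_k] equal to H_n evaluated at the
    corresponding marginal quantiles."""
    X = np.asarray(sample, dtype=float)
    n, k = X.shape
    axes, cutpoints = [], []
    for i in range(k):
        support = np.unique(X[:, i])          # observed values of X_i
        sorted_col = np.sort(X[:, i])
        # Empirical marginal CDF value at each support point.
        cdf = np.searchsorted(sorted_col, support, side="right") / n
        axes.append(np.concatenate(([0.0], cdf)))
        cutpoints.append(np.concatenate(([-np.inf], support)))
    shape = tuple(len(a) for a in axes)
    S = np.zeros(shape)
    # Evaluate the empirical joint CDF at every grid point of the domain.
    for idx in np.ndindex(shape):
        thresholds = np.array([cutpoints[i][idx[i]] for i in range(k)])
        S[idx] = np.mean(np.all(X <= thresholds, axis=1))
    return axes, S

rng = np.random.default_rng(1)
data = rng.binomial(3, 0.4, size=(500, 2))    # two discrete marginals
axes, S = empirical_subcopula(data)
```

Note that the axes returned here are empirical CDF values, which only approximate the Binomial grid of the earlier snippet; an estimator with the true (Binomial-form) domain would replace these axes with their parametric counterparts evaluated at estimated parameters, which is the idea behind the construction discussed in this work.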
Conclusion and discussion
In this work, we discussed subcopula estimation in discrete models. We summarized recently discovered results regarding the convergence of subcopulas, focusing on the weak convergence of distribution functions. It is known that empirical subcopulas converge weakly to the true subcopula. We also constructed another subcopula estimator for the case where the marginal distributions of the random vectors are known, for example, when each random variable has a Binomial distribution. While empirical subcopulas in this case might not correspond to subcopulas that have Binomial distributions as their marginals, this new subcopula estimator does possess that property. We also argue that it is better to use subcopulas instead of copulas in discrete models. A few works share our opinion; see, for example, Faugeras (2017), Geenens (2020), and Trivedi and Zimmer (2017). See also Nikoloulopoulos (2013) for problems that might arise when using copulas to model discrete data.
"year": 2021,
"sha1": "21cfe0f477e8905dc89e7de9101d43906626c559",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/AJEB-04-2021-0052/full/pdf?title=on-subcopula-estimation-for-discrete-models",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c9f2716ecba4d56bd819328f89329e075a05a8ae",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
5 A Service Hardware Application Case Fiducia
The various perspectives on how requirements for a process-developing IT application are described have led to the long-standing challenge of business-IT alignment. For BPM (Business Process Management) modeling at Fiducia, for many years, employees in the business departments have been able to compile large, complex processes by involving experts. Such models are not focused on the point of view of each individual employee involved but on the process as a whole. Consequently, the specification is coarse-grained to such an extent that an identification of the employees with a model, and with how they effectively work along a process, cannot be achieved. Moreover, the superficial examination does not allow deriving guidelines for implementing an IT solution based on coarse-grained models. Introducing S-BPM brings the point of view of the individual employee to the center of describing processes. It thereby enables describing how processes actually run from his/her point of view. We have used this capability to empower the employees of the business departments to carry out this description task (modeling) themselves. Based on a sample project, which also includes integrating SAP as a database, I shall describe the difference between the "traditional" approaches to BPM and S-BPM. Since both approaches were used in this project, the benefits can be described precisely. The savings in euros and time (earlier availability) represent an important factor here, besides the quality of the description. By considering the details of the process, the quality of the description is significantly increased, and, last but not least, so is the identification with their models of the employees in the business departments, who finally were able to create applications by themselves.
Background
As early as the mid-1980s, I had been considering possibilities for enabling employees of the departments to run data-processing operations by themselves. In those days this was known as end-user computing or fourth-generation language processing. The possibilities for letting employees from the departments access information were still very limited at that time.
Despite this, the need of the departments to generate information, largely according to their subjective viewpoint and regardless of IT, was already very large at that time.
History of PCs
With the more widespread introduction of PCs into companies in the early 1990s the departments became increasingly independent of 'centralized' IT and thus started developing their own, 'shadow' IT departments. Tools such as Excel, Access and even Lotus Notes gave department users new flexibility to perform their individual processes and information gathering with IT support. In this way, an IT structure developed that was local to and controlled by the department.
History of the 'Mainframe Mind Set'
The IT departments in companies were still acting largely within the culture that had evolved with application development for mainframes since the early 1960s. In the early phase of the development process, quality was assured by long specification phases. This was necessary since changes to the programming languages used could only be made with difficulty, due to the complexity of the code. The departments were used to the fact that implementing IT applications costs a lot of time and money and that, therefore, demands for new IT applications, or modifications of existing applications, could not be implemented quickly or spontaneously.
The Change Brought by Globalization
The effects of globalization and the resultant changes in the market have led to a demand for continuously shorter and more frequent product development cycles. The interconnectedness brought by the Internet provides customers with ever more information to let them compare the offers of competing suppliers, which significantly affects product development in the companies. The agility required as a result directly influences the processes and the associated IT systems.
There is often a need for changing the original concept as early as in the specification phase of an IT application development process. After the subsequent development phase, and before the 'going-live' deadline, there are always a number of requests from the involved department for changes to the 'finished' application. Consequently, the expectations of the departments concerning a new IT application are not met by the time the application is launched.
Effects in the Companies
In the companies as well, the agility of the market is resulting in changes of methods with respect to collaborative work. Collaboration (close networking) among the people involved in the process results in better adaptation to rapidly changing challenges. This is also causing changes in roles and creating new workflows that then must be modified quickly. This much narrower, frequently changing interplay increases the complexity of the overall workflows IT is required to support and makes them harder to trace. Each department knows 'its' roles and workflows. In the past it was the role of the IT department to bring together these different viewpoints, ensuring a well-targeted application landscape for the company based on an economically viable IT architecture. The IT department was thus the link between all IT applications in the company.
Departmental Expectations Are Changing
Due to the increasing number of new opportunities available to the departments and their 'shadow' IT, the use of apps and actual cloud solutions, the expectations of the departments with respect to the IT solutions in the company are changing. Now they want to exert influence on the 'development' of IT applications: quickly, flexibly, and without the 'hurdles' that IT development requires when delivering high-quality applications.
The now familiar way of working with IT applications, resulting from the spread of apps, creates the expectation that company IT applications will also allow greater ease of use (usability) through a reduction in complexity from the user's point of view.
Forecasts by analysts that IT budgets will in future be shifted increasingly to the individual departments underline the trend that sees departments increasingly seeking opportunities for IT support for their processes independently of their own IT experts.
At the same time, due to the increasing complexity and more frequent changes in workflows and roles, it is becoming increasingly difficult for IT to function effectively as a central coordinator for the different roles/views (developing an authorization concept). The expert-driven consideration of which IT applications are required in which situation (and which are not) is becoming ever more difficult to sustain.
An Ideal Scenario
In an ideal situation the experts of each department would be able to create and modify IT solutions directly in their own 'language'. In this case the description of which workflows are performed with what information by each individual (subject) from his/her own viewpoint would be most suitable, since it allows describing exactly what an employee of a department actually understands. He or she is the expert on what can be done with what information. If it were possible for him/her to describe this simply and create an IT application out of it, the solution would be to have IT applications created (for different types of application) by the department directly. This would have to be achieved within the technical framework conditions of the IT department, which is also responsible for providing the information.
At the same time, IT development would be relieved of the many and growing demands by the departments for applications, driven by the need for agility. The backlog of requests that is caused by capacity limits in the application development section would be significantly reduced. The IT department could then concentrate on important aspects such as standardizing the IT architecture and, above all, on ensuring data availability. The IT department would thus gain strength as a business enabler, while the department would be used as an 'extended workbench' for application development.
5.2 Needs at Fiducia
The Introduction of S-BPM
Over the last 15 years Fiducia has documented its business processes using a BPM (business process management) modelling tool (ADONIS by BOC). The modelers trained in company organization for this purpose have adapted their modelling environment so as to be able to use it highly efficiently. However, since this way of modelling operates on a very abstract level, it turned out not to be suitable for the level of detail required when modelling actual IT-supported workflows. Hence, I decided to introduce an entirely different and unique methodological approach: Subject-orientated Business Process Management (S-BPM), based on the Metasonic S-BPM suite. The aim of this shift was to be able to describe business processes from the viewpoint of the 'subjects', i.e., the roles involved in the departments. The level of detail would have to be so precise that each employee could describe all the steps and information required to perform each process. Since the employee performs these processes himself/herself, it is easy for him/her to formulate this knowledge in a descriptive way. Since workflows need only be described from individual perspectives, the description should also be simple. An employee describes what he/she obtains as an input to a process and where it comes from, what actions he/she performs, and what outcomes he/she passes to other subjects (employees, systems, etc.).
Such a description results in a defined process for each subject, created by the role-holder.
The interplay between the individual subject-based models is then described in terms of the communication between these models. Through this separation of the individual processes assigned to each subject, and the description of the interfaces between them, there emerges a modular process system whose components develop independently of one another and in which, as long as nothing changes at the interfaces, each component can also be modified independently.
The Metasonic S-BPM suite can then generate workflows directly from these process models, making the processes testable or even allowing a complete IT application to be generated directly.
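To make this modular structure concrete, here is a minimal sketch in plain Python (the class names and the sample messages are illustrative assumptions of mine, not Metasonic's actual API): each subject owns only its own behavior and inbox, and subjects are coupled solely through the message interface.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    payload: str

@dataclass
class Subject:
    """One role's process, described only from that role's own viewpoint."""
    name: str
    inbox: deque = field(default_factory=deque)

    def send(self, other: "Subject", payload: str) -> None:
        # The message interface is the only coupling between subjects: as long
        # as it stays unchanged, each behavior can be modified independently.
        other.inbox.append(Message(self.name, payload))

employee = Subject("departmental employee")
dispatcher = Subject("dispatcher")
employee.send(dispatcher, "PC fault reported")
print(dispatcher.inbox.popleft().payload)   # -> PC fault reported
```

The point of the sketch is the separation: the employee's internal behavior can be replaced without touching the dispatcher, as long as the exchanged messages keep their form.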
To introduce these new methods along with the tool it was necessary to persuade two groups of staff of the need for this change: the 'experienced process modelers' and the IT specialists.
The Process Modelers
The new method was easy for the young process modelers to accept. They had no resistance to using and learning new methods or procedures. They adapted straight away to the new methodology and quickly realized that it offers many advantages. The specialists in the departments were also able to describe their subjective knowledge of workflows and the information they require for processing. The descriptions were developed in their own 'language' and thus their identification with the outcome was very strong.
Acceptance by the experienced modelers was different, however. They did not adapt to the new method at first. They expected that the subject-oriented business process management method would not be capable of describing complex processes. They felt this way in particular because the modelling was done with only five modelling symbols. The greatest hurdle, therefore, was to gain the acceptance of these modelling experts. The first attempts to demonstrate that the new method would not be usable focused on very large, complex processes. Again and again, workshops were held with the objective of implementing complex processes.
Yet by considering these complex processes from separate viewpoints, namely from each individual subject involved, even the most complex process lost its perceived complexity. The scope of each process was of course retained, yet the individual process steps, isolated for each subject, were not at all complex. Linking these individual process elements via the communications interfaces brought the whole process back together. It was thus possible to represent any process, however large or complex, simply and clearly in terms of each subject.
Once this procedure had gained acceptance, another point of resistance was encountered. Having separated processes and thereby simplified the understanding of what was still a large and complex process, there was now the demand to view the entire process in a single overarching representation. Using S-BPM this is naturally possible as well. The individual subjects addressed on the communication level give a complete overview. The interaction between the subjects becomes clear. In this way it is possible to fully understand the entire process. What the individual subjects are then required to do with the incoming and also the outgoing information is described by the behavior model of each subject separately. For a complete overview of the process, however, this representation is not necessary. The 'subject jigsaw pieces' and their communication via interfaces create an overall picture, while the links between these jigsaw pieces provide a detailed communication description.

Now that S-BPM and the Metasonic suite had been introduced not merely as a modelling tool, their benefits for generating applications became evident. From the description using the S-BPM method in Metasonic, the process workflow is directly generated as an executable IT application. This ultimately demanded a high level of precision in the description, but in turn resulted in much higher quality. Finally, the model does not have the character of something that is used once and stowed in a drawer; on the contrary, it forms the direct programming for the future IT application.
The IT Experts
Recognizing this automated execution is exactly what provoked the resistance of the IT experts. The prospect of generating IT applications from subject-orientated business process management representations initially created disbelief, and then fears of having to surrender competence. The application developers sensed a threat that the departments would chip away at their sovereignty as experts with entrenched traditions. The current handling of the technical IT architecture targeted several aspects: scalability to the appropriate number of users, security, performance, interfaces to the operational databases, and much more. Once all these points had been tested to the highest satisfaction and shown to meet the demands of Fiducia, with its 4000 workstations, these technical reservations could be dropped.
The notion that IT applications should be developed solely by the IT department persisted. The idea of enabling the departments to create small, simple workflow applications by themselves was perceived as a loss of competence for the IT department. Yet the IT department had rather accepted being the bottleneck: the development section was simply unable to implement many of these demands due to capacity constraints. The application developers could in fact give highest priority to the ongoing development of the core applications; normally, this by itself results in a very good level of utilization. Flexible IT applications demanded by the departments at short notice, and often not intelligibly described, can on the other hand be implemented only when conditions allow. This results in either frequent refusals or realization dates that are far too late to be of use to the departments.
This fact, which compromised the image of the IT department as a business enabler, was ignored, as was the increasing orientation of the specialist departments towards their 'own' solutions without involving the IT department. This ignorance was precisely one of the motives for introducing a change. Introducing the S-BPM method via the Metasonic S-BPM suite was intended to offer the departments a flexible, agile solution provided by IT. The interfaces with the operational systems, the data and the infrastructure would be delivered by IT; the department itself, meanwhile, would provide the business logic in its own language. The two sides would meet in the Metasonic S-BPM suite to create complete applications. The IT department retains 'control' over the applications created on a uniform IT platform, while the department can flexibly implement its requirements as IT applications.
That the need for this change existed could be demonstrated by the over 7000 Notes databases that had steadily multiplied; the IT department was no longer the owner of these applications, while the departments had also lost control over them. It was therefore urgently necessary to make a solution supported by the IT department available to the departments.
A Sample Project: Managed Service Hardware (IT-Supported Process Introduction)
The hardware for over 4000 workstations at Fiducia was procured centrally for the 17 departments. This hardware was supplied centrally by the internal IT department (company organization), which was also responsible for ensuring that these workstation devices were working (incident process). Fiducia decided to bundle the procurement and the allocation process within the in-house IT department. One of the company's subsidiaries had already provided this service for a major client. Hence, the internal IT department commissioned the subsidiary as provider to implement the managed service hardware. The result was a project, 'The Introduction of Managed Service Hardware', that will be described in this case study, in particular in conjunction with the description of the benefits of S-BPM.
The Need to Introduce Managed Service Hardware
Standard practice was for each department to define its own budget for the PC hardware needed at its workstations. While the PC hardware was normally purchased in accordance with the standard company procedures, the choice of what hardware was purchased or replaced, and at what time, remained the responsibility of the department. This led to three problem areas:

1. PCs that were technically outdated were being retained; it was the departments that decided when a PC should be replaced.
2. New PCs were always purchased for new employees, despite usable machines being freed up by departing employees of other departments.
3. It was not always possible to verify which PC was being used where.
Managed Service Hardware as a Solution
'Managed service hardware' was intended to supply PCs to the departments on a month-by-month billing basis. Procurement of the PC hardware would be done centrally and up-to-date equipment would be supplied to the departments from a storage facility. The decision to replace a PC would be the responsibility of the internal IT department. The device types were to correspond to the employee role. A high degree of standardization means diversity is restricted to seven groups (roles), including laptops and tablet devices. Software is also bundled on the basis of role. In case of fault occurrence, an appropriate replacement (PC) with the proper software could then be supplied, and the faulty unit could be taken for repair. Fault analysis would be carried out in the repair center subsequently, which would significantly reduce the out-of-service time of PCs due to faults.
Project Start: Initial Information-Gathering Process
In order to introduce this service, initial discussions were started with the subsidiary. An already established process at one of this subsidiary's clients, which has a similar number of workstations, was selected to form the basis for the new process. The analysis began by using the descriptions available from the client's project on how the service is provided for that client.
Since it seemed possible, at least, to base the required IT solution on the similar business logic used for an external client, the possibility of letting the department develop it with S-BPM and Metasonic was not considered. Instead, the project was carried out in the 'classic' manner, with some BPM modelling (which was no longer to be used) and the implementation effort carried by application developers, in this case SAP customizing experts.
These descriptions, including how they could be adapted to Fiducia, were discussed in a series of workshops. The following five process elements (abbreviated to IMACR) were examined: Install, Move, Add, Change, and Remove. All staff nominated as responsible for the workshops contributed their experiences to the corresponding process. The responsible roles nominated were:

• Persons responsible for interfacing with the process to be outsourced
• Persons responsible for hardware specifications
• Responsible persons representing the subsidiary
• The dispatcher (task distributor)

Since the process had already been used for a client of the subsidiary, and the process was thus known in detail, these four roles were identified as those primarily involved in the process.
However, as it became clear later on, many more roles were relevant for the process. They had not become evident in the course of modelling, as in the beginning the focus was not on the subjects involved but rather on the workflow of each partial process. Most of the relevant information was thus discussed at a highly abstract level, in terms of workflows, their sequences and the interfaces.
Framework Conditions
To allow information about the status of the PCs to be promptly updated by the service technicians, it was necessary that the technicians collect data directly on site and send them using smartphones. This would be achieved via an IT interface. Since the inventory data is managed in the SAP system as assets, a solution within the SAP system was assessed to be the most suitable one. Here, each asset would be stored together with its status, so that the current status of each PC would be known. The following parameters were defined as status properties:

• In use at a workstation
• In storage
• Undergoing repair
• Scrapped

To make this process more transparent it was modelled in the 'classic' manner (BPM) using the Adonis modelling tool. The modelling was done by internal modelling experts together with the departmental role-holders. The latter were asked in focus groups how the workflows run according to their view, and their responses were transferred to a BPM model (Adonis).
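As an aside, the four status properties above might be captured in a data model along the following lines. This is a hypothetical Python sketch (names such as PCAsset and set_status are my own); the actual solution was an SAP customization, not Python.

```python
from dataclasses import dataclass
from enum import Enum

class AssetStatus(Enum):
    IN_USE = "in use at a workstation"
    IN_STORAGE = "in storage"
    IN_REPAIR = "undergoing repair"
    SCRAPPED = "scrapped"

@dataclass
class PCAsset:
    asset_id: str
    status: AssetStatus

    def set_status(self, new_status: AssetStatus) -> None:
        # In the real solution this update would be written directly to the
        # SAP asset record, keeping SAP the single leading system.
        self.status = new_status

pc = PCAsset("PC-0042", AssetStatus.IN_USE)
pc.set_status(AssetStatus.IN_REPAIR)
print(pc.status.value)   # -> undergoing repair
```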
It soon became apparent that transferring the individual steps of the individual roles from the department into a model (to be created for each of the five process sections) was getting increasingly difficult. Although, e.g., an 'install' process is entirely straightforward at first sight, the different viewpoints of the different roles make the modelling of each process step increasingly difficult to follow for the persons responsible in the respective roles. They do not see their individual roles as central, but rather the workflows that have been documented across all roles.
The experienced process modelers nevertheless succeeded in modelling a process that was inherently consistent. They achieved their objective, and validation was obtained at this very abstract level. What actually takes place in detail in the process, however, remains open at this modelling level. Capturing it requires observing the actual role behavior, thus bringing the role to the center. As long as the aim is not to create executable IT applications with BPM, much information can be dispensed with that can in fact be of great importance if one models the process as it actually occurs.
The role-holders, who themselves had no experience in process modelling, were only able to test the process model to a limited extent. They could identify with it only partly, since the modelling had been performed by 'experts'. It was thus not 'their' process model. In the dialogue between the process modelling experts and the specialist role-holders from the departments, no common level of understanding could be achieved. While the modelers were constantly focusing on the overall process, the role-holders had in mind their individual areas of responsibility in detail. This detail was, however, not taken up by the modelers, who necessarily held on to their overall view of the process.
After seven workshops, a comprehensive process model was established for each of the five partial processes (IMACR). These process models, together with the descriptions of the scope of each individual task (SLA), were adopted as the basis for implementing the managed service hardware scheme, including its technical realization.
Since the task descriptions related to the subsidiary's client company, they only needed to be adapted to the present situation. An actual outlay of over 50 person-days had been necessary up to that point for modelling the five partial processes. All persons involved were satisfied with the outcome, and work started on creating a specification for the technical support. The specification was based on the outcomes of the modelling process, the service-level agreements with their requirements, and the necessary extensions of the assets in the SAP system. Initially, a solution was drafted that would gather the data using a Lotus Notes-based workflow; this data would then be used as the basis for updating the SAP data stock once a day.
This mechanism, however, had to be rejected. Out of a total of some 4000 PCs, roughly 20 are affected by an IMACR action each day. To allow the service technicians and the other roles to find the current state of affairs in the database in a timely fashion, the changes needed to be made directly in the SAP database as the leading system.
The solution scenario was now defined in such a way that all participants were provided with dialogues within the SAP system, supporting them with the necessary information when searching for and/or updating information. The necessary process logic could be implemented accordingly with SAP tools (service manager), enabling the accurate execution of the required workflows. The SAP dialogues could be implemented on the intranet platform, as had been done previously for other SAP solutions, and could also be invoked from there. The interface to the smartphones could be enabled by purchasing new software.
First Rough Estimate: 150 Person-Days
Once the specification was created, an initial rough estimate was made for implementing the concept. An optimistic scenario projected at least 150 person-days for customizing SAP and for modifying the SAP database.
Weaknesses Recognized
After a first inspection of the specification and its estimation for implementation, various issues became evident.
Lack of Detail
The actual tasks required for implementing an IT application, which were known to the role-holders, had not been modelled. The role-holders were not aware of this; after all, they knew the details, and they were already overwhelmed by the representation of the total model at the level of detail used in the process model. Despite the lack of required detail, the model was too complex for them, since it was not their viewpoint that had been modelled, but rather an overall system perspective.
Redundancies in Partial Processes
Due to the focus on the partial processes (Install, Move, Add, Change and Remove) in the development of the process model, the employees who are actually involved in the process were only 'assigned' to these partial processes. They were not central to the process design. Hence, redundancies appeared in the individual partial process steps. Considered from the viewpoint of the subject (role), such redundancies would have been evident, since the role-holders would have defined their tasks from their own viewpoint, their area of responsibility. Yet, in this way, each partial process was described independently of the other processes. In addition, 'merely assigning' the employees did not make evident which further roles were seen and needed by these employees in their partial processes. This knowledge could not be collected by an exclusive observation of the overall process. In other words, the process was put in the foreground rather than the workflows of the individual roles.
Modelling Outcomes Are not Sufficiently Detailed
Another problem concerned the quality of the modelling outcomes. The departmental specialists were mainly knowledgeable in their own areas of responsibility, being part of a large overall system. By looking at the overall process in the course of the modelling, their awareness of its complexity increased. The discussion about workflows involving many other roles was perceived as overwhelming by the 'role specialists'. They therefore kept giving a coarse-grained representation of their partial processes, trying not to increase the perceived complexity.
Low Level of Identification with the Outcome
As the departmental staff members are not skilled modelers, they have to accept what the modelling experts develop. Similarly, the modelling experts are not specialists in the non-IT topics and sometimes struggle to understand what they are modelling. Accordingly, two cultures (the process modelers and the departmental specialist roles), each with different objectives, a different understanding, and a different language, came together in a dialogue that demonstrates the typical difficulties of translation between the business areas of a company and IT. For the departmental employees, the outcome of the modelling process was not 'their' solution, not one they had created by themselves.
Lack of Confidence in Making Mistakes
The role-holders from the departments are still not used to making statements about a process at a higher level of abstraction than their own viewpoint. They know from experience that if such statements are made, they will have to be interpreted for implementation; and, in case a statement is not absolutely correct, a change request will have to be made, which (a) drives up costs, (b) delays the planned implementation date, and (c) makes the collaboration of the department with the IT section even more difficult.
Overall, due to lack of detail, needless redundancies and ambiguities stemming from different viewpoints, the quality of process models is too poor to obtain practically useable inputs for implementing them.
Project Restart from Scratch
With a minimum of 150 person-days planned for implementation, modelling of insufficient quality and, finally, too many open questions about how to implement the processes, I decided to rethink the project from the beginning. The new approach was based on the already introduced subject-orientated business process management (S-BPM), although it was not popular with the 'experienced' process modelers.
Workshops with the Role-Holders
Together with a new team from the company organization, the persons in the responsible roles for the 'managed service hardware' process were invited to a relaunch workshop. This time, with S-BPM, the role-holders were the focal point. In the first workshop the departmental specialists were informed about the 'methodology' of how their knowledge would be collected and used to develop an IT application. Hereby, three different actions were represented in different colors.
• Green for 'I'm receiving something'
• Yellow for 'I'm doing something with it'
• Red for 'I'm delivering an outcome'
Using this simple structure, discussions began about the 'Install' process. The content of the different tasks and the framing conditions were already known. What needed to be questioned, just as in the earlier project, were solely the necessary workflows and the roles involved.
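The three colored actions above amount to a tiny classification of behavior states. The following Python sketch (the step texts are illustrative, not taken from the actual Fiducia model) shows how one role-holder's card sequence for an 'Install' workflow might look in code:

```python
from enum import Enum

class Act(Enum):
    RECEIVE = "green"    # "I'm receiving something"
    FUNCTION = "yellow"  # "I'm doing something with it"
    SEND = "red"         # "I'm delivering an outcome"

# One role-holder's view of an 'Install' workflow as a card sequence:
install = [
    (Act.RECEIVE, "order from the dispatcher"),
    (Act.FUNCTION, "prepare the PC with role-specific software"),
    (Act.SEND, "ready notice to the departmental employee"),
]

for act, step in install:
    print(f"{act.value:>6} ({act.name}): {step}")
```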
Three different media were provided to enable the employees to 'capture' this information.
• Direct capture on the PC with an easy-to-use interface in Metasonic.
• Direct modelling on a 'modelling table' that at this time was still at an early stage of development (today this would be by far the best medium in my view).
• The 'flip chart', to which magnetic cards in three colors are attached and can be connected in the sense of an S-BPM model. The model can then be captured directly via the PC interface.
In the project, the latter method was adopted, since no technical hurdles (working on PCs with management) should arise, and the attention would be on the methodology rather than on tools from the beginning.
S-BPM Supports the Departments' Way of Thinking
It became clear from the first workshop that the departmental employees could work with this method while maintaining a strong sense of identity. Using these three questions, each person could describe the workflow known to him/her. The important details were also addressed immediately, in particular what information is required and who else needs to be linked to this element of the workflow. It was thus the world as the individual subjects perceived it that was described. For each involved role (subject), a workflow with the necessary interfaces and content elements could be developed in this way. Shortly after being introduced to the methodology, the role-holders took over the modelling themselves.
At the end of the first workshop the workflow models were entered directly into the Metasonic S-BPM suite. The data capture was complete in barely an hour, and an initial workflow could already be visualized and simulated as a prototype. It became clear very quickly that more roles were required than had originally been defined. They are given in the following for Fiducia and the subsidiary.
Fiducia:
• The departmental employee
• The employee's manager
• The person responsible for the hardware specifications
• The person responsible for the interface to the outsourced process
• The person responsible for the commercial stock

These five roles were represented by two employees.
Subsidiary:
• The responsible person of the subsidiary
• The dispatcher (task distributor)
• The technician
• The head of the repair center
• The head of software loading

These five roles were represented by three employees.
Methodology Can also Be Used by the Department in Connection with a Tool
These role-holders were thus also incorporated into the modelling process. In two further workshops (one day each) all the partial processes of the managed service hardware process were modelled with all participants based on the S-BPM method.
Since the outcomes of this modelling could also be run directly on the PC, it was decided at the second workshop to use the PC directly for the modelling. The hurdle of using a tool had been overcome; the method had gained acceptance. Once a finished workflow emerged from the modelling and could be verified by simulation on the PC, many details requiring clarification surfaced. Since, however, the model could be modified straight away, the departmental specialists continuously gained confidence in bringing their experience and understanding to bear. They could make no mistakes that would be difficult to rectify. They could make changes at any time, and these would take effect straight away.
Full Identification with the Outcome
A further interesting effect could also be observed. Since the subjects (here, the departmental experts) were at center stage and were themselves 'modelled', the handling of special requests also changed. Now the departmental experts themselves had to describe them, rather than passing them as development requests to the application development team without being aware of how much effort was involved. The result of this approach was that functions that were not strictly necessary were left out, while the departmental experts identified entirely with the completed outcome. They had, after all, developed it by themselves. This accounts for a significant potential for savings in development costs, since only the genuinely necessary functionality is developed and no 'frictional loss' occurs between the department with its demands and the IT department with its limited resources. And since changes can often be made 'on the fly' by departmental staff themselves, they are motivated to remain involved with the IT application even after it has been created.
IT Application Could Be Completed at an Early Stage
After three workshops with five attendees each (15 person-days), modelling was completed, and already in an executable version. Now work could begin on the IT application itself. The depth of detail was now sufficient to execute the workflows immediately, including all their content-related requirements. There were therefore no longer any redundancies in the partial processes, as these had become apparent through the subject-centered approach and the prototypical execution. A very important insight was that the subsidiary could not have provided this level of detail for the processes, although they had also been conducted in a similar way for the external client.
However, the attempt to represent the processes using traditional BPM methods, as at the start of the project, did not result in a model representing the actual process in full detail, containing the actual workflows. Despite a very high outlay on modelling with BPM, the outcome did not represent the real-life situation. Using S-BPM, on the other hand, the workflows as actually being performed became evident. It resulted in many significant improvements with respect to standardization and clarification of interfaces, and thus in optimized work practice.
Additionally, the degree of completeness of the description of the workflows increased during these three workshops. For the first time, not only the standard processes, i.e., 'when everything works according to plan', were examined, but also the many exceptions that arise in practice. For the latter there had not been specific descriptions so far. Somehow it had always worked out, however, leading to unnecessary excess costs due to unclear definitions. Now, this excess outlay was no longer necessary.
The role-holders involved in the process still collaborated, yet each described the workflows from his/her own perspective and position, in such detail that the specification and technical concept were already largely complete at this stage. To implement the IT application, data storage in SAP was still required. The SAP system was therefore treated as a subject in its own right when modelling. Here again the same logic was used: what information does SAP receive, what should be processed using that information, and what information should be passed on. In this respect, what a 'subject' represents is of no consequence to the method. This simplification also proved immensely helpful in facilitating the discussion when creating the model and the 'technical' interfaces.
Following this initial gathering phase of the descriptions of the current processes from each subject's viewpoint during the three workshops, the process models were further refined and complemented with additional detail. Since the IT application thus obtained needed to be available to all employees on the intranet, the design of the input dialogues was specified in greater detail. When a managed service request was made, the interface to SAP was implemented to create a ticket automatically, and to adapt the relevant status message in SAP to this asset.
The Metasonic S-BPM suite has its own solution for including the dialogues on smartphones. It was integrated along with the interface to the SAP system. The service technicians can thus use their smartphones to store information about a ticket directly in the database of SAP and can also directly view new orders or changes of orders. The complete implementation of the outcomes (from the three workshops and a few subsequent specialist discussions), i.e., the executable IT applications required about 30 person-days development effort. Compared to the previously (optimistically) estimated 150 person-days this difference represented a significant cost saving.
Another factor was the limited capacity of the SAP customizing personnel: implementation had earlier been planned to take some nine months. Using the S-BPM approach provided by the Metasonic suite, the application was running in production in just two months. Due to this much earlier availability, the benefits of the solution came into effect seven months earlier than planned.
Summary of Experiences Gained in This Project
The former standard procedure, in which process modelers (as the developers of the model) and departmental staff (as the process experts) sit opposite each other and try to map the workflows from their own viewpoints to create an IT application, has significant disadvantages. The modelers are not experts in the departmental fields (non-IT); rather they need to represent in a model what the departmental staff tries to explain to them.
The departmental staff members, meanwhile, have their focus on those parts of the process that they deal with themselves, while the process modelers are concerned with the overall business process or work procedure. The roles involved are thus only assigned to parts of the overall process; they are not the focus in reaching an outcome. Using a different approach, namely the S-BPM method, the departmental specialists have now begun to describe, in their 'own language', the parts of the overall processes that they individually handle. The process modeler is now mainly a moderator who provides support in how the method is used. The departmental staff members soon came to understand the method and are now capable of doing the modelling themselves.
Since there was no longer a media gap between the specialists and the modelers, the quality of the created model was substantially higher. The departmental specialists created the model themselves. No modelling expert was required to interpret what the specialists had told them in order to then integrate this information into a model. Since the outcome was immediately executable, it could be validated straight away, and deficiencies could be quickly spotted and corrected.
The difference between process models created with the traditional BPM method and with S-BPM emerged clearly: it became evident how limited the level of detail is that can actually be portrayed with traditional BPM modelling techniques. Although this can certainly be improved with further expenditure, the underlying deficiencies, due to the focus on the overall process and its workflows rather than on those of the subjects, always remain.
The departmental employees identified themselves fully with the solutions they had produced. They were able to avoid excessive demands on themselves, while fully understanding, in a verifiable way, their workflows and actions, including exceptions in the process. This also holds for the final documentation, as it captures how the workflows are actually used. Any change to the IT application is based on S-BPM models; consequently, the documentation is always up to date. The employees' understanding of the workflows, including exceptions, was deepened, which in turn increased the efficiency of their cooperation and the quality delivered to the customer, owing to the achieved transparency of work procedures.
The standards for database interfaces in the Metasonic S-BPM suite enabled the integration of SAP as data storage system in a simple and comprehensive way. Since the user interfaces for the workflows were generated via the intranet or smartphones directly (without additional programming), the IT expenditure was significantly lower than in the solution originally conceived. The expenditure for the project was significantly below the planned effort for the original approach.
Expenditure with BPM (approx. 260 person-days):

• BPM modelling (approx. 40 person-days)
• Creation of specification and technical concept for implementation in SAP (approx. 50 person-days)
• SAP implementation by the Customizing dept., including testing, documentation and productive release (approx. 150 person-days)
• Acquisition and technical implementation of a smartphone support system (approx. 20 person-days)

Implementation was to be expected in one year.
Expenditure with S-BPM (approx. 70 person-days):
• Modelling with the S-BPM method, including documentation of the IT application (approx. 30 person-days)
• Implementing the interface to the SAP system and adapting the database for the required parameters (approx. 30 person-days)
• Tests and pilot runs (approx. 10 person-days)

Final implementation could be done in 3 months.
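The totals can be checked with a quick tally; the figures below are simply the approximations listed above, and the snippet is a hypothetical illustration rather than part of the project.

```python
bpm = {
    "BPM modelling": 40,
    "specification and technical concept": 50,
    "SAP implementation, testing, documentation, release": 150,
    "smartphone support system": 20,
}
sbpm = {
    "S-BPM modelling incl. documentation": 30,
    "SAP interface and database adaptation": 30,
    "tests and pilot runs": 10,
}

bpm_total, sbpm_total = sum(bpm.values()), sum(sbpm.values())
print(bpm_total, sbpm_total)                    # 260 70
print(f"saving: {bpm_total - sbpm_total} person-days "
      f"(~{1 - sbpm_total / bpm_total:.0%})")   # saving: 190 person-days (~73%)
```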
Outcomes and Recognized Effects of the Actions Taken
The introduction of S-BPM into a company is initially met with various forms of resistance. These vary according to the culture of readiness to change in a given organization. Changes are often perceived as threats. Hence, once a change requires fundamental rethinking, it is necessary first to get those on board who tend to hold onto the old approach. Modelers who have used BPM for years are likely to continue working according to the logic familiar to them; they will regard any change as nothing more than an augmentation or modification of BPM. They cannot (or will not) admit the possibility of a subject-focused approach. It is certainly hard for such groups to accept this new approach when they are reluctant to see the departments' dependency on modelling experts for IT application development undermined. Further developments in traditional BPM, such as BPMN 2.0, are generally easier to accept. Comprising at least 50 notation elements, each with different characteristics, this modelling notation is sufficiently complex to be left in the hands of modelling experts. The benefits of S-BPM in enabling the departments to develop their models themselves can never be achieved with BPMN 2.0.
Application developers do not want, on the one hand, to deal with the departments' demand for small, agile IT applications. They simply have neither the time to meet it, nor do they perceive the importance of such applications to the business areas. On the other hand, they do not want to give up their 'unique selling point' of being the only group capable of creating IT applications; this has, after all, been the case for 50 years. In general, the IT department's confidence that departments can create their own applications is extremely low, and thus it tends to reject new approaches, such as S-BPM, at the beginning.
The departmental employees, the business experts, are also not immediately convinced that this S-BPM method, created specifically for their way of thinking, will provide them with a solution to the IT application bottleneck. The practice of the last 50 years plays a major part here: the interaction between the departments and the IT section is assumed to work only in the way experienced in the past. Nevertheless, the business experts are the group with the highest willingness to engage in implementing new approaches. Due to the pressure of the market to provide new, agile IT applications that can respond quickly to changes in customer expectations, this willingness has increased. This development could be triggered by the need for shorter product development times, better and more flexible services, or a different sales approach.
The former 'shadow IT' in the departments suffers from the fact that it is not supplied with the actual data of the company. Attempts had been made to make all information available to the departments by developing sophisticated data warehouse solutions; yet the workflows and actions had to be supported somehow with mails or Notes databases whenever it was not possible to wait for solutions from the IT department. This shadow IT no longer meets today's demands on IT applications: its functionality is far too limited; it is isolated, as few groups in the department are able to use these tools; and it is impossible, finally, to integrate databases or to link such solutions with an operative core system.
From the viewpoint of continuously increasing compliance requirements, there are now also provisions that cannot be satisfied by shadow IT. Consequently, all three groups (department, process modelers, and IT department) need to be persuaded if S-BPM is to be accepted. Gaining the approval of the departments is relatively straightforward due to the simple development procedure. An early implementation meeting a typical need of one of the departments can help increase the acceptance of the novel methodology. A highlight in the project described was a change request that occurred 15 minutes before the application was due to go live. An employee had another good idea for improving the process at a certain point. When we offered him the possibility of implementing this modification, he could not believe us. Yet we made the change, and the application went live with this modification in place ten minutes later. This was a typical positive multiplier effect.
For process modelers it needs to be clear that they will continue to play an important role in the future, albeit from a different perspective. Lengthy printouts stemming from modelling large processes are no longer acceptable. Rather, by viewing an overall process from separate viewpoints according to the individual subjects, intelligible processes are created. In most cases the departments will be glad of the continuing facilitation by the modeler. After all, even with the S-BPM approach it is possible to define optimal or, indeed, suboptimal workflows. The future role of the modeler will focus on such aspects and lead to optimized process models that support the continuous improvement process through their flexibility for adaptation and their representation close to the perceived reality.
Application developers need to realize that their importance as business enablers will be recognized by the departments only once the changed requirements of the business areas can be satisfied, such as agile IT applications that can be both created and modified quickly. Using S-BPM and the Metasonic S-BPM suite enables such an approach. The application developers and the overall IT department remain the owners of the platform and the interfaces. As such, they are also responsible for the most important element in the entire data-processing operation: information. This new role creates space for large and central application systems that cannot be created by the departments themselves. At the same time, the business areas are supported by the IT department in such a way that they can respond to the quickly changing demands of the market.
Several Benefits Have Been Achieved by Introducing S-BPM
Significantly less expenditure when implementing an S-BPM model created in cooperation with the department reduces the production costs. By separating processes by means of individual subjects, the complexity can be significantly reduced. This in turn considerably simplifies working with the role-holders in the departments, as they understand these 'isolated' viewpoints. By describing the individual communication with the other 'isolated' viewpoints of the other role-holders, the overall process, and thus also the entire IT application, emerges. Achieving such a level of understanding facilitated working with the department when modelling processes. A common 'language', S-BPM, was used for describing and implementing the IT application. This resulted in significantly better quality of the outcome (no media gap), and the acceptance of the created solutions in the departments was considerably higher: they had created the solutions themselves.
Due to the separation into subjects, it was also much easier to make changes within complex processes. When beginning to describe a process, not all details are always at hand, yet with the S-BPM method it is possible to begin straight away. Changes often affect only individual, subject-related solutions; using S-BPM they can then be modified independently of the others. Precision can thus be increased step by step.
Due to the significantly shorter production times for IT applications when using S-BPM, the benefit of a solution can take effect much earlier. Its flexibility allows the need for adaptation arising from productive use to be met far sooner, leading to competitive advantages through application systems that are available earlier and better adapted. Similarly, IT solutions that have not yet been thought through in detail can be made available at a very early stage. The stimuli for optimization that surface when using these applications can be implemented straight away, dispensing with a long analysis phase that attempts to predict such optimizations. Analyzing typical change request procedures after an application has gone live shows that this traditional way of development does not lead to the expected benefits. Obtaining experience directly from practical operations and then implementing work support quickly amounts to a paradigm shift in application development.
Documenting the IT applications and the associated processes and maintaining this documentation in its most up-to-date status, reflecting actual practice, offers a new level of transparency. Documentation no longer needs to be something laboriously assembled after release: it is now a component of the application itself and fully integrated. Information about the actual execution of the process steps, including content and time, is logged and can be automatically generated using a uniform procedure in S-BPM and the Metasonic suite. Such information also forms a significant element of process cost optimization, since only information actually obtained can be used to drive improvements. Often the benefit of such exact logging of process tasks is overlooked in IT applications.
Using a process interface that is uniform across all workflows, the different user interfaces of different IT applications can be aligned. For example, in the case of authorization management for data access, a single IT application was created using S-BPM to manage the various authorization systems that had arisen from the variety of databases and systems and their specific tools. This uniform application provides the employees with a single user interface.
Closing Remarks
In conclusion, I can only stress that S-BPM offers an entirely new approach to defining processes and implementing them directly as IT applications. The underlying development principle is to decompose processes, however complex they are, into the individual subjects involved in the process execution. Apparent complexity is thus broken down, and at the same time the quality of the requirements for an IT application is ensured in such a way that an application can be derived directly from the specification. This decomposition leads to an understanding by the departmental employees of how they can describe a process from their own viewpoint. They are ultimately the experts who are best able to describe their work. The fact that executable IT applications can then be created immediately enables the specialists to verify and change workflows straight away. They are thus enabled to engage actively and to take responsibility for the outcome, while identifying themselves with the results.
Using the standard Metasonic platform provided by the IT department, IT applications automatically generated from the modelling can be put into operation straight away, still under the supervision of the IT department. The interfaces to data and systems are provided centrally by the IT department and can be selected by the departments. Changes in the course of modelling, and even during execution of an application already in use, are often very easy to achieve owing to the isolating subject view. On the basis of my experience, adopting this change in IT application development is a must for agile organizations. S-BPM brings such significant benefits to the IT support of the business areas that considerable savings and, above all, quality improvements can be achieved after completing only a few projects. With a corresponding tool, agility can also be achieved professionally with IT applications.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. | 2018-04-03T04:38:49.313Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "9afbfd2f78ca1d2ae603bc1fffdf45d7d195aba8",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-319-17542-3_5.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "2c822c0c16b3949a2f600249223035906a4d0062",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
17204540 | pes2o/s2orc | v3-fos-license | Mystery in experimental psychology, how to measure aesthetic emotions?
SUMMARY The wealth of human emotional experience is made by aesthetic emotions. There are possibly thousands of aesthetic emotions. They evolved relatively recently compared to basic emotions designated by specific words. Measuring aesthetic emotions, which may not be designated by specific words, is more complicated than measuring basic emotions. Specific difficulties have been discussed as well as tentative approaches to overcoming these difficulties. I challenge the experimental community to develop procedures for measuring aesthetic emotions beyond words.

REFERENCES
Blood, A. J., and Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc. Natl. Acad. Sci. U.S.A. 98, 11818–11823. doi: 10.1073/pnas.191355898
Bonniot-Cabanac, M.-C., Cabanac, M., Fontanari, F., and Perlovsky, L. I. (2012). Instrumentalizing cognitive dissonance emotions. Psychology 3, 1018–1026. doi: 10.4236/psych.2012.312153
Darwin, C. R. (1871). The Descent of Man, and Selection in Relation to Sex.
Some researchers believe that the emotions experienced while listening to music are the same as those described by standard emotional words and mixtures of them. Scherer (2005) maintains that there are specifically musical emotions and that their number is very large; however, he doubts that they could be measured and that such measurements would be useful. Many researchers (Zentner et al., 2008) suggest that there is a tremendous number of aesthetic emotions and develop approaches to measuring them (e.g., tenderness, transcendence, nostalgia). The author of this article (Perlovsky, 2014) suggests specific and fundamental cognitive functions of musical emotions, whose qualities and numbers are beyond possible language descriptions.
The theory of drives and emotions (Grossberg and Levine, 1987) suggests that emotions and related feelings correspond to satisfaction or dissatisfaction of drives and instincts. These measure vital bodily parameters (such as the sugar level in blood), and emotional neural signals convey their satisfactory or unsatisfactory ranges to decision-making parts of the brain. These are "bodily" emotions, of ancient origins, and there are words in every language for describing them. In English there are approximately 150 emotional words (Shaver et al., 1987); between 5 and 14 of these are identified as significantly different by various researchers (Scherer, 2005; Petrov et al., 2012).
The Grossberg-Levine theory has been extended to aesthetic emotions (Perlovsky and McManus, 1991; Perlovsky, 2001, 2007, 2010, 2014). The knowledge instinct has been suggested to drive improvement of mental representations in their correspondence to objects and events in the world (knowledge). In addition to increasing knowledge, the knowledge instinct drives the brain-mind to resolve contradictions between knowledge and bodily instincts, and among various aspects of knowledge. Satisfaction or dissatisfaction of this instinct is experienced as aesthetic emotions. A combinatorially large number of potential contradictions in knowledge predicts a very large number of emotions of cognitive dissonance and musical emotions. Musical emotions have been predicted to help overcome emotional contradictions of cognitive dissonances among elements of knowledge and to accumulate contradictory knowledge (Perlovsky, 2010, 2012a,b, 2014). These predictions have been experimentally confirmed (Perlovsky, 2012, 2013; Perlovsky et al., 2013; Perlovsky, 2014).
Prosodial emotions that we hear in the human voice are usually discussed in their ancient and primitive aspects, which unify us with pre-language animals, such as signals of danger, rage, anger, disgust, and happiness. Less discussed aesthetic emotions of prosody are specifically human emotions motivating us to connect sounds and meanings in speech or, more generally, in language (although emotions of prosody are contained in sounds, we are used to associating them with language). These emotions usually sound below the level of consciousness in everyday unarticulated speech. They constitute the essence of poetry, beginning before the Bible, Homer, or the Koran. Despite the importance of these emotions, they have not been sufficiently studied. Not a single experimental publication addressing these emotions could be found. Among rare studies are publications by Wierzbicka (2009); among other things, she emphasizes that English being de facto the international scientific language may interfere with studying emotions. Prosodial emotions in everyday "unemotional" speech might be least pronounced in English (as a result of the changes in English grammar and sounds over the last 500 years); prosodial emotional functions in English have been taken over by songs more than in other languages (Perlovsky, 2009, 2010, 2013). The current state of experimental study of such a fundamental aspect of human psychology as the emotionality of everyday speech is inadequate. And in general, measuring aesthetic emotions, their number, and the properties of their spaces (clusters, evolution with culture) remains elusive.
DIFFICULTIES OF MEASURING AESTHETIC EMOTIONS
Emotions motivate every human action and intention in the world and in the mind (e.g., Markus and Kitayama, 1991). They are among the most ancient mental mechanisms. Their cognitive-mathematical models are straightforward (Grossberg and Levine, 1987; Perlovsky, 2001). Still, some scientists perceive emotions as more complex than concepts; emotions may sometimes seem almost mysterious. This might be related to the fact that emotions are not completely logical and not always completely conscious. Therefore, I start discussing how to measure emotions with standard, usually conscious, everyday emotions.
A classical approach (Shaver et al., 1987) uses emotional words. Shaver first selected nearly 250 English words with "emotional" content and had a group of participants sub-select words designating emotions. This procedure resulted in approximately 140 emotional words. Then he had another group estimate subjective similarity measures between every pair of words. This produced a 140 × 140 matrix of similarity measures, which was used in a procedure similar to multidimensional scaling (Torgerson, 1952). A somewhat different approach was used by Petrov et al. (2012); instead of subjective similarities, this approach used objective measures of differences among the contexts in which emotional words are used. Results of both studies are similar to many publications identifying relatively few "important emotions" (e.g., see Plutchik, 1962; Scherer, 2005); in Petrov et al. (2012) the 5 largest eigenvectors (combinations of emotions) describe about 25% of the "volume" of the emotional space occupied by 130 emotional words ("vectors"). The main point here is that these methods, based on emotional words and pairwise similarity-distance measures, provide a way to objectively analyze properties of emotional spaces, including their dimensionalities (the number of distinct emotions).
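To make this concrete, the following is a minimal sketch (assuming classical multidimensional scaling on a dissimilarity matrix; the matrix here is randomly generated for illustration, not data from the cited studies) of how the dimensionality of an emotional space can be estimated from pairwise measures:

import numpy as np

# Minimal sketch of the analysis described above (not the cited studies' code).
# Start from a symmetric pairwise dissimilarity matrix among n emotion words
# (random here, for illustration) and apply classical MDS (Torgerson, 1952).
n = 140
rng = np.random.default_rng(0)
d = rng.random((n, n))
d = (d + d.T) / 2
np.fill_diagonal(d, 0.0)

# Double-center the squared dissimilarities to obtain the Gram matrix B.
j = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * j @ (d ** 2) @ j

# Large eigenvalues of B indicate dominant dimensions of the emotional space;
# their number estimates the number of distinct emotions.
eig = np.sort(np.linalg.eigvalsh(b))[::-1]
pos = eig[eig > 0]
print("share of 'volume' explained by the 5 largest dimensions:",
      pos[:5].sum() / pos.sum())

On real similarity data, the eigenvalue spectrum (rather than random noise) would show how many dimensions carry most of the "volume", as in the 25% figure reported by Petrov et al. (2012).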
The first method in the above paragraph, based on subjective similarity, can be directly extended to aesthetic emotions. Let me discuss a few difficulties to be expected. Consider first musical emotions. A first hypothesis to test could be that virtually every musical phrase of every significant composer expresses (or creates) a new distinct emotion. The experiment could consist of measuring subjective similarities or differences among a large number of different musical phrases and then establishing the dimensionality of the resulting space. The following difficulties can be expected. (1) Differences among musical styles and composers (say, Chopin and Eminem) are much stronger and more pronounced than differences among, say, various Chopin phrases; it is likely that fine differences among Chopin phrases could be masked by differences among styles and composers.
(2) Specifics of fine differences among musical emotions could be fleeting: different among participants, different for the same participant at various times depending on his/her psychological state; in other words, unrepeatable. But repeatability is a cornerstone of experimental procedures. Some substitute for "usual" repeatability would have to be invented.
The same method could be tried for aesthetic emotions of cognitive dissonances, prosodial emotions, and aesthetic emotions corresponding to visual arts. In addition to the difficulties discussed, we can anticipate that (3) emotions reported by participants may be different from those intended to be measured. For example, aesthetic emotions of prosody in usual non-articulated speech might be unconsciously mixed up with the much stronger basic emotions in the contents of speech. Similarly, the conceptual contents of a visual piece of art could be much stronger than its emotional contents. Or consider cognitive dissonance between two different dishes; differences in imagined gustatory emotions are likely to mask the target emotions of cognitive dissonance (e.g., see Bonniot-Cabanac et al., 2012).
APPROACHES TO MEASURING AESTHETIC EMOTIONS
Despite the discussed difficulties, I suggest that aesthetic emotions can be measured. Consider again subjective emotional differences among pairs of musical phrases in a large database. Even though gross emotional differences among styles and composers might mask some aspects of fine differences, say between two phrases of Chopin, nevertheless hundreds of thousands of people attentively listen for hours to the music of Chopin, or Schubert, or Bach, or Beethoven without losing attention, and music listeners report that the main interest and attraction of music is emotional experience (Zentner et al., 2008). This by itself is tentative evidence for the hypothesis that every musical phrase brings a new emotion.
It might follow that, to fine-tune experimental procedures for measuring subjective differences among musical emotions, experimental setups could concentrate on fine emotional differences and exclude gross differences; in other words, they could explore similar music sets, e.g., a single composer or even a single piece of music. This will stimulate listeners to concentrate on fine differences and not get distracted by gross differences in styles, genres, instruments, large orchestra vs. solo, etc. After establishing the dimensionalities of emotional spaces of individual musical pieces, the next step could concentrate on an individual composer, and then gradually explore the emotional spaces of various composers, styles, genres, etc. In parallel, experimental and mathematical techniques could be developed to explore conjunctions of different emotional spaces.
Aesthetic emotions could differ depending not only on stimuli but also on the individual psychological state of the perceiving individual. Therefore, averaging emotional differences over experimental participants may not be appropriate. Possibly, measures obtained at different sessions with the same participants should not be averaged either. Diversity of individual perceptions on different occasions might reflect valid emotional differences.
Objective confirmations of the results might be found in similarities of the properties of emotional spaces (such as dimensionalities, areas, and volumes of the emotional spaces of individual music pieces, composers, etc.). As this kind of experimental data becomes available, appropriate measures of statistical significance will be developed.
As an alternative to subjective measures of emotional diversity, experimental procedures might concentrate on comparing musical texts. This approach is somewhat similar to measuring properties of emotional spaces using contexts in Petrov et al. (2012). Using chordal harmony notation might be preferable for this purpose.
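As a toy illustration of comparing musical texts in chordal notation (a hypothetical sketch; the chord sequences and the choice of distance are invented, not taken from this article):

from difflib import SequenceMatcher

# Toy comparison of two phrases written as chord-symbol sequences;
# both sequences are invented for illustration.
phrase_a = ["Cm", "G7", "Cm", "Ab", "Eb", "G7"]
phrase_b = ["Cm", "G7", "Fm", "Ab", "Bb7", "Eb"]

# Similarity ratio over chord tokens; 1.0 would mean identical sequences.
sim = SequenceMatcher(None, phrase_a, phrase_b).ratio()
print(f"chordal similarity = {sim:.2f}")

A matrix of such pairwise similarities over many phrases could then be analyzed exactly like the word-similarity matrices discussed earlier.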
Another approach to measuring aesthetic emotions can be based on brain imaging (Blood and Zatorre, 2001;Schmidt and Trainor, 2001;Koelsch et al., 2006;Wilkins et al., 2012). Can we identify brain image "signatures" corresponding to different aesthetic emotions? This experimental approach can be helped by recent neural models suggesting mechanisms of aesthetic emotions and brain regions involved (Levine and Perlovsky, 2010;Levine, 2012) as well as by discussions of brain networks involving emotions and cognition (Pessoa, 2014).
SUMMARY
The wealth of human emotional experience is made by aesthetic emotions. There are possibly thousands of aesthetic emotions. They evolved relatively recently compared to basic emotions designated by specific words. Measuring aesthetic emotions, which may not be designated by specific words, is more complicated than measuring basic emotions. Specific difficulties have been discussed as well as tentative approaches to overcoming these difficulties. I challenge the experimental community to develop procedures for measuring aesthetic emotions beyond words. | 2016-06-18T01:49:12.954Z | 2014-09-10T00:00:00.000 | {
"year": 2014,
"sha1": "17a1886ed1528d78bf41a4c17cb54707dc3ba651",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2014.01006/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "17a1886ed1528d78bf41a4c17cb54707dc3ba651",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
19878924 | pes2o/s2orc | v3-fos-license | New Single Input Multiple Output Type Current Mode Biquad Filter Using OTAs
This paper presents a new current-mode (CM) single-input multiple-output (SIMO)-type biquad using two multiple-output OTAs and one current follower as active devices and having two grounded capacitors. This SIMO-type circuit realizes all five filter functions, namely low pass, band pass, high pass, band reject and all pass, simultaneously. The circuit has a unity-gain transfer function for all five types of filters. The circuit enjoys electronic tunability of the angular frequency and the bandwidth. The 0.18 μm TSMC technology process parameters have been utilized to test and verify the performance characteristics of the circuit using PSPICE. Sensitivity analysis, transient response and calculations of total harmonic distortion have also been shown.
Introduction
OTA-C structures are suitable for realizing electronically tunable continuous-time filters in a variety of technologies such as bipolar, CMOS and Bi-CMOS, and are therefore widely used for designing voltage-mode (VM) as well as current-mode (CM) filters. Although a number of CM OTA-C biquads are reported in the earlier literature [1]-[22], those of [1]-[6] are of multiple-input multiple-output type, and those of [7]-[9] of multiple-input single-output type. Thus, only the circuits of [10]-[22] realize single-input multiple-output (SIMO)-type CM biquad filters, with which this paper is concerned. SIMO-type biquads using five active devices and two grounded capacitors (GCs) are given in [15], [18] and [20], and those using four active devices and two GCs in [12]-[14], [16], [17] and [22]. In [10], a SIMO-type biquad is presented using three active devices and one grounded and one floating capacitor, whereas in [19] and [21] SIMO-type biquads are made using three active devices, two GCs and one additional passive element. In [25], a SIMO-type biquad is presented using two active devices and two grounded capacitors, but in this circuit the input current is not applied at a low-impedance terminal. The next section proposes a new SIMO-type CM biquad using three active devices and two GCs with the following desirable features: 1) realizability of all five standard filter functions, namely low pass (LP), band pass (BP), high pass (HP), band reject (BR) and all pass (AP); 2) realizability of all five functions without requiring any design constraints/matching conditions; 3) availability of explicit current outputs at high-output-impedance nodes and of the input at a low-input-impedance node; 4) independent tunability of ω0 and bandwidth (BW) with unity gain; 5) employment of both grounded capacitors; and 6) use of a small number of active building blocks.
Proposed Circuit
The Operational Transconductance Amplifier (OTA) is a widely used and highly significant active building block. It is an attractive active device because it offers electronic tunability of filter parameters. The circuit diagram of the OTA is shown in Figure 2(a). The characteristic equations of the OTA are given as (1)-(2), and its MOS implementation is shown in Figure 2(b), as shown in [24].
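As a sketch of these characteristic relations (the standard textbook form, assumed here to correspond to Equations (1)-(2); the square-root bias dependence applies to MOS differential pairs operating in saturation):

Iout = gm·(V+ − V−)  and  gm ∝ √IB,

so that gm, and with it the filter parameters, can be tuned electronically through the bias current IB.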
The proposed new circuit configuration is shown in Figure 1.

A straightforward analysis reveals the following transfer functions for the configuration of Figure 1, written here in the standard unity-gain biquad form implied by the circuit's stated properties (unity gain for all five responses, bandwidth set by gm1, center frequency controlled by gm2):

LPF: I_LP/I_in = ω0² / D(s) (3)
BPF: I_BP/I_in = (ω0/Q0)·s / D(s) (4)
BRF: I_BR/I_in = (s² + ω0²) / D(s) (5)
HPF: I_HP/I_in = s² / D(s) (6)
APF: I_AP/I_in = (s² − (ω0/Q0)·s + ω0²) / D(s) (7)

where D(s) = s² + (ω0/Q0)·s + ω0². The various parameters of the realized filters are given by:

ω0 = √(gm1·gm2/(C1·C2)), BW = ω0/Q0 = gm1/C1, Q0 = √(gm2·C1/(gm1·C2)) (8)

Note that from Equation (8) it is clear that the bandwidth can be fixed by gm1 and then the center frequency can be controlled by gm2.

Sensitivity analysis for this configuration shows that S(ω0; gm1) = S(ω0; gm2) = 1/2, S(ω0; C1) = S(ω0; C2) = −1/2, S(Q0; gm2) = S(Q0; C1) = 1/2, and S(Q0; gm1) = S(Q0; C2) = −1/2, which are all no more than 0.5 in magnitude.
Simulation Results
The workability of the proposed new circuit has been verified by SPICE simulations using the CMOS OTA shown in Figure 2 (as shown in [24]) and the CMOS current follower shown in Figure 3 (as shown in [23]).
The aspect ratios for the various MOSFETs employed in the differential-input-dual-output (DIDO)-type OTAs are shown in Table 1, and the supply voltages used were VDD = 1.0 V and VSS = 1.0 V; the aspect ratio for the CMOS current follower is shown in Table 2. To achieve the SIMO-type filters with f0 = 1 MHz and a quality factor of Q0 = 0.73, the capacitor values were selected as C1 = C2 = 9 pF and the transconductance (gm) parameters were gm1 = 80.6 µA/V and gm2 = 39.2 µA/V. All five filter responses are shown in Figure 4. The continuous line in Figure 4(a) shows the simulated outputs, whereas the dashed line represents the ideal (theoretical) output of the derived filter configuration. For the AP response, the magnitude as well as the phase response is shown in Figure 4(b). One can see in Figure 4(a) that the simulated and ideal responses are in good agreement with each other.
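As a quick numerical cross-check (a hedged sketch, not part of the paper, assuming the design equations reconstructed above), the reported component values reproduce the stated f0 and Q0 to within a few percent:

import math

# Sanity check of the reconstructed design equations
# f0 = sqrt(gm1*gm2/(C1*C2)) / (2*pi) and Q0 = sqrt(gm2*C1/(gm1*C2)),
# using the component values reported in the paper.
gm1 = 80.6e-6    # A/V
gm2 = 39.2e-6    # A/V
C1 = C2 = 9e-12  # F

f0 = math.sqrt(gm1 * gm2 / (C1 * C2)) / (2 * math.pi)
Q0 = math.sqrt(gm2 * C1 / (gm1 * C2))
print(f"f0 = {f0 / 1e6:.2f} MHz, Q0 = {Q0:.2f}")  # -> f0 = 0.99 MHz, Q0 = 0.70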
To test the input dynamic range of the proposed filters, the simulation of the BPF, as an example, has been done for a sinusoidal input signal of f0 = 1 MHz. Figure 5 shows that the input dynamic range of the filter extends up to an amplitude of 5 µA. The dependence of the output harmonic distortion on the input signal amplitude is illustrated in Figure 6.
For input signal amplitudes lower than 8 µA, the total harmonic distortion (THD) has been found to be less than 4.0%. The obtained results show that the circuit operates properly even at signal amplitudes of about 8 µA. Figure 7 shows the simulation results for variation of Q0 while keeping f0 fixed (1 MHz) with C1 = C2 = 9 pF (see Table 3). Figure 8 shows the simulation results for variation of f0 while keeping Q0 = 1 with C1 = C2 = 9 pF (see Table 3).
Conclusion
This paper introduces a new universal filter which employs one current follower, two OTAs and only two grounded capacitors. The circuit has been verified theoretically and simulated using Orcad PSPICE software. The circuit offers the following features: 1) all five standard filtering responses are available without any additional active or passive component/device; 2) the current-mode circuit offers electronic tunability of the frequency as well as of the bandwidth; 3) the input is applied at a low-impedance port and the outputs are taken from high-impedance ports, which makes this circuit a good candidate for building higher-order filters; 4) all the capacitors used are grounded, which is important for integrated circuit realization; 5) all the current outputs are explicitly available; 6) this SIMO-type universal filter offers unity gain. All the simulated results are in agreement with the theoretical results.
Figure 4. PSPICE simulation results: (a) gain response of LPF, BPF, HPF and Notch; (b) gain and phase response of APF.
Figure 5. Time-domain response of the band-pass filter of the proposed circuit for a 1 MHz sinusoidal input current of 5 µA.
Figure 6. Dependence of output current total harmonic distortion on input current amplitude of the band-pass filter of the proposed configuration.
Figure 7. Simulation results for control of Q0 while keeping f0 fixed (1 MHz) for the band-pass filter.
Figure 8. Simulation results for control of f0 while keeping Q0 (=1) fixed for the band-pass filter.
Table 3. The gm1 and gm2 values for controlling Q0 and for controlling f0. | 2017-12-04T22:54:31.439Z | 2016-04-13T00:00:00.000 | {
"year": 2016,
"sha1": "e63537a1be79654221cc17caca18b8fefc1f801d",
"oa_license": "CCBY",
"oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=65732",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e63537a1be79654221cc17caca18b8fefc1f801d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
231901667 | pes2o/s2orc | v3-fos-license | Sensitivity of Osteosarcoma Cells to Concentration-Dependent Bioactivities of Lipid Peroxidation Product 4-Hydroxynonenal Depend on Their Level of Differentiation
4-Hydroxynonenal (HNE) is a major aldehydic product of lipid peroxidation known to exert several biological effects. Normal and malignant cells of the same origin express different sensitivity to HNE. We used human osteosarcoma cells (HOS) in different stages of differentiation in vitro, showing differences in mitosis, DNA synthesis, and alkaline phosphatase (ALP) staining. Differentiated HOS cells showed decreased proliferation (³H-thymidine incorporation), decreased viability (thiazolyl blue tetrazolium bromide, MTT), and increased apoptosis and necrosis (nuclear morphology by staining with 4′,6-diamidino-2-phenylindole, DAPI). Differentiated HOS also had less expressed c-MYC, but the same amount of c-FOS (immunocytochemistry). When exposed to HNE, differentiated HOS produced more reactive oxygen species (ROS) in comparison with undifferentiated HOS. To clarify this, we measured HNE metabolism by an HPLC method, total glutathione (GSH), oxidized GSH (oxGSH), glutathione transferase (GST) activity, proteasomal activity by enzymatic methods, HNE-protein adducts by a genuine ELISA, and fatty acid composition by GC-MS in these cell cultures. Differentiated HOS cells had less GSH, lower HNE metabolism, increased formation of HNE-protein adducts, and lower proteasomal activity in comparison to undifferentiated counterpart cells, while GST and oxGSH were the same. Fatty acids analyzed by GC-MS showed an increase in C20:3 in differentiated HOS, while the amount of C20:4 remained the same. The results showed that the cellular machinery responsible for protection against the toxicity of HNE was less efficient in differentiated HOS cells. Moreover, differentiated HOS cells contained more of the C20:3 fatty acid, which might make them more sensitive to free radical-initiated oxidative chain reactions and more vulnerable to the effects of reactive aldehydes such as HNE. We propose that HNE might act as a natural promoter of decay of malignant (osteosarcoma) cells in case of their differentiation associated with alteration of lipid metabolism.
Introduction
Oxidative stress occurs in cells as a consequence of oxygen metabolism. The reactive oxygen species (ROS) produced during oxidative stress may damage intracellular components, including lipids, causing a chain reaction of lipid peroxidation. Reactive aldehydes, the end-products of lipid peroxidation, are involved in the onset and progression of many diseases such as cardiovascular diseases, neurodegeneration, fibroproliferative disorders, cancer, etc. [1][2][3]. Unregulated or prolonged production of ROS, as well as of reactive aldehydes, may influence cancer development and progression not only directly, as a mutagen, but also through modification of gene expression [4]. Tumor cells are under persistent mild oxidative stress, which seems to be beneficial for them, increasing their metastatic potential and genetic instability, thus helping tumor cells to survive and progress [5,6]. It is well documented that antioxidant defense systems are altered in tumorigenesis, promoting tumor progression [7]. On the other hand, severe oxidative stress is harmful for tumor cells; additional ROS production caused by chemotherapy, irradiation, or the innate immune response is cytotoxic and leads to cellular destruction [1,8]. Nowadays, numerous strategies in cancer therapy rely on inducing excessive ROS, promoting lipid peroxidation and ferroptosis [9]. Osteosarcoma cells are often resistant to oxidative stress induced by chemotherapy [10,11].
The focus of our research is the difference in the response of differentiated and undifferentiated cells to lipid peroxidation, in particular with respect to the role of the second messenger of ROS, 4-hydroxynonenal (HNE). HNE alters cellular functions such as membrane integrity, mitochondrial respiration, etc., but it is also a signaling molecule modulating the expression of stress genes [4,12].
Normal and malignant cells of the same origin differ in sensitivity to oxidative stress. We have previously described differential sensitivity to HNE of CEM-NKR leukemic cells and normal human peripheral mononuclear cells, where HNE inhibited the growth of malignant cells, but not of normal ones [13]. The same result is observed when normal and malignant mesenchymal cells are analyzed. Normal human osteoblasts and WI38 fibroblasts are less sensitive to HNE than 143B and HOS osteosarcoma cells [14]. In this article we wanted to clarify whether differentiation of mesenchymal cells would influence their sensitivity to HNE. The HOS cell line is able to differentiate in cell culture, so it was used for this purpose [15]. We analyzed HOS cells with different degrees of differentiation with respect to their sensitivity to HNE, ability to detoxify HNE, GSH content, GST activity, and proteasomal activity. We also analyzed the composition of fatty acids in those cells, as they serve as a substrate for oxidation, increasing the damage to the cells.
HOS Cell Line
The human osteosarcoma cell line HOS was obtained from the American Type Culture Collection (ATCC). Cells were maintained in DMEM (Dulbecco's modified Eagle's medium, Sigma-Aldrich, St. Louis, MO, USA) with 5% (v/v) fetal calf serum (FCS, Sigma-Aldrich, St. Louis, MO, USA) in T75 cell culture flasks (Sarstedt, Nümbrecht, Germany), in an incubator (Heraeus, Hanau, Germany) at 37 °C, with a humid air atmosphere containing 5% CO₂. For the experiments, cells were detached from semiconfluent cultures with a 0.25% (w/v) trypsin solution for 5 min. Viable cells (upon trypan blue exclusion assay) were counted on a Bürker-Türk hemocytometer and used for experiments. All cell culture experiments were performed under these conditions unless stated otherwise for a particular experiment.
In order to differentiate the HOS cell culture, cells were grown for 10 days without detaching, and the medium was changed every second day. After this period, cells were used for experiments and are referred to as differentiated HOS. Likewise, HOS cells which were maintained in a semiconfluent state are referred to as undifferentiated HOS.
³H-Thymidine Incorporation Assay
The rate of radioactive ³H-thymidine incorporation into DNA was used to measure the proliferative activity of differentiated and undifferentiated HOS. Differentiated and undifferentiated HOS were detached, seeded at 2 × 10⁴ cells/well in 96-well microtiter plates (Greiner Bio-One GmbH, Frickenhausen, Germany) in a final volume of 200 µL, and cultured for 48 h. For testing the effects of HNE on cell proliferation, HOS were treated with different concentrations of HNE (0, 1, 5, or 10 µM) for 48 h. After the first 24 h, 0.1 µCi of radioactive thymidine ([6-³H]thymidine, 1 mCi/mL, Amersham Biosciences, Amersham, UK) was added to each well. The cells were harvested on glass filters in a cell harvester (Skatron, Lier, Norway) and ³H-thymidine incorporation was measured using a liquid scintillation β-counter (Beckman 7400, Brea, CA, USA) [15].
GSH Measurement
Differentiated and undifferentiated HOS were detached, washed with PBS, and frozen immediately in liquid nitrogen. Total and oxidized glutathione were determined by the method of Tietze [17]. Briefly, 1 × 10⁶ cells were resuspended in 50 µL of 0.1 M phosphate buffer with 5 mM EDTA, pH 7.5. Next, 5 µL of such a sample was resuspended in 250 µL of phosphate buffer, vortexed, centrifuged for 7 min at 500× g, and the supernatant was taken for analysis. GSH standards were prepared from a freshly prepared 1 mM GSH (Sigma-Aldrich, St. Louis, MO, USA) stock solution. A total of 10 µL of standards and samples was pipetted into 96-microwell plates with 50 µL phosphate buffer, and background absorbance was measured at 415 nm (Easy-Reader 400 FW, SLT Lab Instruments GmbH, Salzburg, Austria). After that, 50 µL of 0.948 mg/mL DTNB (5,5′-dithiobis(2-nitrobenzoic acid), Sigma-Aldrich, St. Louis, MO, USA), 50 µL of glutathione reductase (8 U/mL), and 0.667 mg/mL NADPH were added. The reaction mix was incubated for 3 min at room temperature, after which absorbance was measured at 415 nm. The cellular GSH content was calculated from the standard curve. The same cell lysates were used for determination of oxidized GSH. The procedure was the same, except that in the second step the cells were resuspended in 0.02 M NEM (N-ethylmaleimide, Sigma-Aldrich, St. Louis, MO, USA) in phosphate buffer. NEM blocks free GSH and leaves only oxidized GSH in the cell sample [17]. The amount of GSH was calculated according to the amount of cellular proteins determined by the Bradford assay [18].
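As an illustration of the standard-curve calculation described above (a hypothetical sketch, not the authors' script; all numeric values are invented):

import numpy as np

# Hypothetical standard-curve calculation for the Tietze GSH assay;
# every number below is invented for illustration.
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])    # GSH standards, µM
std_abs  = np.array([0.02, 0.11, 0.21, 0.41, 0.80])  # background-corrected A415

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear standard curve

sample_abs = 0.33        # background-corrected A415 of a cell lysate
gsh_uM = (sample_abs - intercept) / slope            # interpolated concentration
protein_mg = 0.12        # protein in the assayed volume (Bradford)
well_volume_L = 160e-6   # assumed total reaction volume in the well

# µM -> mol in the well -> nmol, normalized per mg of protein
gsh_nmol_per_mg = gsh_uM * 1e-6 * well_volume_L * 1e9 / protein_mg
print(f"GSH = {gsh_nmol_per_mg:.2f} nmol/mg protein")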
GST Activity
Differentiated and undifferentiated HOS were detached, washed with PBS, and frozen immediately in liquid nitrogen. GST was determined by an enzymatic method [19]. Samples of 1 × 10⁶ cells were lysed with 500 µL of distilled water by vortexing for 2 min. Cell lysates were centrifuged at 500× g for 7 min and the supernatant was used for analysis. In total, 25 µL of sample or GST (Sigma-Aldrich, St. Louis, MO, USA) standards was added into a plastic cuvette, followed by 750 µL of 100 mM KH₂PO₄ (Kemika, Zagreb, Croatia), pH 6.25, and incubated at room temperature for 5 min. Background absorbance was measured at 340 nm (Shimatzu, Kyoto, Japan). Then, 100 µL of 7.5 mM 1-chloro-2,4-dinitrobenzene (CDNB, Sigma-Aldrich, St. Louis, MO, USA) was added, immediately followed by 100 µL of 10 mM GSH (Sigma-Aldrich, St. Louis, MO, USA). Samples were incubated at room temperature for 15 min and a second absorbance was measured at 340 nm. The first absorbance was subtracted from the second, and results were calculated from the standard curve. GST activity was calculated according to the amount of cellular proteins determined by the Bradford assay [18].
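A common alternative way to express GST activity is sketched below under assumptions (the protocol above uses a GST standard curve instead; 9.6 mM⁻¹ cm⁻¹ is the standard extinction coefficient of the GS-DNB conjugate at 340 nm, and the sample numbers are invented):

# Sketch of a Habig-type GST activity calculation from the A340 increase;
# the absorbance and protein values are invented for illustration.
delta_A340 = 0.045   # absorbance increase over the incubation
minutes = 15.0       # incubation time, as in the protocol above
epsilon = 9.6        # mM^-1 cm^-1, GS-DNB conjugate at 340 nm
path_cm = 1.0        # optical path length of the cuvette
volume_mL = 0.975    # total reaction volume (25 + 750 + 100 + 100 µL)
protein_mg = 0.08    # protein in the 25 µL sample aliquot (Bradford)

# µmol of conjugate formed per minute per mg protein (= U/mg)
activity = delta_A340 / minutes / (epsilon * path_cm) * volume_mL / protein_mg
print(f"GST activity = {activity * 1000:.2f} mU/mg protein")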
Cell Viability Assay
Thiazolyl blue tetrazolium bromide (MTT) was used to measure mitochondrial activity, which reflects the viability of the cells. Differentiated and undifferentiated HOS were detached; the cells were plated at a density of 2 × 10⁴/well in quadruplicate into 96-microwell plates (Greiner Bio-One GmbH, Frickenhausen, Germany) in a final volume of 200 µL/well and incubated for 24 h in DMEM with 5% FCS containing different concentrations of HNE (0, 1, 2.5, 5, 10, or 20 µM). After 24 h, the medium was removed and replaced with 200 µL of Hank's balanced salt solution without phenol red and 20 µL of the MTT substrate solution (EZ4U, Biomedica, Vienna, Austria). Cells were incubated at 37 °C for 2 h and the absorbance was measured at 450 nm, with 620 nm as a reference wavelength [20], on a plate reader (Easy-Reader 400 FW, SLT Lab Instruments GmbH, Worgl, Austria).
Cell Treatments with HNE for Free HNE Analysis, GSH Analysis, and HNE-ELISA Analysis
Undifferentiated and differentiated HOS were detached, the cells were washed twice with sterile Krebs-Henseleit buffer, and a suspension of 1 × 10⁶ cells/mL was pipetted into sterile glass tubes. Cells were then treated with HNE at a final concentration of 20 µM (20 nmol/10⁶ cells) and incubated at 37 °C for 120 min unless specified otherwise. This particular HNE:cell ratio was chosen because it corresponds to 2 µM HNE in the microwell-plate experiments used throughout this work.
For cell viability by Trypan blue exclusion assay, cells were centrifuged at 200× g for 5 min (Heraeus, Hanau, Germany), the supernatants were discarded, and cell viability was determined immediately by Trypan blue.
For free-HNE analysis, samples were taken at different time points (30, 60, 90, and 120 min) and centrifuged at 200× g for 5 min (Heraeus, Hanau, Germany). The supernatants were mixed with an equal volume of acetonitrile/acetic acid (24:1 v:v; Merck, Darmstadt, Germany), centrifuged, and the supernatants were stored at −80 °C for free HNE analysis on HPLC.
For the HNE-ELISA (HNE-binding studies), cells were centrifuged at 200× g for 5 min (Heraeus, Hanau, Germany), the supernatants were discarded, and the cell pellets were washed twice with PBS, centrifuged, and stored at −80 °C until analysis.
For the GSH analysis, cells were centrifuged at 200× g for 5 min (Heraeus, Hanau, Germany), the supernatants were discarded, and the cell pellets were washed twice with PBS, centrifuged, and stored at −80 °C until analysis.
Determination of Free HNE by HPLC Method
HNE standards were prepared by serial dilution from a 1 M HNE stock solution stored at −20 °C. Samples stored at −80 °C were thawed prior to analysis. After thawing, samples were vortexed, centrifuged at 500× g for 20 min at 4 °C (Sigma Laborzentrifugen GmbH, Osterode am Harz, Germany), and analyzed by HPLC as already described [21]. The samples (20 µL) were injected into the HPLC system (Beckman System Gold Solvent Module 128 with the UV detector, Beckman, Brea, CA, USA, and a Midas Spark Holland autosampler, Spark Holland, Emmen, The Netherlands). The mobile phase consisted of acetonitrile/water (42:58, v/v) (Merck, Darmstadt, Germany). The flow was set to 0.9 mL/min and the absorbance to 223 nm. The samples were analyzed on a Beckman Ultrasphere ODS, 5 µm, 4.6 × 150 mm column (Beckman Coulter, Brea, CA, USA) at room temperature.
Determination of HNE-Protein Adducts by HNE-ELISA
Cell pellets were lysed with 400 µL of lysis buffer (6 M guanidine, 0.6055 g TRIS, 0.8766 g NaCl in 100 mL H₂O, pH 7.5, with 1% (v/v) Triton X-100, 2% (w/v) sodium deoxycholate and 2% (w/v) SDS; all Sigma-Aldrich, St. Louis, MO, USA), and immediately before use phenylmethylsulfonyl fluoride (PMSF, Sigma-Aldrich, St. Louis, MO, USA) was added to reach a final concentration of 1 mM per 1 × 10⁶ cells. The lysates were set to a concentration of 5 × 10⁴ cells in 20 µL. Samples were analyzed by the ELISA as described before [21]. The results obtained in the experiments are expressed as nmol of HNE-His/mg of proteins.
Analysis of Nuclear Morphology with 4′,6-diamidino-2-phenylindole
Cells were prepared the same way as for the MTT assay and plated on an 8-well Nunc chamber slide (Sigma, St. Louis, MO, USA). Differentiated and undifferentiated HOS were treated with different concentrations of HNE (0, 1, 5, or 10 µM) as described above. After 24 h, cells were fixed with 4% formaldehyde. Cells were then incubated with 0.3 µM 4′,6-diamidino-2-phenylindole (DAPI) solution in PBS for 5 min, rinsed with distilled water, and air-dried in the dark. Slides were mounted with glycerol and were scored blind for gross nuclear morphology under a fluorescence microscope (Zeiss Axiovert25, HBO 50, Oberkochen, Germany). Morphology evaluation included scoring for nucleus size, chromatin condensation (condensation of chromosomes and necrotic shrinkage of chromatin), and presence of micronuclei, providing information about cell cycle phases, apoptosis, and necrosis, as described before [15,22].
ROS Measurement
Cellular ROS production was measured by a method based on the oxidation of 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA, Fluka, Charlotte, NC, USA) to the fluorescent compound 2′,7′-dichlorofluorescein (DCF). This probe is highly reactive with hydrogen peroxide and has been used to evaluate ROS generation in cells [23,24]. HOS cells were seeded in white 96-well plates at a density of 2 × 10⁴ cells per well in DMEM supplemented with 5% FCS 2 h prior to treatment. After 2 h, the medium was removed, the cells were washed with Hanks balanced salt solution (HBSS) and incubated with 10 µM DCFH-DA in HBSS. After the DCFH-DA was removed, the cells were washed and incubated with HBSS buffer containing different concentrations of HNE (0, 1, 2.5, 5, or 10 µM) for 2 h, and the fluorescence intensity was measured with a Varian fluorescence spectrophotometer (Varian, Palo Alto, CA, USA) at an excitation wavelength of 500 nm and emission detection at 529 nm, and under a fluorescence microscope (Zeiss Axiovert25, HBO 50, Oberkochen, Germany). Results are expressed as relative fluorescence units (RFU).
Immunocytochemical Fluorescence Labeling for c-FOS and c-MYC
Cells were prepared the same way as for the MTT assay, plated on an 8-well Nunc chamber slide (Sigma, St. Louis, MO, USA), and left to attach for 6 h. Differentiated and undifferentiated HOS were treated with different concentrations of HNE (0 or 1 µM) as described above. After 24 h, cells were fixed for 2 min with methanol and stored in 4% buffered formaldehyde until analysis. Samples were washed 3 × 5 min with PBS before immunocytochemistry.
C-MYC immunostaining was performed with a primary antibody against c-MYC (SC-42, Santa Cruz Biotechnology, Dallas, TX, USA) diluted 1:50 in 1% bovine serum albumin (BSA) in PBS and incubated overnight at +4 °C. After that, samples were washed 3 × 5 min with PBS and overlaid with a secondary antibody labeled with Texas red (TR, sc 3797, Santa Cruz Biotechnology, Dallas, TX, USA) diluted 1:100 in 1% BSA in PBS for 2 h. Samples were washed 3 × 5 min with PBS, mounted in glycerol, and analyzed under a fluorescence microscope (Zeiss Axiovert25, HBO 50, Oberkochen, Germany) with ImageJ software (NIH and LOCI, Bethesda, WI, USA). Results are expressed as fluorescence intensity.
C-FOS immunostaining was performed with a primary antibody against c-FOS (F7799, Sigma-Aldrich, St. Louis, MO, USA) diluted 1:100 in 1% bovine serum albumin (BSA) in PBS and incubated overnight at +4 °C. After that, samples were washed 3 × 5 min with PBS and overlaid with a secondary antibody labeled with fluorescein isothiocyanate (FITC, F1262, Sigma, St. Louis, MO, USA) diluted 1:100 in 1% BSA in PBS for 2 h. Samples were washed 3 × 5 min with PBS, mounted in glycerol, and analyzed under a fluorescence microscope (Zeiss Axiovert25, HBO 50, Oberkochen, Germany) with ImageJ software (NIH and LOCI, Bethesda, WI, USA). Results are expressed as fluorescence intensity.
Fatty Acid Analysis
For the analysis of fatty acid composition, undifferentiated and differentiated HOS were grown in flasks and triplicates of two different cultures were prepared. Cells were trypsinized, washed, counted, and 1 × 10⁷ cells were used for fatty acid analysis. Lipids were extracted in chloroform:methanol (2:1) according to Folch [26]. Heptadecanoic acid (C17:0) was added as an internal standard (IS). Fatty acid methyl esters (FAME) were prepared by transesterification with 14% (v/v) boron trifluoride and dissolved in petroleum ether. Fatty acid methyl esters were analyzed by GC-MS, using a Trace GC and a DSQ mass spectrometer (Thermo, San Jose, CA, USA). Separation was performed on a DB-5MS column (60 m, ID 0.32 mm, 0.25 µm film thickness) (Agilent, Waldbronn, Germany), helium was used as the carrier gas, and a temperature gradient from 130 to 250 °C within 50 min was applied. Data analysis was done with Xcalibur 1.4 software (Thermo, San Jose, CA, USA) and the NIST library for spectrum identification [27,28].
Statistical Analysis
All assays were carried out in triplicate unless otherwise stated for a particular method. The comparison of mean values was done using Student's t-test, considering values of p < 0.05 as significantly different.
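A minimal sketch of the comparison described (not the authors' analysis script; the triplicate values are invented):

from scipy import stats

# Hypothetical triplicate measurements, undifferentiated vs. differentiated HOS.
undifferentiated = [24.1, 25.3, 23.8]
differentiated = [18.2, 17.5, 19.0]

t, p = stats.ttest_ind(undifferentiated, differentiated)
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")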
Characterization of Undifferentiated and Differentiated HOS
Differentiated and undifferentiated HOS cells stained for the presence of alkaline phosphatase (ALP) are presented in Figure 1A,B. Undifferentiated HOS cell cultures (Figure 1A) were not stained blue, indicating a lack of ALP activity. Differentiated HOS cell cultures (Figure 1B) showed blue nodules, which indicated the presence of ALP activity.
DAPI staining of differentiated and undifferentiated HOS is presented in Figure 1C,D. Undifferentiated HOS cell cultures (Figure 1C) had high numbers of mitotic cells, whose presence is indicated by pink arrows, while differentiated HOS cell cultures had low numbers of mitotic cells. Differentiated and undifferentiated HOS cells differ (Table 1) with respect to the number of alkaline phosphatase-positive nodules in cell culture (p < 0.0002), mitosis (p < 0.001), and ³H-thymidine incorporation (p < 0.05). Apoptotic cells were not detected in either of the cell cultures.
GSH, ox GSH, GST, and Proteasomal Activity in Undifferentiated and Differentiated HOS
Total glutathione (GSH) content in cell cultures, expressed in nmol/mg of cellular proteins, is presented in Figure 2A. Differentiated HOS contained a lower amount of GSH than undifferentiated HOS (p < 0.05). The content of oxidized GSH in cell cultures is presented in Figure 2C. Both cell cultures had levels of oxidized glutathione below 10% of total GSH, and these did not differ between undifferentiated and differentiated HOS (p > 0.05). Glutathione transferase (GST) activity, expressed in U/mg of cellular proteins, is presented in Figure 2B. Undifferentiated and differentiated HOS cell cultures had the same GST activity (p > 0.05). Proteasomal activity was calculated in µmol/(mg·min) and is presented in Figure 2D. Differentiated HOS had lower proteasomal activity than undifferentiated HOS (p < 0.05).
Cell Viability, Proliferation, Mitosis, Apoptosis, and Necrosis after Treatment with HNE
Cell viability of HOS treated with different concentrations of HNE is presented in Figure 3A. Low concentrations of HNE (1 µM, 2.5 µM) did not show differences compared to control, untreated cells (p > 0.05), while higher concentrations of HNE (5 µM, 10 µM) significantly decreased viability in both undifferentiated and differentiated HOS cells (p < 0.05). While there was no difference between the viability of differentiated and undifferentiated HOS treated with 1 µM HNE, differentiated HOS had significantly lower viability when treated with 2.5 µM, 5 µM, and 10 µM HNE (p < 0.05).
Proliferation of HOS treated with different concentrations of HNE is presented in Figure 3B. Similarly to the cell viability assay, both undifferentiated and differentiated HOS treated with 1 µM HNE did not show any differences compared to control, untreated cells, nor compared to each other (p > 0.05). Higher concentrations of HNE (5 µM, 10 µM) significantly decreased proliferation in both undifferentiated and differentiated HOS compared to the control (p < 0.05). Treatment with 5 µM HNE significantly decreased proliferation of differentiated HOS compared to undifferentiated HOS (p < 0.05). Treatment with 10 µM HNE completely blocked proliferation of both undifferentiated and differentiated HOS (p > 0.05).
HNE Metabolism, GSH Content, Formation of HNE-Protein Adducts
Effects of HNE at a concentration of 20 nmol/10⁶ cells on HNE metabolism, GSH content, formation of HNE-protein adducts, and cell viability in undifferentiated and differentiated HOS cell cultures are presented in Figure 4. Free HNE in cell culture supernatants is presented in Figure 4A. Undifferentiated and differentiated HOS decreased the initial HNE concentration already after 30 min (p < 0.05 for both cultures). Undifferentiated HOS were more efficient, decreasing free HNE to 40% of the initial value after 120 min (p < 0.05), while in the same period differentiated HOS decreased HNE to 60% of the initial value (p < 0.05). Kinetics of GSH in undifferentiated and differentiated cell cultures after treatment with HNE are presented in Figure 4B. GSH decreased during the observed period of 120 min in both undifferentiated and differentiated HOS (p < 0.05). The concentration of GSH remained higher in undifferentiated cells throughout the observed period of 90 min (p < 0.05). HNE-protein adducts formed after treatment with HNE are presented in Figure 4C. In undifferentiated HOS cells, HNE-protein adducts increased until 30 min, when they reached a plateau, while in differentiated HOS the plateau was reached after 60 min. The amount of HNE-protein adducts in differentiated HOS cell cultures was higher than in undifferentiated ones (p < 0.05). Cell viability evaluated by the trypan blue exclusion assay, presented in Figure 4D, shows that both undifferentiated and differentiated HOS were viable throughout the whole experiment (p > 0.05).
ROS Production in Cells after Treatment with HNE
ROS production in undifferentiated and differentiated HOS 2 h after treatment with different concentrations of HNE (0, 1, 2.5, 5, and 10 µM) is presented in Figure 5. Green fluorescence indicated the presence of ROS in cell cultures. HNE caused a concentration-dependent increase in ROS production in both cell cultures (p < 0.05 for 5 and 10 µM HNE). More ROS was present in differentiated HOS cell cultures (p < 0.05 for 5 and 10 µM HNE).
c-FOS and c-MYC in Undifferentiated and Differentiated HOS
Immunocytochemical fluorescent staining for c-FOS and c-MYC is presented in Figure 6. There was no difference in the intensity of c-FOS staining between undifferentiated and differentiated HOS, while c-MYC staining was less intense in differentiated HOS.
Fatty Acids Composition in Undifferentiated and Differentiated HOS
Sample chromatograms of fatty acids of undifferentiated and differentiated HOS cell cultures separated by GC are presented in Figure 7. Differentiated cell cultures had one higher peak, which is designated on the chromatogram. Based on the obtained data, peaks eluting at the selected retention times were further analyzed by MS. Fatty acids determined by GC-MS in cell culture samples are presented in Table 2. The most abundant fatty acids were palmitic acid (C16:0), oleic acid (C18:1), and stearic acid (C18:0), although they did not significantly differ between the two cultures (p > 0.05 for all). We observed very small amounts of C18:2 in both undifferentiated and differentiated HOS as three peaks (RT 19.52, 19.64, 19.71 min), but we could not determine their exact structures, so those data are not presented in the table. Differences significant according to t-test between the two cultures, * p < 0.05; and between 0 and 1 µM HNE, ** p < 0.05.
In agreement with the initial screening performed by GC-FID, one fatty acid was found to significantly differ between differentiated and undifferentiated HOS cells (Figure 8). Differentiated HOS cells had a significantly increased level of 5,8,11-eicosatrienoic acid (C20:3 n-9) (p < 0.05). The amounts of other fatty acids were not different between the cultures (p > 0.05).
Discussion
Evidence suggests that normal cells are less sensitive to HNE than corresponding malignant cells, such as normal human peripheral lymphocytes compared to CEM-NKR leukemic cells [13], and normal mesenchymal cells, such as human osteoblasts and WI38 fibroblasts, compared to malignant 143B and HOS osteosarcoma cells [14]. Therefore, we wanted to investigate how the process of differentiation affects the sensitivity of malignant cells to HNE. For this purpose, we used HOS osteosarcoma cells, as they are able to differentiate in cell culture. When HOS cells reach confluence, they start to differentiate and express alkaline phosphatase as a functional and morphological biomarker of osteogenic differentiation [15]. These cells retained viability and did not undergo apoptosis, but were able to proliferate when seeded at lower density, and even completely recovered the ability to proliferate, reaching the same percentage of mitosis in culture as undifferentiated HOS. If the cultures are regularly maintained, nutrient deprivation is not expected. A similar effect is observed in adipocytes, where contact inhibition and growth arrest cause differentiation [29]. It is important to notice that HNE itself is also a growth-regulating factor which can induce differentiation in different cell types [30,31]. In the HOS model we used, HNE decreased differentiation when added repeatedly every second day for 10 days [15].
Therefore, we expected that differentiated HOS cells would exert characteristics of osteoblasts, given the presence of ALP, and would be more resistant to HNE, based on our previous results on mesenchymal cells [14].
Firstly, we performed the MTT assay and the ³H-thymidine proliferation assay, which, surprisingly, showed that differentiated HOS are more sensitive to HNE. They also had more apoptotic cells and late apoptotic/necrotic cells than undifferentiated HOS. Furthermore, we immunostained HOS cells for two transcription factors, c-FOS and c-MYC. The results on c-FOS presence in the cells are in agreement with findings on HL-60 cells, where HNE does not affect c-FOS [32] but inhibits c-MYC [33], indicating that this decrease can be the cause of the lower proliferation and viability at higher HNE concentrations. Interestingly, c-MYC was differently present in differentiated and undifferentiated HOS. Overexpression of c-MYC increases the proliferation of mesenchymal stem cells [34]. As c-MYC is a transcription factor which activates genes involved in proliferation [35], this result supports the lower proliferation index in differentiated HOS.
One of the possible explanations of these results is that differentiated HOS probably reached senescence. In support of this possibility are data showing that senescent chondrocytes are more sensitive to oxidative stress [36]. We expected that one of the factors causing different sensitivity to HNE could be changes in the reduced glutathione (GSH) level. GSH is an important intracellular protector against free radicals as well as a radical scavenger responsible for HNE detoxification [37].
The GSH content in differentiated and undifferentiated HOS was measured, showing that differentiated HOS cells had lower levels of GSH, which might explain their higher sensitivity to HNE. Levels of oxidized GSH were below 10% in both nondifferentiated and differentiated HOS cell cultures, showing that those cells were equally viable. The process of HOS cell differentiation was associated with changes in cell metabolism resulting in decreased GSH content in those cells. GSH content in cells differs depending on the cell cycle: it increases from G phase through S phase and reaches a maximum at G2/M phase [38]. Differentiated osteosarcoma cells had a lower number of mitotic cells, in agreement with this study. Some malignant cells, like hepatoma, have higher GSH than normal hepatic cells [39], while nonmalignant mesenchymal cells have higher amounts of GSH than osteosarcoma cells [14]. Differences in GSH content appear to play an important role in cellular sensitivity to HNE.
Treatment with HNE additionally decreased the GSH content in cells [37]. The lower content of GSH and its consumption resulted in more HNE-protein adducts in differentiated HOS cells. Previously, we presented a linear correlation between GSH content and HNE-protein adducts formed in cells after exposure to HNE [14]. After a certain period of time, both the GSH level and HNE-protein adducts reached equilibrium, regardless of the free HNE in the cell supernatant and free GSH. In this equilibrium state, GSH and HNE-protein adducts reached a plateau. One possible explanation for these results is the method for measuring HNE-protein adducts, which detects only HNE-histidine adducts, but not other modifications, such as those of lysine, cysteine, or arginine [40]. HNE-protein adducts are formed in vivo in cancer cells and normal tissue and change during tumor progression. Depending on tumor origin and stage, the formation of adducts could be lower than, the same as, or higher than in the corresponding normal tissue [41,42].
HNE is a substrate of glutathione S-transferases (GST) [43], a family of enzymes that catalyzes the conjugation of chemicals to glutathione. Two isozymes of the α-class of GSTs, hGSTA4-4 and hGST5.8, have high catalytic affinity for HNE [44,45]. GS-HNE conjugates are exported from the cells by active ATP-dependent transport through the RLIP76 protein [46]. We measured total glutathione transferase activity in HOS, and there was no difference between differentiated and undifferentiated HOS. Overexpression of GSTs is related to an increase in resistance to anticancer drugs or alkylating agents [47,48]. In keratinocytes, HNE metabolites were determined by MS; 48% were attributed to two unconjugated metabolites created by aldehyde dehydrogenase [49] and 52% to four metabolites created by conjugation with GSH, further metabolized by oxido-reductive enzymatic processes. In erythrocytes, 70% of metabolites are conjugates with glutathione and 25% are one of the unconjugated metabolites [50]. It seems that cells of different origins differ in the metabolic pathway preferred for HNE degradation. Earlier, we determined a linear correlation between HNE-modified proteins and GSH content in mesenchymal cells [14], so we supposed that these cells use GSH preferentially to eliminate HNE.
Proteasomes are responsible for the degradation of damaged proteins, and their role in tumor progression is not yet clear, although there are attempts to use proteasomal inhibitors in tumor treatment [51]. HNE damages proteins, which then become substrates for degradation by the 20S proteasome subunit of the 26S proteasome, which is also responsible for degrading oxidized proteins. Proteins mildly modified at HNE concentrations of 1-10 µM are easily degraded by the proteasome, while high concentrations of HNE, such as 100 µM, extensively modify proteins. This results in the formation of protein aggregates which inhibit proteasomal activity [52]. Undifferentiated and differentiated HOS were checked for 26S proteasomal activity, and we found that differentiated HOS had lower proteasomal activity, which makes the processing of damaged proteins more problematic. HNE induced the production of ROS in undifferentiated and differentiated HOS in a concentration-dependent manner: the higher the HNE concentration used, the higher the ROS levels measured. Consistent with the GSH results, HNE caused a higher increase of ROS in differentiated HOS. The DCFH-DA method we used for ROS determination is widely used for H 2 O 2 , although there is some controversy about which compound actually oxidizes this substrate; it has been suggested that hydroxyl radicals or peroxynitrite could do so as well [53]. The fatty acid composition of undifferentiated and differentiated HOS was measured because PUFAs can serve as a substrate for oxidation. Only one fatty acid was found to be significantly different between HOS with different degrees of differentiation, namely 5,8,11-eicosatrienoic acid (C20:3 n-9, mead acid), which was higher in differentiated cells.
This particular fatty acid belongs to the group of omega-9 fatty acids and is the only one formed de novo in the body in a state of fatty acid deficiency. Essential fatty acid deficiency (EFAD) is diagnosed when the triene/tetraene fatty acid ratio is >0.4 [54]. C20:3 is formed from oleic acid when there is a restriction of omega-6 fatty acids [55]. The elevated level of C20:3 in differentiated HOS cells could be due to increased expression of the enzymes involved in omega-9 fatty acid synthesis. Indeed, studies on NIH3T3 and Hepa1-6 cells demonstrated that Elovl5, Fads1, or Fads2 are involved in the synthesis of the 20:3 omega-9 fatty acid, and their downregulation causes a decrease in the C20:3 fatty acid level [56]. Furthermore, EFAD changes the fatty acid composition of bone tissues toward high C20:3 and low C20:4 [57]. Normal, young cartilage has a very high content of C20:3 and a low level of C20:4 [58]. This is thought to be the result of fatty acid deficiency due to low vascularization; C20:3 also blocks angiogenesis [59]. However, the data are not straightforward, because results in growing chicks show very high C20:4 in bone as well as in cartilage, but not C20:3 [60]. Perhaps the consumption of food with a higher amount of omega-6 linoleic acid than omega-9 oleic acid causes this [60].
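As a small illustration of the EFAD criterion just cited (the fatty acid values used here are hypothetical, not measurements from this study):

```python
# Illustrative check of the EFAD criterion: deficiency is indicated when the
# triene/tetraene ratio (mead acid C20:3 n-9 over arachidonic acid C20:4 n-6)
# exceeds 0.4 [54]. The input values here are hypothetical.

def triene_tetraene_ratio(c20_3_n9: float, c20_4_n6: float) -> float:
    """Return the triene/tetraene ratio from fatty acid levels (same units)."""
    return c20_3_n9 / c20_4_n6

ratio = triene_tetraene_ratio(c20_3_n9=2.1, c20_4_n6=4.0)
print(f"triene/tetraene ratio = {ratio:.2f}; EFAD suspected: {ratio > 0.4}")
```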
There are no literature data on the role of the C20:3 fatty acid in cells of osteogenic origin, including osteosarcoma cells. It is known to serve as a substrate for 5-lipoxygenase and to be converted into LTA3, which inhibits the synthesis of proinflammatory LTB4 [61]. It is supposed to block osteoblast activity, decreasing ALP activity in osteoblasts [62]. In our study, differentiated HOS had both increased alkaline phosphatase activity and more C20:3; therefore, our results do not support this. It is known that the addition of other fatty acids, such as docosahexaenoic acid (n-3), induces apoptosis via ROS production [63]; the addition of PUFAs can induce oxidative stress [64] because PUFAs can be autoxidized or enzymatically oxidized by oxidases [65,66]. In our study, differentiated HOS had more PUFAs in total, which can increase their sensitivity to the cytotoxic activity of HNE.
The differentiation of HOS was accompanied by a decrease in the ability of HOS cells to metabolize HNE and protect themselves from its toxic effects. Since the differentiation of HOS cells was also accompanied by increased production of the C20:3 fatty acid, we assume that this could make them more subject to free radical-initiated oxidative chain reactions and more vulnerable to the effects of reactive aldehydes such as HNE. In favor of this possibility are findings of novel, selective anticancer effects of HNE observed for other cancer cell types [67], related to lipid metabolism and ROS production by cancer cells and the surrounding nonmalignant cells [68]. These resemble findings observed for another reactive aldehyde, acrolein [69,70], thus supporting further studies on the biomedical relevance of these bioactive markers of lipid peroxidation [71,72]. | 2021-02-13T06:16:37.587Z | 2021-01-29T00:00:00.000 | {
"year": 2021,
"sha1": "04ed4651f1592756c127e49d65b972939efd0762",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4409/10/2/269/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e2ab4de039c8c9f8db95482a4d7f84fabe2b3ce",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261305798 | pes2o/s2orc | v3-fos-license | Preparation and application of caffeic acid imprinted polymer
In the present study, molecularly imprinted polymers were synthesized using caffeic acid (CA) as a template molecule and then used for the extraction of CA and chlorogenic acid (CLA) from complex matrices. Syntheses were carried out in tetrahydrofuran as porogenic solvent using 4-vinyl pyridine, methacrylic acid, acrylamide, and 1-vinyl imidazole as monomers, ethylene glycol dimethacrylate as crosslinker and 2,2′-azobisisobutyronitrile as initiator. In the polymerization processes, different ratios of the template:monomer:crosslinker (T:M:CrL) were used to obtain the most suitable polymer. The MIP with a caffeic acid:4-vinylpyridine:ethylene glycol dimethacrylate mole ratio of 1:4:16 was determined to be the most convenient polymer for CA recognition. In addition, nonimprinted polymers (NIPs) without templates were prepared. Dynamic and static adsorption tests were applied to determine the adsorption features of the NIPs and CA-MIPs. Separation and purification studies of CA and CLA were performed with a molecularly imprinted solid phase extraction (MISPE) application. All steps of MISPE (loading, washing, elution) were optimized by HPLC analysis.
Introduction
Hydroxycinnamic acids are derivatives of phenylpropanoids found in plant foods; they are phenolic constituents that play a central part in the phenolic metabolism of plants and are biosynthetic derivatives of phenylalanine [1,2]. These components are found in herbal (or plant-based) products such as fruit, vegetables, seeds, coffee, flowers, nuts, wine, tea, and olive oil. CA, p-coumaric acid (p-COA), and CLA, the quinic acid ester of caffeic acid, are the most abundant and essential hydroxycinnamic acids in fruits such as apples, pears, and grapes, and in other plants. CLA displays anticarcinogenic, antimutagenic, and antioxidant activities in vitro [3]. CLA, which is also known as 5-caffeoylquinic acid, is stored as an ester of CA [4].
CA is a phenolic antioxidant found in many herbs and beverages. Seventy percent of the total hydroxycinnamics in fruits is caffeic acid. It is an antioxidant that slows down inflammation and thus has important biological effects, such as protection against the harmful influence of free radicals and against endothelial damage [2]. Hence, the isolation and enrichment of caffeic acid is an important research topic, and to achieve this goal, CA imprinted polymers are synthesized using CA as a template [5][6][7][8][9].
Coffee is one of the most widely consumed drinks in the world [10]. Green coffee has a large caffeine and polyphenol content. Among the polyphenols, it contains the highest amount of chlorogenic acid, as well as CA, p-COA, and ferulic acid (FA). The polyphenol content of coffee changes during the roasting process. A cup of coffee, which contains approximately 10 g of coffee, holds a mean of 15-325 mg of CLA [10,11]. Coffee has antioxidant and antineoplastic effects. Green coffee has a light aroma similar to that of green beans [12]. Studies conducted in recent years have shown that extracts of green coffee have antihypertensive effects. In humans and mice, it has a restrictive impact on fat accumulation and body weight, and it regulates glucose metabolism in humans [13]. The caffeine contained in coffee influences the endocrine, cardiovascular, and central nervous systems [10].
MIPs are polymers comprising a template that can chemically recognize a particular molecule (or a derivative thereof) [14,15]. MIPs have attracted interest in recent years due to their low cost, superior mechanical strength, resistance to pressure and temperature, physical robustness, stability under extreme conditions such as organic solvents, metal ions, acids, and bases, and high storage durability [16][17][18][19][20][21].
The continual requirement for rapid and efficient novel methods in environmental science, biotechnology, and medicine has guided investigators toward more selective, preferable, and sensitive analytical work [22]. So far, MIPs have been used in a vast majority of analytical applications including, but not limited to, solid-phase extraction (SPE) [23], liquid chromatography [18,24], capillary electrochromatography, and capillary electrophoresis [25]. MIP adsorbents are used in the separation of peptides, proteins, amino acids, hormones, DNA and RNA; the SPE of drugs; and the removal and purification of many substances from foods [6,[23][24][25][26].
A few CA imprinted polymers prepared by different researchers and their applications are available in the literature. However, their preparation and application methods differ. When these studies are examined, it is seen that there is still a need for simple, fast, and easy-to-apply methods for this purpose. Valero-Navarro et al. [8] synthesized a CA imprinted polymer by precipitation polymerization using 4-vinylpyridine (4-VP) as functional monomer and used it for the extraction of CA from a juice sample. The polymer was used as an HPLC stationary phase, but in the chromatogram (λ = 274 nm), the tailed peak of CA spanned a very wide time interval (approximately 5-17.5 min). Besides, this tailed CA peak highly overlaps with the protocatechuic acid (PCA) and CLA peaks. The same authors pointed out that there were only three studies in which CA was used as a template for MIP synthesis [6,9,27]. On the other hand, they stated that the selectivity of these polymers for separating CA from complex matrices is not very high [8]. In a few recent studies, MIPs were applied to different plant extracts for the isolation of quinic acid and/or its derivatives (CA and CLA) [28,29]. In a study by Miura et al., MIPs for CA were prepared by precipitation polymerization using 4-VP and methacrylamide (MAM) as functional monomers at a 0.66 (CA):3 (MAM):3 (4-VP) ratio. This polymer was applied for the extraction of CA and CLA from the leaves of Eucommia ulmoides. In that study, the retention and molecular recognition properties of the MIPs were evaluated using water-acetonitrile and sodium phosphate buffer-acetonitrile as mobile phases in hydrophilic interaction chromatography (HILIC) and reversed-phase chromatography, respectively. As a result, the MIP showed higher molecular-recognition ability for CA in HILIC mode than in reversed-phase mode [28]. In another study, a quinic acid (QA) imprinted MIP was prepared with a 1:5 template:monomer (QA:4-VP) ratio, and the selectivity of the MIP towards QA was tested against its analogues found in coffee (CA and CLA) with MISPE. The result showed a recovery of 81.92 ± 3.03%, with a significant reduction in the amounts of the other components (i.e. CA and CLA) in the extract [29].
The primary aim of this study was to design a new CA imprinted polymer suitable for the separation and purification of CA from a synthetic mixture containing phenolic compounds and of CLA from green coffee bean extract, and to optimize new MISPE application conditions. For this purpose, a new template:monomer:crosslinker ratio of 1:4:16 was used for the first time with the noncovalent imprinting technique [5,6,9,30,31]. In the polymerization processes, CA was used as a template, and the monomers, porogens, and ratios of the template:monomer:crosslinker (T:M:CrL) were optimized to obtain the most suitable polymer. Dynamic and static adsorption tests were used for the CA-MIPs and NIPs. MISPE trials were performed with the synthesized CA-MIPs by filling them into SPE cartridges in determined amounts. Thus, MISPE was carried out with a synthetic mixture consisting of antioxidant standards. In this way, all steps of MISPE (loading, washing, elution) were optimized by HPLC so that the highest CA recovery could be obtained. Then, a new MISPE method was developed for a natural sample extracted with a suitable solvent system. In the CA-MISPE application, CLA was recovered from green coffee bean extract as a natural sample. Therefore, our study enables not only the preconcentration and clean-up of CA and CLA, but also the selective extraction of these phenolic compounds from complex or contaminated samples.
Preparation of caffeic acid imprinted polymer
For the preparation of dried THF, 10 g of benzophenone was transferred into a 1-L flask with 500 mL of THF, and the contents were stirred after 5 g of very finely cut sodium pieces were put into the flask. The color of the solvent turned between green and blue with the addition of sodium. Then the flask was closed and the mixture was left in the dark at 25 °C overnight. The color of the mixture became purple due to the formation of the benzophenone radical anion, which is an indicator of dry THF. Anhydrous THF was obtained by distilling the mixture containing this anion [32,33].

HPLC analysis
An HPLC system (Waters Breeze 2, Milford, MA, USA) comprising a PDA detector (Waters 2998), a binary gradient pump (Waters 1525), and a C18 column (4.6 mm × 250 mm, 5 µm) was used for performing the analyses. The brand of the column was ACE (Aberdeen, Scotland, UK). Empower PRO software (Waters Associates, Milford, MA, USA) was utilized for data analysis.
In the HPLC analysis, gradient elution with a binary mobile phase consisting of solvent A (MeOH) and solvent B (0.2% o-H 3 PO 4 in H 2 O) was used [30,31]. The method was developed in the following order: 20% A for 3 min, 5%-35% A for 3 min, 16%-80% A for 5 min, and 22%-100% A for 16 min; the total analysis time is 22 min. In this method, a gradient curve value of 2.0 was applied in all steps. The flow rate was 1 mL min -1 and the injection volume was 25 µL. The working wavelengths were 280 nm (for the analyses of gallic acid and catechin derivatives) and 290 and 320 nm (for the analyses of caffeic acid derivatives).
Under these working conditions, calibration curves were drawn using peak area vs. concentration for each antioxidant.
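As an illustrative sketch of such a calibration (the peak areas below are hypothetical, not data from this work):

```python
import numpy as np

# Hypothetical calibration data for one antioxidant standard:
# known concentrations (µM) and the corresponding HPLC peak areas (a.u.).
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
area = np.array([1.1e4, 2.2e4, 4.3e4, 8.7e4, 17.2e4])

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
r2 = np.corrcoef(conc, area)[0, 1] ** 2  # linearity check

def concentration_from_area(a: float) -> float:
    """Invert the calibration line to estimate a concentration from a peak area."""
    return (a - intercept) / slope

print(f"slope = {slope:.1f}, intercept = {intercept:.1f}, r^2 = {r2:.4f}")
print(f"area 6.0e4 -> {concentration_from_area(6.0e4):.1f} µM")
```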
CA-MIPs and NIPs were synthesized [5,8,9,19,30,31] using 1-VI, 4-VP, AA, and MAA as monomers separately. Figure 1 shows the schematic presentation of the interaction mechanism of the MIP. For example, to synthesize the 1:4:16 polymer, 0.4625 mmol of caffeic acid was dissolved in dried THF (9 mL) as porogen in a glass vial, after which, for prepolymerization, 1.85 mmol of monomer was added and mixed for about 10 min. Afterwards, 7.4 mmol of EDMA as crosslinker and 0.93 mmol of AIBN as initiator were put into this mixture. Nitrogen was immediately passed through for 15 min while the vial was kept in an ice bath.
After this period, the glass vial was closed and heated in a 60 °C water bath for 24 h in order to carry out the polymerization. Under the same conditions, but without the use of template molecules, nonimprinted polymers with ratios of 0:4:12, 0:4:16, 0:4:20, 0:5:30, 0:6:30, and 0:8:40 were synthesized. Afterwards, in order to clear away CA (template), soluble oligomers, and unreacted monomers from the polymer, the product was washed in a Soxhlet extractor using MeOH-HAc (4:1, v/v) (250 mL × 2) and MeOH (250 mL × 2). Then the polymers were shaken in ACN in a water bath until a steady baseline in the UV spectrum of the cleaning solvent was acquired. The resulting polymers were milled and sieved to a particle size of 150-200 µm in a Retsch sieve shaker (Haan, Germany). Then, the polymers were dried at 50 °C overnight in a vacuum oven (Figure 1).
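The reagent quantities quoted above follow directly from the 1:4:16 (T:M:CrL) ratio; a short sketch reproducing that arithmetic (the initiator loading is taken from the stated 0.93 mmol, roughly 2 mol per mol of template):

```python
# Reagent amounts for the 1:4:16 (T:M:CrL) polymer, starting from the stated
# 0.4625 mmol of caffeic acid template.
template_mmol = 0.4625
ratio = {"template (CA)": 1, "monomer (4-VP)": 4, "crosslinker (EDMA)": 16}

amounts = {name: template_mmol * k for name, k in ratio.items()}
amounts["initiator (AIBN)"] = 2 * template_mmol  # ~0.93 mmol, as reported

for name, mmol in amounts.items():
    print(f"{name:20s}: {mmol:.3f} mmol")
# -> 0.462 / 1.850 / 7.400 / 0.925 mmol, matching the quantities in the text
```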
Adsorption features of CA-MIPs
2.4.1. Time and solvent effects
To determine the influence of time and solvent on CA rebinding, batch adsorption tests were used [30]. For this purpose, adsorption solutions were put in a shaker for certain times at room temperature. At the end of these experiments, the obtained supernatant was decanted and filtered to clarity using GF/PET (glass fibre/polyethylene terephthalate) 1.0/0.45 µm microfilters. The quantities of the working compounds in the final filtrates were established from their absorbances at their specific wavelengths with a Shimadzu 2600 (Kyoto, Japan) UV-Vis spectrophotometer.
Adsorption tests
The adsorption experiments consisted of static and dynamic tests [6,19,30]. For the static tests, 30 mg of 1:4:16 CA-MIPs and NIPs were put in conical flasks, separately, and then blended with 4 mL of CA in certain amounts (20-100 µM) in ACN. These samples were then shaken at 250 rpm for 6 h at room temperature.
Dynamic adsorption tests were performed with an Agilent (Waldbronn, Germany) 12-port SPE system and a Hamilton (Bonaduz, Switzerland) vacuum pump. For this purpose, 3 mL Hamilton (Bonaduz, Switzerland) empty cartridges were filled with 100 mg of 1:4:16 CA-MIPs by a wet packing method. Then, a polyethylene disc frit was fitted on the top and the bottom of the MIP bed. ACN was used to condition the cartridges. Afterwards, a 40 µM CA solution in ACN was loaded into the SPE system at a flow rate of 0.3 mL min -1 until breakthrough was detected. The quantities of CA in the effluents were determined from their absorbances at 320 nm. Breakthrough curves were plotted using the concentrations of the effluents and the volume of sample passed through the CA-MIPs.
Batch tests were carried out under the conditions previously stated. Thus 30 mg of 1:4:16 CA-MIP and NIP were added to conical flasks and put in 80 µM (4 mL) solutions of the compounds indicated above. The quantities of the compounds in the resulting filtrates were determined from their absorbances at the maximum absorption wavelengths in the UV spectra.
Usage of molecularly imprinted solid phase extraction (MISPE) methods
Fifty milligrams of 1:4:16 CA-MIP was packed into a 1 mL empty SPE cartridge, with a polyethylene (PTFE) disc frit placed at the bottom and top of the MIP bed [6,30,31]. The MIP, suspended in ACN, was filled into this cartridge, after which the filled cartridge was conditioned with ACN. CA and CLA were isolated using these MISPE methods from the synthetic mixture and the green coffee bean extract, respectively. The MISPE processes were applied using loading (flow rate: 0.3 mL min −1 ), washing (flow rate: 1 mL min −1 ), and elution (flow rate: 0.5 mL min −1 ) steps. Before the HPLC analyses of the MISPE steps, all eluates were diluted with water at a 1:1 (v/v) ratio. Then CA and CLA were analyzed.
MISPE application for the synthetic mixture
For the isolation of CA from the synthetic mixture with the CA-MISPE application [6,30], after cartridge conditioning with ACN, the mixture was loaded into the cartridge with a total volume of 1.5 mL (0.5 mL portions of the mixture were loaded in each loading step). Then washing was performed in 2 stages with ACN (2.5 mL + 0.5 mL), aiming to eliminate compounds that were held by the polymer nonspecifically. Finally, the elution step was applied with MeOH:HAc (4:1, v/v) in 4 stages (each stage comprising 0.5 mL of solvent).
MISPE application for the green coffee bean extract
For the isolation of CLA from green coffee bean extract with the CA-MISPE application, the green coffee bean extract was first obtained with aqueous 70% (v/v) methanol as the extraction solvent [30,31]. One and six-tenths grams of ground green coffee beans was put in stoppered flasks and 70% (v/v) methanol was added. Then the mixture was put in an ultrasonic bath. Extraction was carried out in three stages and took about 120 min with aqueous 70% (v/v) MeOH as extraction solvent, in order: 10 mL of solvent for 60 min, 10 mL of solvent for 45 min, and 5 mL of solvent for 15 min. Finally, all solvent fractions were collected and made up to 25 mL. Then, 20 mL of the obtained extract was evaporated using a Büchi R210/215 (Flawil, Switzerland) rotary evaporator at 40 °C under vacuum. After this process, the residue was dissolved in 7 mL of ACN:DMSO (98:2, v/v). One milliliter of that solution was diluted to 4 mL with ACN and dried with anhydrous Na 2 SO 4 .
For the MISPE application, 3.5 mL of the dried green coffee bean extract was loaded into 100 mg of 1:4:16 CA-MIPs conditioned with 10 mL of ACN in a 3 mL SPE cartridge. Washing consisted of 2 steps (12 + 8 mL ACN) at a 1 mL min -1 flow rate. Elution was applied using MeOH:HAc (4:1, v/v) at a 0.5 mL min -1 flow rate two successive times, with 3 mL and 2 mL, respectively.
Optimization of MIP synthesis
These studies were evaluated with imprinting factors (Eq. 1) determined by batch tests utilizing 60 µM CA and 30 mg of MIP and NIP. To determine the most suitable monomer for polymer synthesis, MIPs and NIPs were synthesized at a 1:4:16 ratio of template:monomer:crosslinker using four different monomers: MAA, AA, 1-VI, and 4-VP. The imprinting factors (IFs) of the polymers were estimated from adsorption experiments using 60 µM CA with these polymers (Eq. 1). The results are shown in Table 1. CA adsorption did not occur in the polymers synthesized using AA as monomer, and CA adsorption was higher in the NIPs than in the MIP for the polymers synthesized using 1-VI and MAA. This is an indication that CA-specific cavities do not form in those MIPs and that adsorption occurs nonspecifically. With the polymers synthesized using 4-VP as monomer, more CA was adsorbed on both the MIP and the NIP compared to the other monomers, and since the adsorption on the MIP was higher than on the NIP, the formation of CA recognition sites in the MIP was confirmed and imprinting was achieved.
Imprinting factor (IF) = Q MIP / Q NIP (1)

where Q MIP is the adsorption amount of the MIP (µg g -1 ) and Q NIP is the adsorption amount of the NIP (µg g -1 ). Syntheses were made using methanol, ACN, and THF as porogenic solvents for the CA-MIPs, but specific adsorption was observed only in the polymer synthesized in THF. The imprinting factor (IF) for the polymer prepared with this solvent was calculated as 2.02, since the amount of CA adsorbed by the CA-MIP is 930 µg g -1 and that by the NIP is 460 µg g -1 . Also, the presence of polar solvents like water disturbs the template-monomer interaction, resulting in polymers with a weak degree of recognition. For this reason, the THF was dried with Na [32,33].
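A one-line check of Eq. (1) with the THF values quoted above:

```python
# Imprinting factor (Eq. 1) for the polymer prepared in THF:
Q_MIP = 930.0  # CA adsorbed by the MIP (per g of polymer), as reported
Q_NIP = 460.0  # CA adsorbed by the NIP (per g of polymer), as reported
IF = Q_MIP / Q_NIP
print(f"IF = {IF:.2f}")  # 2.02, as quoted in the text
```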
Adsorption characteristics of CA-MIPs
3.2.1. Time and solvent impacts on rebinding
To examine the effect of time, a shaking period of 6 h was deemed adequate for the batch rebinding tests. The adsorbed quantity of CA versus time did not show a clear change after 4 h and reached a plateau at the 6th hour.
Adsorption tests
To determine the impact of CA concentration on the adsorption capacity, 30 mg of 1:4:16 CA-MIP and various concentrations of CA (0.02-10.00 mM) in ACN were used in static adsorption experiments. Table 3 lists the adsorption amounts of CA and the imprinting factors, and Figure 2 shows the adsorption isotherms for the imprinted and nonimprinted polymers [9]. As can be seen from Table 3, the imprinting factor was elevated at low CA concentrations but considerably lower at high CA concentrations. In addition, the amount of CA adsorbed by the NIP at concentrations below 0.40 mM is lower than that of the MIP. But the amount of CA adsorbed by the NIP at 0.40 mM CA approaches the MIP adsorptivity and even exceeds the MIP adsorption at higher concentrations. The reason for this is thought to be the low solubility of CA in acetonitrile; at high concentrations, CA therefore precipitates over time. Consequently, it was concluded that at high concentrations the measured absorbances could not be related to the binding of CA to the polymer. The imprinting factor also decreases rapidly above a 0.06 mM CA concentration. For these reasons, concentrations above 0.20 mM were not used in the adsorption studies (Table 3).
In the dynamic adsorption tests (column experiments), the breakthrough curve was plotted as CA concentration (C e ) vs. the volume (V e ) of effluent in ACN (Figure 3). The dynamic adsorption capacity of the 1:4:16 CA-MIP, determined by integrating the area above the breakthrough curve, was 5.7 × 10 -3 mmol (1.03 mg g -1 ).
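A minimal sketch of this capacity calculation (the breakthrough data points below are hypothetical):

```python
import numpy as np

c0 = 40e-3          # feed concentration: 40 µM CA in ACN, expressed in mM
m_polymer_g = 0.100 # 100 mg of 1:4:16 CA-MIP in the cartridge

# Hypothetical breakthrough data: effluent volume (mL) and concentration (mM)
V = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
Ce = np.array([0.0, 0.0, 2e-3, 10e-3, 25e-3, 36e-3, 40e-3])

# Retained amount = area above the breakthrough curve = integral of (c0 - Ce) dV.
# Units: mM * mL = µmol.
retained_umol = np.trapz(c0 - Ce, V)
capacity_mmol_g = retained_umol / 1000.0 / m_polymer_g
print(f"dynamic capacity ~ {capacity_mmol_g:.4f} mmol g^-1")
```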
Adsorption isotherms of CA-MIP and NIP
Adsorption features of 1:4:16 CA-MIP and NIP were estimated by using Freundlich and Langmuir isotherms.
The linear form of the Freundlich isotherm is given by Eq. 2:

ln Q e = ln K f + (1/n) ln C e (2)

where Q e is the adsorption amount of CA on the MIP or NIP (mg g −1 ) and C e is the equilibrium concentration (the concentration remaining in solution at equilibrium, mM). It was concluded that the adsorption complies with the Freundlich isotherm, owing to the high correlation coefficient for CA on the CA-MIP and the n value being >1 (Table 4; Figure 4).
The linear Langmuir adsorption isotherm was drawn according to Eq. 3 between the C e and C e /Q e values for CA on the same CA-MIP and NIP (Figure 5):

C e /Q e = C e /q max + 1/(b q max ) (3)

q max was calculated from the slope of the obtained line, and the b values were calculated from the intercept. Table 5 shows the Langmuir isotherm values of the MIP and NIP.
It was concluded that the Freundlich isotherm is more suitable for this MIP, since the correlation coefficients obtained for CA adsorption on the CA-MIP with the Langmuir isotherm are lower than those obtained with the Freundlich isotherm, and the Langmuir q max value is much larger than the experimentally found one.
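A sketch of the two linearized fits (Eq. 2 and the linear Langmuir form of Eq. 3), using hypothetical equilibrium data:

```python
import numpy as np

# Hypothetical equilibrium data: Ce (mM) and Qe (mg g^-1)
Ce = np.array([0.02, 0.04, 0.06, 0.10, 0.20])
Qe = np.array([0.35, 0.60, 0.82, 1.20, 1.90])

# Freundlich (Eq. 2): ln Qe = ln Kf + (1/n) ln Ce
slope_f, intercept_f = np.polyfit(np.log(Ce), np.log(Qe), 1)
Kf, n = np.exp(intercept_f), 1.0 / slope_f
r2_f = np.corrcoef(np.log(Ce), np.log(Qe))[0, 1] ** 2

# Langmuir (Eq. 3, linear form): Ce/Qe = Ce/qmax + 1/(b*qmax)
slope_l, intercept_l = np.polyfit(Ce, Ce / Qe, 1)
qmax, b = 1.0 / slope_l, slope_l / intercept_l
r2_l = np.corrcoef(Ce, Ce / Qe)[0, 1] ** 2

print(f"Freundlich: Kf = {Kf:.2f}, n = {n:.2f}, r^2 = {r2_f:.3f}")
print(f"Langmuir:   qmax = {qmax:.2f} mg g^-1, b = {b:.2f} mM^-1, r^2 = {r2_l:.3f}")
# As in the text, the isotherm giving the higher r^2 (and a physically
# reasonable qmax) is taken as the better description of the adsorption.
```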
Selectivity tests
Selectivity tests for the 1:4:16 CA-MIP and NIP were carried out utilizing 80 µM solutions of sinapic acid (SA), p-coumaric acid (p-COA), ferulic acid (FA), chlorogenic acid (CLA), and rosmarinic acid (RA) from the hydroxycinnamic acid class, like caffeic acid (CA); 4-hydroxybenzoic acid (4-HBA), gallic acid (GA), protocatechuic acid (PA), 3,4-dihydroxybenzoic acid (3,4-diHBA), and vanillic acid (VA) from the hydroxybenzoic acid class; and catechin (CAT) from the flavonoid class. The reason for using these antioxidant standards in the selectivity studies is to determine specific and nonspecific adsorptions by testing the selectivity of the obtained polymer towards antioxidants from the phenolic acid and flavonoid classes, which are the ones mostly found in plant extracts. Figure 6 shows the adsorption quantities of the MIP and NIP. It has often been observed that analytes can be retained by MIPs and NIPs through nonspecific interactions assisted mainly by solvophobic effects [34]. Due to nonspecific ionic interactions, the adsorption amounts obtained with the MIP and NIP are close to each other, because these interactions are not specific to the MIP. Since such adsorptions are not specific, they can easily be removed from the polymer with suitable solvent systems during the washing steps of the MISPE studies. The increasing order of adsorption amounts of the phenolics (µmol g -1 ) on the MIP was as follows (Figure 6): FA (1.26) < p-COA (1.70) < VA (1.72) < SA (1.86) < 4-HBA (2.24) < CAT (3.46) < 3,4-diHBA (4.30) < GA (5.37) < CA (6.65) < CLA (7.65) < RA (7.90).
CA-MISPE applications
3.3.1. Synthetic mixture
One and five-tenths milliliters of a mixture in ACN comprising 3 × 10 -4 M each of VA, CA, p-COA, FA, and CAT was loaded into the 1 mL SPE cartridge packed with 50 mg of 1:4:16 CA-MIP at a flow rate of 0.3 mL min −1 . Figure 7 shows the chromatograms of all the MISPE step solutions.
Figure 7A shows that the cartridge retained mostly CA, but the other phenolic compounds were also retained. p-COA and FA have hydroxycinnamic acid structures like CA, whereas CAT belongs to the flavanol class of flavonoids; CAT nevertheless has a 3,4-dihydroxyphenyl structure similar to that of CA. Moreover, VA is a small-molecule derivative of hydroxybenzoic acid. Such structural properties of these compounds therefore cause some nonspecific adsorption by the polymer. However, since these retentions are expected to be weaker than that of CA, which is the template molecule, the aim was to remove these compounds from the MIP by washing with suitable solvent systems. Figure 7B shows the chromatograms of the washing solutions. In these chromatograms, some CA also leaves the MIP during the washing steps. After the washing step, the CA retained in the polymer was recovered with MeOH:HAc (4:1, v/v). Figure 7C shows the elution chromatogram. The other compounds were removed from the cartridge in the washing step, and CA recovery was achieved in the elution step with a slightly lower yield (49%) (Figure 7C).
MISPE application of green coffee bean extract with CA-MIP
After evaporation, 1 mL of the extract dissolved in ACN:DMSO (98:2, v/v) was taken and diluted to 4 mL with ACN. Three and five-tenths milliliters of this extract, which was dried with anhydrous Na 2 SO 4 , was taken and loaded into a 3 mL SPE cartridge containing 100 mg of 1:4:16 CA-MIP conditioned with 10 mL ACN, at a rate of 0.3 mL min -1 . Figure 8A compares the chromatograms of the extract before and after loading. The detection wavelength was 320 nm, which is close to the absorption maximum of CLA.
As can be seen, other components were retained in the column besides chlorogenic acid. Since components other than chlorogenic acid are not specifically retained on the CA-MIP, the aim was to remove them from the cartridge by washing with suitable solvent systems. Figure 8B shows the chromatograms of the washing solutions taken from the column. For the green coffee bean extract, as can also be seen in these chromatograms, CLA showed the strongest affinity to the CA-MIP, but caffeine was adsorbed onto the CA-MIP in small amounts. This result was attributed to the ability of phenolic compounds to form hydrogen bonds via their -OH groups without entering the cavities of the MIP, and to nonspecific ionic interactions with the heterogeneous binding sites of the MIP. Taking advantage of this, caffeine was removed by the washing processes. The elution process was then applied with 3 mL and 2 mL of MeOH:HAc (4:1, v/v) in two steps at a flow rate of 0.5 mL min -1 (Table 6).
Discussion
The 4-VP monomer was preferred for the preparation of the CA-MIPs because the IF value for the CA-MIP prepared with this monomer was higher than those prepared with the other monomers (MAA, AA, 1-VI). This result can be explained by the favorable interactions between the basic monomer 4-VP (pKa 5.62) and the acidic template CA (pKa 4.62) [35]. Indeed, many other CA-MIP studies have reported that this monomer is more suitable [5,8,9]. The porogenic solvent has a very important role in the noncovalent interactions of the polymer structure, because the porogen provides the generation of specific cavities for the template. On the other hand, the polarity of the porogen also affects imprinting. Therefore, moderately polar solvents such as THF positively affect the imprinting factor of CA-MIPs. Indeed, it was determined that THF offers a preferable imprinting factor for the CA-MIP. This situation may be due to a competition between the interaction of CA and 4-VP and their interaction with the porogen [36]. Since methanol has a high polarity and an -OH group that can hydrogen bond with caffeic acid, the tendency of CA to hydrogen bond with the monomer during polymerization decreases and CA-shaped pores cannot form during polymerization. When ACN is used, heating is necessary to ensure dissolution, owing to the very low solubility of CA in ACN; since the amount of CA used during polymerization is high, some of the CA remains undissolved, which prevents the formation of CA pores during polymerization. Therefore, it was decided that THF is the most suitable porogenic solvent for the synthesis of CA-MIPs. It was also stated in a CA-MIP study that THF is the most suitable porogenic solvent [9]. The highest imprinting factor was obtained at a ratio of 1:4:16 (T:M:CrL). In the selectivity experiments with the CA-MIP and NIP, the adsorbed amounts of many phenolic substances were compared besides CA and its quinic acid ester CLA. According to Figure 6, the order of the investigated phenolic substances according to the amount adsorbed on the CA-MIP is as follows: FA < p-COA < VA < SA < 4-HBA < CAT < 3,4-diHBA < GA < CA < CLA < RA. The order of the same compounds according to the amount adsorbed on the NIP is as follows: p-COA < SA < VA < FA < 4-HBA < CAT < 3,4-diHBA < CA < GA < RA < CLA. Since in the case of the NIPs there are no specific sites present, the interactions are mainly of ionic nature and thus nonspecific. According to Michailof et al. [9], the interaction differences between a MIP and phenolic compounds can be explained by their molecular structures, namely the sizes and shapes of the molecules and the presence of double bonds, hydroxyl, carboxyl, or methoxy groups in the molecule. Valero-Navarro et al. [8] have reported that a MIP imprinted with CA showed very high selectivity for CLA. They stated that, since CLA is an ester formed between CA and QA, similar shape-selective interactions are expected for both CA and CLA, although the stronger retention of CLA than CA can be explained by its higher polarity and multifunctionality. On the other hand, Li et al.
[6] have shown the opposite result, reporting that CLA was retained more weakly in their MIPs than CA and other structurally similar compounds (VA and GA). In that study, CLA was the first to separate from the polymer column when acetonitrile containing 2% acetic acid was used as the eluent, owing to its weak retention on the monolithic stationary phase of the polymer. In our study, considering the adsorption of FA, which has the hydroxycinnamic acid structure, it is seen that although it is structurally similar to CA, the presence of the 3-methoxy group on the phenyl ring creates a steric hindrance and reduces the hydrogen bonding efficiency; therefore, its adsorption on the MIP and NIP is not very high. Since CLA is the quinic acid ester of CA, it is expected to face a steric barrier to entering the imprinted cavity, but because it contains many hydroxyl groups, it is highly adsorbed by both the MIP and the NIP. Since RA is a phenyl ester of CA and contains many hydroxyl groups, it is highly adsorbed on the CA-MIP and NIP. Since the adsorption difference between the MIP and NIP for the highly adsorbed CLA and RA is lower than that for CA, the adsorption of these compounds is not as specific as that of CA. The adsorption of SA on the MIP and NIP is lower due to the steric hindrance of the 3,5-dimethoxy groups on the phenyl ring. Unlike CA, p-COA contains only the 4-hydroxy group on the phenyl ring; therefore, its adsorption is low on the MIP and NIP. Since GA and 3,4-diHBA, which have the hydroxybenzoic acid structure, have smaller molecular structures than CA and contain a 3,4-dihydroxy moiety like CA, their retention on the polymer is high. The adsorption of VA and 4-HBA by the polymer is low, as they offer only a single point of attachment to the CA-shaped cavity. CAT, which has a flavonoid structure, is adsorbed to some extent by the polymer, since it shares the 3,4-dihydroxyphenyl structure with CA.
Conclusion
In this work, CA-MIPs were synthesized with the aim of isolating, purifying, and preconcentrating CA and CLA from a synthetic mixture and natural extracts. These compounds belong to the hydroxycinnamic acid class of phenolic acids and thus have important biological effects, such as slowing down inflammation and protecting against the harmful effects of free radicals. The CA-MIPs were obtained by a noncovalent bulk polymerization method, and the adsorption features (recognition and selectivity) of the polymers were determined by binding experiments with CA and several phenolic compounds. MISPE applications were performed to preconcentrate and purify CA from a synthetic mixture and CLA from coffee bean extract for the first time.
At the end of these experiments, the recovery yield of CA was 42% and that of CLA 49%. This investigation demonstrates that the obtained CA-MIP can ensure sufficient extraction of CA from complex matrices. In addition, one of the most important advantages of the MIP synthesized at the new mole ratio (1:4:16) and of the applied new MISPE method is that, as a sample preparation (clean-up) material, it facilitates and speeds up the subsequent chromatographic analysis. Thus, our study enables not only the preconcentration and clean-up of CA and CLA, but also the selective extraction of these phenolic compounds from complex or contaminated samples, using the new 1:4:16 MIP ratio reported here.
Figure 1. Schematic presentation of the interaction mechanism of MIP.
Figure 6. Adsorption selectivity of phenolic compounds for CA-MIP and NIP.
Figure 7A,B,C. Chromatograms of (A) (a) before and (b) after sample loading into the MISPE cartridge; (B) washing steps of MISPE: a, washing 1 (2.5 mL ACN); b, washing 2 (0.5 mL).
Table 1. Estimation of CA-MIPs and NIPs (in parentheses) obtained with several molar ratios, monomers and porogens.
Table 3. Adsorption amount of CA on 1:4:16 CA-MIP and NIP and imprinting factors (IFs) at several concentrations of CA.
[Table columns: CA concentration (mM) | Adsorption amount of CA a (mg g -1 ) | Imprinting factor (IF)]
a Values show mean ± SD, n = 3. b Values in parentheses are the amounts of CA adsorbed by the NIP.
Table 4. Freundlich adsorption isotherm values of MIP and NIP for CA.
Table 5. Langmuir adsorption isotherm values of CA-MIP and NIP for CA.
Table 6. CA-MISPE outcomes for the synthetic mixture and green coffee bean extract. | 2023-08-30T15:28:37.312Z | 2023-05-22T00:00:00.000 | {
"year": 2023,
"sha1": "88b6e5032520d5d3dcb4e93c9ecbb421603431a3",
"oa_license": "CCBY",
"oa_url": "https://journals.tubitak.gov.tr/cgi/viewcontent.cgi?article=3572&context=chem",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ceef3f6b5fca0c3279e29ae8138be10e16ebc0ee",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
118635051 | pes2o/s2orc | v3-fos-license | Glauber coherence of single electron sources
Recently demonstrated solid state single electron sources generate different quantum states depending on their operation condition. For adiabatic and non-adiabatic sources we determine the Glauber correlation function in terms of the Floquet scattering matrix of the source. The correlation function provides full information on the shape of the state, on its time-dependent amplitude and phase, which makes the coherence properties of single electron states essential for the production of quantum multi-particle states.
Introduction - The recent realization of triggered electron sources that inject single electrons on demand into high mobility semiconductors attracts increasing attention to the field of quantum coherent electronics [1][2][3][4][5]. Future applications in quantum information processing demand a full characterization of the coherence of the states emitted by such sources [6,7]. The important feature of on-demand injected particles is that they are traveling wave-packets with a spatial extent that is less than the distance between them. Depending on the operating conditions of the source, wave-packets of different spatial and temporal shape can be created [1,4]. Such wave packets are able to interfere with themselves over a restricted interval of space and time, which sets the limits on the synchronization of multiple single electron sources needed to generate multi-particle states on demand. It is the purpose of this work to present a full characterization of the coherence of the single particle states generated by on-demand sources.
In optics the coherence of light is discussed with the help of correlation functions introduced by Glauber [8].
The first-order correlation function reads $G^{(1)}(\mathbf{r}_1 t_1, \mathbf{r}_2 t_2) = \langle E^{(-)}(\mathbf{r}_1 t_1)\, E^{(+)}(\mathbf{r}_2 t_2) \rangle$, where the electric field of a light-beam is split into positive, $E^{(+)}$, and negative, $E^{(-)}$, frequency terms [9]. The first-order Glauber correlation function can be extracted from time- and space-resolved intensity (optics) or current (electronics) at the output of an interferometer, see Fig. 1.

[FIG. 1: Schematic representation of an MZI threaded by a magnetic flux Φ. With a time-resolved measurement of the current in one of the output arms, one can access the first-order correlation function $G^{(1)}$ as a function of the time delay of the interferometer ∆τ and the time t. This allows one to reconstruct the incoming single-particle state emitted by the SES. In the adiabatic regime, the current pulse emitted by the SES has a Lorentzian shape, with a width $2\Gamma_{SES}$.]

Remarkably, the characterization of single photons has been achieved very recently with space-resolved measurement of the intensity [10,11]. In mesoscopic systems, time-resolved current measurements on the scale of single electron wave packets have recently been demonstrated [1]. This makes it possible to reconstruct the single-particle state from current measurements, as well as the complex wave function, the duration of the wave packet and the coherence time. Therefore the Glauber correlation function is the central object and, in this Letter, we discuss
it for the states of adiabatic and non-adiabatic emitters. Importantly, for a single-particle state the second and higher-order correlation functions are zero, since not more than one particle can be measured at a time [12]. For a source that emits particles periodically, the second-order correlation function is measured to demonstrate a single-photon source [13,14]. By analogy, the single-particle nature of the electron state of interest here can be inferred from the zero-frequency current noise measurement [4].
The fermionic first-order correlation function can be defined in close analogy with the bosonic one [15]. However, the single electrons we are interested in are injected into the conductor together with the other electrons constituting the Fermi sea. Importantly, the underlying Fermi sea has a non-zero correlation function, which can naturally be treated as the reference point [6]. Therefore, we define the first-order correlation function as $G^{(1)}(t_1, t_2) = \langle \hat{\Psi}^{\dagger}(t_1)\hat{\Psi}(t_2) \rangle - \langle \hat{\Psi}^{\dagger}(t_1)\hat{\Psi}(t_2) \rangle_0$, with $\hat{\Psi}(t_{1,2})$ a single-particle electronic field operator at times $t_{1,2}$. We omit the spatial coordinates $(\mathbf{r}_1, \mathbf{r}_2)$ of the correlation function, since the current is measured in the reservoir at $\mathbf{r}_1 = \mathbf{r}_2$. The angular brackets denote the quantum-statistical average over the state of the Fermi sea, and the subscript 0 indicates that the single-electron source (SES) is not active. This electronic first-order correlation function $G^{(1)}$ is accessible in a Mach-Zehnder interferometric (MZI) set-up. The electronic MZI was first reported in the two-dimensional electron gas in high magnetic field in the quantum Hall regime [16]. Experimentally it has been shown to exhibit high visibility while varying a phase φ by tuning the magnetic flux Φ enclosed by the arms of the MZI and/or the time delay between its arms. Below we show that the interference part of the current at the output of the MZI is written in terms of the correlation function $G^{(1)}$ as (up to a setup-dependent prefactor)

$I_{\mathrm{int}}(t) \propto \mathrm{Re}\left[ e^{i\phi}\, G^{(1)}(t-\tau_u,\, t-\tau_d) \right]. \quad (1)$

Here $\tau_{u,d}$ are the traversal times for the upper and lower arms of the MZI. Fixing the phase φ to zero or π/2 gives access experimentally to the real or imaginary parts of the correlation function, respectively. This allows us to extract the shape of the single-particle state, its phase and its coherence properties from a measurement of the full time-dependence of the first-order correlation function. The most challenging step, the time-resolved measurement of a current at the nano-second scale characteristic of a single-electron wave-packet, was recently shown to be possible [1].
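As a rough illustration of this readout scheme, the following sketch combines two synthetic interference traces recorded at φ = 0 and φ = π/2 into a complex $G^{(1)}$; the setup-dependent prefactor is assumed known and absorbed into a constant, and the test state is purely hypothetical:

```python
import numpy as np

def g1_from_interference(I_phi0, I_phi90, prefactor=1.0):
    """Combine interference currents measured at phi = 0 and phi = pi/2 into
    a complex G^(1), assuming I_int = prefactor * Re[exp(i*phi) * G^(1)]:
      phi = 0    ->  I_int =  prefactor * Re[G^(1)]
      phi = pi/2 ->  I_int = -prefactor * Im[G^(1)]
    """
    return (I_phi0 - 1j * I_phi90) / prefactor

# Self-test with a synthetic single-particle state (arbitrary shape and phase):
t = np.linspace(-5.0, 5.0, 201)
g_true = np.exp(1j * 2.0 * t) / (1.0 + t**2)
I0, I90 = g_true.real, -g_true.imag      # what the two settings would record
assert np.allclose(g1_from_interference(I0, I90), g_true)
```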
Model and first-order correlation function - To be specific, we focus on the single-particle states emitted by the on-demand source of Ref. 1. This source consists of a mesoscopic capacitor [17][18][19][20] driven by a periodic potential V(t). Built in the quantum Hall regime, the SES is made of a small cavity with a confined circular edge state, which is connected via a quantum point contact (QPC) with transmission $T_{SES} \ll 1$ to the nearby linear edge state. By shifting the levels of the cavity above and below the Fermi sea level with V(t), the emission of a single electron and a single hole in one period of the potential is achieved [1]. Within a scattering-matrix approach, the SES is described by a Floquet scattering amplitude $S_{SES}(E_m, E)$, calculated in Ref. 21, where the energy of the outgoing particle, $E_m = E + m\hbar\Omega$, differs from the energy E of the incoming particle by $m\hbar\Omega$.
Here Ω is the frequency of the periodic potential and m is an integer. In the quantum Hall regime, the chirality of the edge states due to the absence of backscattering [22,23] allows us to write the scattering amplitude of the entire system, $S(E_m, E)$, as the product of the scattering amplitude of the MZI, calculated at energy $E_m$, with the Floquet scattering amplitude $S_{SES}(E_m, E)$ of the source [3,7]. Then the outgoing current, Eq. (2), is expressed in terms of the current emitted by the cavity, $I_{SES}$, and of the first-order correlation function introduced above, $G^{(1)}$. The coefficients $R_{L,R}$ and $T_{L,R}$ are the reflection and transmission probabilities for the left and right QPCs of the MZI, respectively. The term $\phi = 2\pi\Phi/\Phi_0 + k_\mu v_D \Delta\tau$ corresponds to the phase difference acquired by an electron with Fermi energy µ traveling along the upper and lower arms of the interferometer, where $\Phi_0 = h/e$ is the flux quantum, $k_\mu$ and $v_D$ are the wave vector and the drift velocity, both evaluated at the Fermi energy, and $\Delta\tau = \tau_u - \tau_d$ is the time delay of the interferometer. The time-dependent current emitted by the source is given in Eq. (3) [24], and the first-order correlation function, Eq. (4), is expressed in terms of the Floquet scattering amplitude of the source $S_{SES}$ (we denote $t_u \equiv t - \tau_u$ and $t_d \equiv t - \tau_d$). Here we have introduced the Floquet scattering amplitude of the source in a mixed energy-time representation, $S_{SES}(t, E) = \sum_n e^{-in\Omega t}\, S_{SES}(E_n, E)$. Importantly, Eq. (4) derived here is valid under arbitrary emission conditions. This is in contrast to Ref. [7], where we used the version of Eq. (4) valid in the adiabatic regime only. Moreover, in Ref. [7], we defined the single-particle coherence on the basis of an interference current. In contrast, in the present Letter, we adopt the Glauber definition of the correlation function and show precisely how it is connected to the interference current. Adiabatic versus non-adiabatic regimes - We illustrate our claim that we can fully characterize the single-particle state by its first-order correlation function, Eq. (4), by considering the source of Ref. [1] in the two operation regimes in which single-particle emission can be achieved, namely the adiabatic and non-adiabatic regimes. In the following, we assume zero temperature. If the temporal shape of the periodic driving potential V(t) = V(t + 2π/Ω) varies on a time scale much larger than the dwell time $\tau_D$ of the source, defined as the time that the particle remains inside the cavity, the operation regime of the source is called adiabatic [25]. Experimentally, it can be reached with a sinusoidal potential $V_{ad}(t) = V_0 \cos(\Omega t)$ with $\Omega\tau_D \ll T_{SES}$ [24]. This last assumption ensures that an electron has enough time to leave the cavity while the topmost occupied level crosses the Fermi energy, see Fig. 2a). Here $V_0$ is the amplitude of the potential. In this regime, the single-particle states are emitted close to the Fermi sea, and the energy in Eqs. (3,4) is therefore well approximated by the Fermi energy µ. The SES is described by the frozen scattering amplitude [21], which, close to the emission time $t^-$ of an electron, reads [26]: $S^{ad}_{SES,e}(t, \mu) = (t - t^- + i\Gamma)/(t - t^- - i\Gamma)$. The corresponding current emitted by the SES consists of a Lorentzian pulse, $I^{ad}_e(t) = (e/\pi)\,\Gamma/[(t - t^-)^2 + \Gamma^2]$, where the half-width of the current pulse, Γ, is proportional to $T_{SES}/\Omega$. Importantly, it sets the lifetime (or relaxation time) of the emitted single-particle state, $T^{ad}_1 = \Gamma$.
To find the coherence time $T_2$ of the emitted state, we look at the correlation function, which now reads

$G^{(1)}_{e,ad}(t_u, t_d) = \frac{\Gamma}{\pi v_D}\,\frac{1}{(t_u - t^- + i\Gamma)(t_d - t^- - i\Gamma)}. \quad (5)$

The characteristic time of decay of $G^{(1)}$ with respect to the time delay $\Delta\tau = \tau_u - \tau_d$ is by definition the coherence time $T_2$ of the single-particle states. To make clear the dependence on ∆τ, we introduce the middle time $t = (t_u + t_d)/2$ and write $t_u = t - \Delta\tau/2$, $t_d = t + \Delta\tau/2$.

[FIG. 3: Real and imaginary parts of $G^{(1)}_{e,ad}$. The real part of the $G^{(1)}_{e,ad}$ function at ∆τ = 0 corresponds to the current pulse emitted by the source as a function of t, whereas its imaginary part is zero as expected.]
Thus we find from Eq. (5) that $T_2$ is set by twice the lifetime of the current pulse, $T^{ad}_2 = 2\Gamma$. The relation $T^{ad}_2 = 2T^{ad}_1$ means that the emitted state is a Fourier-transform-limited one [27]. This important result tells us that the SES has no intrinsic dephasing time $T_\varphi$, since the three times are related via $1/T_2 = 1/(2T_1) + 1/T_\varphi$ [12,28]. Additional dephasing processes within the MZI would lead to a faster decay of the interference part of the measured current [29][30][31][32], but would not modify the coherence properties of the states emitted by the source. The real and imaginary parts of the correlation function for adiabatically emitted electrons are shown in Fig. 3. They allow one to reconstruct the shape of the incoming wave-packet as well as its phase [33]. The correlation function for the hole, $G^{(1)}_{h,ad}$, is given by the complex conjugate of Eq. (5), with the electron emission time $t^-$ replaced by the hole emission time $t^+$.
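A short numerical check of this statement, using the Lorentzian form of Eq. (5) with Γ = 1 and $v_D$ = 1 (a dimensionless illustration, not a fit to experimental data):

```python
import numpy as np

Gamma, t_minus, vD = 1.0, 0.0, 1.0

def g1_ad(tu, td):
    """Adiabatic first-order correlation function, Eq. (5)."""
    return (Gamma / (np.pi * vD)) / ((tu - t_minus + 1j * Gamma) *
                                     (td - t_minus - 1j * Gamma))

# |G^(1)| versus the interferometer delay, at middle time t = t_minus:
dtau = np.linspace(0.0, 10.0, 1001)
mag = np.abs(g1_ad(t_minus - dtau / 2, t_minus + dtau / 2))

# The half-width at half-maximum in dtau reproduces T2 = 2*Gamma:
T2 = dtau[np.argmin(np.abs(mag - mag[0] / 2))]
print(f"T2 ~ {T2:.2f} (expected 2*Gamma = {2 * Gamma})")
```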
The non-adiabatic regime is reached when the driving potential varies much faster than the dwell time $\tau_D$. Experimentally, the emission of single-particle states has been observed in this regime with a square potential in the GHz range [1]. Importantly, while the potential changes on a time scale faster than $\tau_D$, the overall cycle remains much longer than $\tau_D$, ensuring that an electron has been emitted before the excitation leading to the hole emission starts, see Fig. 2b) [34]. This corresponds to the condition $\tau_D \ll \pi/\Omega$, which can be fulfilled at higher frequencies than the condition for the adiabatic regime. To provide simple analytical equations, we assume the optimal conditions used in the experiment [1,35]: the Fermi level lies exactly in the middle of two successive cavity levels, and the square potential $V_{na}(t)$ applied to the cavity shifts the levels sharply by one level spacing ∆ at time $t^-$. With such a driving, the Floquet amplitude given in Ref. 21 can be cast into a form appropriate for analytical calculations, Eq. (6), in which the scattering amplitude of the cavity with the stationary potential enters. Since $\tau_D = h/(T_{SES}\Delta) \ll 2\pi/\Omega$, the emissions of an electron and a hole close to $t^-$ and $t^+$ are independent of each other. Therefore, as before, we concentrate on the electron emission only. Calculating the current emitted by the SES close to $t^-$ from Eq. (3), we reproduce the well-known exponential decay [1,21], $I^{na}_e(t) = (e/\tau_D)\,\Theta(t - t^-)\, e^{-(t - t^-)/\tau_D}$, with Θ(x) the Heaviside step function. From the temporal shape of the current pulse, we extract the lifetime of the single-particle state in the non-adiabatic regime, namely $T^{na}_1 = \tau_D$. Remarkably, in contrast to the current pulse in the adiabatic regime, the pulse $I^{na}_e(t)$ is highly asymmetric in time, as shown in Fig. 2b) [36,37]. This strong asymmetry is a signature of a non-adiabatic emission process and is also present in the first-order correlation function. Indeed, inserting Eq. (6) into Eq. (4) we find

$G^{(1)}_{e,na}(t_u, t_d) = \frac{e^{-i\pi\Delta\tau/\tau}}{\tau_D v_D}\, \Theta(t_u - t^-)\, \Theta(t_d - t^-)\, e^{-(t_u + t_d - 2t^-)/(2\tau_D)}. \quad (7)$

The factor $\exp(-i\pi\Delta\tau/\tau)$ reflects the fact that the single-particle states are emitted at energy ∆/2 above the Fermi energy µ ($\tau \equiv h/\Delta$). Due to the presence of the Heaviside step functions, the middle time $t = (t_u + t_d)/2$ has to be larger than $t^- + \Delta\tau/2$ for $G^{(1)}_{e,na}$ to be nonzero, as shown in Fig. 4.

[FIG. 4: $G^{(1)}_{e,na}(t - \Delta\tau/2,\, t + \Delta\tau/2)$, Eq. (7), in units of $1/(\tau_D v_D)$. The exponential factor $e^{-i\pi\Delta\tau/\tau}$ is omitted as it sets the energy at which the single-particle state is emitted (see text). Here $t^-$ is set to 0. The correlation function clearly reflects the temporal shape of the single-electron state emitted by the source, which is set by $T_1$ and $T_2$ as a function of t and ∆τ, respectively.]

Thus we see that the first-order correlation function decays with increasing ∆τ with a characteristic time $T^{na}_2 = 2\tau_D$. Similarly to the adiabatic regime, the coherence time is equal to twice the lifetime, $T^{na}_2 = 2T^{na}_1$, witnessing the absence of intrinsic dephasing in the SES.
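Analogously, Eq. (7) can be evaluated numerically; the sketch below ($\tau_D$ = 1, with an illustrative value for τ = h/∆) reproduces the step-function support and the exponential current profile:

```python
import numpy as np

tau_D, t_minus, vD = 1.0, 0.0, 1.0
tau = 0.5  # tau = h/Delta; illustrative value only

def g1_na(tu, td):
    """Non-adiabatic first-order correlation function, Eq. (7)."""
    dtau = td - tu  # since tu = t - dtau/2 and td = t + dtau/2
    step = (tu > t_minus) & (td > t_minus)
    return (step * np.exp(-1j * np.pi * dtau / tau)
            * np.exp(-(tu + td - 2 * t_minus) / (2 * tau_D)) / (tau_D * vD))

# The equal-time value reproduces the exponential current profile (e set to 1):
t = np.linspace(-1.0, 5.0, 601)
I = vD * np.real(g1_na(t, t))  # = Theta(t - t^-) * exp(-(t - t^-)/tau_D)/tau_D
```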
Conclusions - We have shown that an MZI setup is appropriate for the full characterization of the coherence properties of single electrons and holes propagating in solids. We have provided a general expression for the Glauber correlation function $G^{(1)}$ in terms of the Floquet scattering amplitude of the source. The coherence time enabled us to show that the source of Ref. 1 has no intrinsic dephasing time, which makes the emitted single-particle states of high interest for future experiments in quantum electronics. Importantly, the time-resolved measurement of the first-order correlation function $G^{(1)}$ is within reach of present-day experimental capabilities, permitting direct access to a single-electron quantum state. | 2012-12-01T09:23:42.000Z | 2012-12-01T00:00:00.000 | {
"year": 2012,
"sha1": "8ca6d0f39a7f6cf08de73e312458e733201ef410",
"oa_license": null,
"oa_url": "https://refubium.fu-berlin.de/bitstream/fub188/14076/1/PhysRevB.87.201302.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8ca6d0f39a7f6cf08de73e312458e733201ef410",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
98320338 | pes2o/s2orc | v3-fos-license | 10.1007/s11434-011-4524-x Numerical investigation on the cluster effect of an array of axial flow fans for air-cooled condensers in a power
The aerodynamic behavior of the tens of axial flow fans incorporated with air-cooled condensers in a power plant is different from that of an individual fan. Investigation of the aerodynamic characteristics of an axial flow fan array benefits its design optimization and running regulation. Based on a representative 2×600 MW direct dry cooling power plant, the flow rate of each fan and the overall flow rate of the fan array are obtained in the absence of ambient wind and at various wind speeds and directions, using CFD simulation. The cluster factor of each fan and the average cluster factor of the fan array are calculated and analyzed. Results show that the cluster factors differ from each other and that the cluster effect with ambient wind is significantly different from the cluster effect with no wind. A fan at the periphery of the array or upwind of the ambient wind generally has a small cluster factor. The average cluster factor of the array decreases with increasing wind speed and also varies widely with wind direction. The cluster effect of the axial flow fan array can be applied to optimize the design and operation of air-cooled condensers in a power plant.
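As a minimal sketch of the quantities just defined (all flow rates and the array layout below are hypothetical; the cluster factor of a fan is assumed to be its flow rate in the array divided by that of an identical isolated fan):

```python
import numpy as np

Q_isolated = 600.0  # m^3/s, assumed flow rate of a single unobstructed fan

# Hypothetical CFD flow rates for a 3 x 4 corner of the fan array (m^3/s);
# peripheral and upwind fans typically show reduced flow.
Q_array = np.array([
    [520.0, 555.0, 560.0, 530.0],
    [545.0, 590.0, 595.0, 550.0],
    [525.0, 560.0, 565.0, 535.0],
])

cluster_factor = Q_array / Q_isolated       # per-fan cluster factor
avg_cluster_factor = cluster_factor.mean()  # average over the whole array

print(np.round(cluster_factor, 3))
print(f"average cluster factor = {avg_cluster_factor:.3f}")
```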
An increased focus on water conservation, combined with continuing concern over the environmental effects of both once-through and evaporative cooling, has increased the popularity of dry cooling technology. In particular, the use of air-cooled condensers (ACCs) in power plants for condenser heat rejection is expected to increase [1]. Large-scale air-cooled condensers in a power plant consist of an array of condenser cells. In each condenser cell, the finned tube bundles are arranged in the form of an A-frame fitted with an axial flow fan below. Ambient air is impelled by the fans to flow through the finned tube bundles, and the thermal duty of the exhaust steam from the turbine is removed in the air-cooled condensers. Many studies have found that finned tube bundles and axial flow fans work poorly in a wide range of specific climates, especially at large wind speeds and adverse wind directions.
Meyer and Kroger [2] developed a numerical model to simulate the effect of an axial flow fan on the velocity field in the vicinity of the fan blades. In this model, the axial flow fan is regarded as an actuator disc. Using the actuator disc fan model, Meyer [3] numerically studied the effect of inlet flow distortions and found that inlet flow losses of the periphery fan are dominated by flow separation around the inlet lip of the fan inlet section. These flow losses can be reduced by installing a walkway at the edge of the fan platform or by removing the periphery fan inlet section. Hotchkiss et al. [4] used computational fluid dynamics (CFD) methods to investigate the effects of ambient wind cross flows on the performance of axial flow fans in air-cooled condensers. van Rooyen and Kroger [5] studied the air flow around and through certain air-cooled condensers, assessing the performance of the fan with the actuator disc model. Bredell et al. [6] also used the actuator disc fan model to simulate the effect of fans in the flow domain of air-cooled condensers, and concluded that distorted inlet flow conditions can cause azimuthal variation in fan blade bending moments. Coetzee and du Toit [7] used blade element theory to determine the torque and thrust exerted on the air by fan blades, and investigated the influence of end effects on flow fields in the vicinity of the heat exchanger. Meyer and Kroger [8] studied the aerodynamic behavior of an air-cooled heat exchanger plenum chamber for different fan performances, using CFD simulation. Bredell et al. [9] numerically investigated the effect of inlet flow distortions on the flow rate through the fans, and considered the volumetric effectiveness of two different types of axial flow fans at different platform heights. Their results also showed that the addition of a walkway can significantly increase the flow rate through the fans near the edge of the fan platform. Duvenhage and Kroger [10] investigated the influence of wind on fan performance and cooling air recirculation in an air-cooled condenser bank, concluding that cross wind significantly reduces the air flow rate in the upwind condenser cells, and that wind along the longitudinal axis causes increased hot plume air recirculation. Duvenhage et al. [11] numerically and experimentally studied fan performance in air-cooled condensers under inlet flow distortions. Salta and Kroger [12] experimentally investigated the flow rate reduction in air-cooled condensers caused by distorted inlet air flows. Meyer and Kroger [13] experimentally investigated the effects of different fan and heat exchanger characteristics, as well as the plenum chamber geometry, on the flow losses of air-cooled condensers. Meyer and Kroger [14] empirically measured the influence of the air-cooled heat exchanger geometry on inlet air flow losses, and formulated an equation based on their experimental results to calculate the heat exchanger inlet flow losses. Wang et al. [15] investigated the overall flow and temperature fields of air in the power plant by CFD simulation, finding that wind effects and fan suction induce plume recirculation. Installation of a side board below or above the fan platform was suggested to avoid such recirculation.
The aforementioned research has shown that reduced axial flow fan performance is caused by the inlet flow distortions produced by ambient wind cross flows. Ambient wind can have adverse impacts on the fans and air-cooled condensers. In most of the studies, the turbine and boiler buildings of the power plant are neglected, so the impacts of these buildings on the inlet air flows of the axial flow fan array are not considered. Furthermore, the A-type plenum chamber of the condenser cell is commonly simplified as a cubic box. For air-cooled condensers with no ambient wind, the performance of the axial flow fan array is still not well understood. In this paper, the cluster effect of the axial flow fan array used by air-cooled condensers in a representative 2×600 MW power plant is investigated, with no wind and at various wind speeds and directions. This investigation benefits the design and operation of axial flow fans for air-cooled condensers in a power plant.
Computational models
The thermal-flow characteristics of an array of A-frame heat exchangers and fans are essentially different from those of an individual condenser cell. A novel concept, the cluster factor, is proposed to describe the cluster effect of an array of axial flow fans in air-cooled condensers [16]. This factor is defined as the ratio of the volumetric flow rate of each fan in the axial flow fan array to that of an individual fan, as follows:

ε_i = Q_v,i / Q_v,ind,   (1)

where Q_v,i is the volumetric flow rate of each fan in the axial flow fan array and Q_v,ind is the volumetric flow rate of an individual fan running independently. For all fans in the array, the average cluster factor ε_m is defined as

ε_m = (1/n) Σ_{i=1}^{n} ε_i,   (2)

where n is the total number of fans in the array. It can be seen from eq. (1) that the cluster factor denotes the flow difference between a particular fan in the fan array and the independently running fan; the greater the flow difference, the greater the cluster effect of the fan. The average cluster factor is used to evaluate the overall cluster effect of all the axial flow fans, as expressed in eq. (2). This factor is a measure of the aerodynamic interaction among all the axial flow fans.

The typical axial flow fan used with air-cooled condensers in a power plant, manufactured by Baoding Huiyang Fan Plant, China, is shown schematically in Figure 1. It consists of 6-8 fan blades and has a small hub-tip ratio. For the combined finned tube bundle and fan system, the flow and heat transfer through the finned tube surfaces should be solved simultaneously with the aerodynamic characteristics of the axial flow fans. A possible alternative to an experimental investigation is to use commercially available CFD code to resolve the flow field through the fans and the thermal-flow fields through the finned tube bundles. However, the flow complexities and geometrical modeling difficulties associated with the fan blade passages and finned tube bundles represent a great computational challenge [9]. Simplification of the fans and finned tube surfaces should therefore be considered.
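As a quick illustration of eqs. (1) and (2) above, the sketch below computes per-fan cluster factors and their average from volumetric flow rates. The symbols ε_i and ε_m follow the reconstruction above, and all flow-rate values are made-up placeholders, not CFD results from this paper.

```python
import numpy as np

# Hypothetical flow rates (m^3/s); placeholders, not results from the paper.
Q_ind = 600.0                                  # individually running fan, Q_v,ind
Q_array = np.array([[520.0, 560.0, 575.0],     # Q_v,i for a toy 2 x 3 fan array
                    [480.0, 555.0, 570.0]])

eps = Q_array / Q_ind          # eq. (1): cluster factor of each fan
eps_m = eps.mean()             # eq. (2): average over all n fans
print(np.round(eps, 3))
print(f"average cluster factor = {eps_m:.3f}")
```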
The fan model described in this paper is a lumped-parameter model. It can be used to determine the impact of a fan with known characteristics on a larger flow field. The model allows input of an empirical fan curve that governs the relationship between the head (pressure rise) and the flow rate (velocity) across a fan element. The radial and tangential components of the fan swirl velocity can also be specified. The fan model does not provide an accurate description of the detailed flow through the fan blades; rather, it predicts the amount of flow through the fan. For the combined finned tube bundle and fan system, the flow rate is determined by the balance between the losses in the bundle and the fan curve.
In the fan model, the fan is considered to be infinitesimally thin, and the discontinuous pressure rise ∆p across it is specified as a polynomial function of the axial velocity v through the fan:

∆p = Σ_{n=1}^{N} f_n v^(n−1),   (3)
where f_n are the polynomial coefficients. In terms of the performance curve of the typical fan used by air-cooled condensers, the polynomial coefficients are calculated and listed in Table 1.
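A minimal sketch of this lumped fan curve follows; the coefficient values are hypothetical stand-ins for Table 1, and the polynomial convention is the one reconstructed in eq. (3).

```python
# Hypothetical fan-curve coefficients f_1..f_3; stand-ins for Table 1,
# not the published values.
f = [170.0, -6.0, -0.5]

def fan_pressure_rise(v: float) -> float:
    """Pressure rise (Pa) across the fan face for axial velocity v (m/s), eq. (3)."""
    return sum(fn * v**n for n, fn in enumerate(f))   # f_1 + f_2*v + f_3*v^2

for v in (2.0, 4.0, 6.0):
    print(f"v = {v:.1f} m/s -> dp = {fan_pressure_rise(v):.1f} Pa")
```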
Owing to the three-dimensional flow complexities caused by the fan blades, tangential and radial velocities are imposed on the fan surface to generate swirl. These velocities can be specified as functions of the radial distance from the fan center. In this paper, the radial velocity is neglected. The tangential velocity component U can be specified as

U = Σ_{n=1}^{N} g_n r^(n−1),   (4)

where r is the radial distance from the fan center and g_n are the polynomial coefficients. When the geometric parameters of the fan blade are known, the tangential velocity at different radial distances can be obtained. The polynomial coefficients that fit the tangential velocity are listed in Table 2. The finned tube bundle of the air-cooled condensers is simplified as a lumped-parameter radiator. In the radiator model, the pressure drop ∆p varies with the normal component of velocity v through the finned tube bundle as follows:

∆p = K_L (1/2) ρ v²,   (5)

where ρ is the air density and K_L is the non-dimensional loss coefficient, which can itself be expressed as a polynomial function

K_L = Σ_{n=1}^{N} r_n v^(n−1),   (6)

where r_n are the polynomial coefficients. For the widely adopted Single Row Condenser (SRC) design in power plants, the heat exchanger surface is generally the wave-finned flat-tube bundle. According to the flow loss experiment of cooling air through the wave-finned flat-tube bundle, the polynomial coefficients are obtained as shown in Table 3.
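The swirl profile of eq. (4) and the bundle loss of eqs. (5)-(6) can be sketched the same way; again, the coefficients below are placeholders standing in for Tables 2 and 3.

```python
rho = 1.225   # air density, kg/m^3 (standard conditions)

g = [0.0, 2.5, -0.15]     # hypothetical swirl coefficients g_1..g_3 (Table 2 stand-ins)
r_c = [30.0, -4.0, 0.3]   # hypothetical loss coefficients r_1..r_3 (Table 3 stand-ins)

def swirl_velocity(r: float) -> float:
    """Tangential velocity (m/s) at radial distance r (m) from the fan center, eq. (4)."""
    return sum(gn * r**n for n, gn in enumerate(g))

def bundle_pressure_drop(v: float) -> float:
    """Pressure drop (Pa) through the finned tube bundle, eqs. (5)-(6)."""
    K_L = sum(rn * v**n for n, rn in enumerate(r_c))   # dimensionless loss coefficient
    return K_L * 0.5 * rho * v**2

print(swirl_velocity(2.0), bundle_pressure_drop(3.0))
```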
The heat flux q from the radiator to the surrounding air is given as

q = h (T_s − T_a,d),   (7)

where T_s is the condensing temperature of the exhaust steam. If the conductive thermal resistance through the tube wall and the condensation thermal resistance are neglected, the tube outer wall temperature can be regarded as T_s. T_a,d is the air temperature downstream of the radiator. The convective heat transfer coefficient h is normally specified as a polynomial function of the normal component of velocity:

h = Σ_{n=1}^{N} h_n v^(n−1),   (8)

where h_n are the polynomial coefficients. These coefficients are given in Table 4, in terms of the experimental heat transfer data of cooling air in air-cooled condensers. For an individually running fan, the physical model of the A-frame condenser cell is shown schematically in Figure 2. The windwall and steam duct are taken into consideration. For the fan array, a representative 2×600 MW direct dry cooling power plant is investigated. The layout of the ACCs and main buildings, including the boiler, turbine houses, and chimney, is shown schematically in Figure 3. Per the typical design data of the power plant, the space between the turbine house and the ACC platform is 12.6 m, and the platform and windwall heights are 45 m and 12 m, respectively. An air-cooled condenser consists of an array of A-frame condenser cells, each fitted with an axial flow fan, as shown in Figure 4. There are two ACCs in this power plant, each having 56 (7×8) condenser cells. Along the x direction, the left ACC is designated No. 1 and the right No. 2. The specification of the condenser cell and fan serial numbers is also shown.
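Tying the radiator thermal relations above together, here is a short sketch of eqs. (7)-(8); the coefficients are placeholders for Table 4, and the temperatures are arbitrary examples.

```python
h_c = [10.0, 8.0, -0.2]   # hypothetical coefficients h_1..h_3 (Table 4 stand-ins)

def heat_flux(v: float, T_s: float = 45.0, T_a_d: float = 30.0) -> float:
    """Radiator heat flux (W/m^2), eq. (7): q = h(v) * (T_s - T_a,d).

    T_s is the condensing temperature and T_a_d the downstream air
    temperature (deg C); both values here are arbitrary examples.
    """
    h = sum(hn * v**n for n, hn in enumerate(h_c))   # eq. (8): h as a polynomial in v
    return h * (T_s - T_a_d)

print(f"q = {heat_flux(3.0):.1f} W/m^2")
```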
The aerodynamic behavior of the fan array with no wind, and with wind at various speeds and directions, is studied. Figure 5 shows the computational domain for these two cases. To eliminate the near-wall effect of the boiler and turbine houses on the flow of cooling air into the ACC, the physical domain of interest should be large enough. Under the no-wind condition, the cylinder volume at the base and the semi-sphere volume at the top form the computational domain, at the center of which the ACCs and main buildings are situated. With non-zero ambient wind, the whole computational domain is represented as a hexahedron. Because of the symmetric structure of the ACCs and main buildings, only half of the wind directions are taken into account (Figure 5(b)). For the central domain containing the ACCs and main buildings, a tetrahedral unstructured grid is used; for the other zones, a hexahedral structured grid is adopted. After validation of grid independence, the final grid number is about 2125770 for the simulation of the fan array with no wind, and about 2089900 with non-zero ambient wind.
The steady-state fluid and heat flow governing equations on the air side of the ACCs are as follows [17]:

∂(ρu_j)/∂x_j = 0,   (9)

∂(ρu_i u_j)/∂x_j = −∂p/∂x_i + ∂τ_ij/∂x_j + ρg_i + S_i,  i = 1, 2, 3,   (10)

∂(ρu_j e)/∂x_j = ∂/∂x_j (λ_eff ∂T/∂x_j) + S_h,   (11)

where u_i is the velocity in the x_i direction, p is the pressure, and g_i is the gravitational acceleration in the x_i direction. In this model, g_i exists only in the −z direction. Because of the large dimensions of the A-frame condenser cell and the great heat rejection of the exhaust steam, the buoyancy effect of the cooling air is considered, and the air is regarded as an incompressible ideal gas. Here e = h − p/ρ + u_i²/2 is the specific total energy, τ_ij = μ_eff(∂u_i/∂x_j + ∂u_j/∂x_i) − (2/3)μ_eff(∂u_k/∂x_k)δ_ij is the stress tensor, and λ_eff is the effective thermal conductivity. S_i is the momentum sink, equal to the pressure drop per unit flow passage length through the tube bundles. S_h is the heat source, namely the heat rejection per unit volume of the air-cooled condenser. The standard k-ε turbulence model is used to describe the flow through the fans and finned tube bundles:
∂(ρku_i)/∂x_i = ∂/∂x_j [(μ + μ_t/σ_k) ∂k/∂x_j] + G_k + G_b − ρε,   (12)

∂(ρεu_i)/∂x_i = ∂/∂x_j [(μ + μ_t/σ_ε) ∂ε/∂x_j] + C_1 (ε/k)(G_k + C_3 G_b) − C_2 ρε²/k,   (13)

where k and ε are the turbulence kinetic energy and its rate of dissipation, and σ_k and σ_ε are the turbulent Prandtl numbers for k and ε, respectively. C_1, C_2, and C_3 are constants. G_k represents the generation of turbulence kinetic energy arising from mean velocity gradients.
G_b is the generation of turbulence kinetic energy by buoyancy; for an ideal gas, G_b = −g_i (μ_t/(ρ Pr_t)) ∂ρ/∂x_i, where Pr_t is the turbulent Prandtl number for energy. The model constants take the standard default values C_1 = 1.44, C_2 = 1.92, σ_k = 1.0, and σ_ε = 1.3, with the turbulent viscosity μ_t = ρC_μ k²/ε and C_μ = 0.09. Under the no-wind condition, the surfaces of the cylinder volume at the base and the semi-sphere volume at the top of the computational domain are assigned pressure inlet and outlet boundaries, respectively. The air temperature at the surfaces of the computational domain is set to 15°C, the usual design ambient temperature for ACCs in China. With non-zero ambient wind, the windward surface at the exterior of the computational domain is the velocity inlet boundary, for which a power-law equation is used to calculate the wind speed at different heights:

u_z = u_10 (z/10)^m,
where u_10 is the wind speed at a height of 10 m, usually measured by the local weather office, and z is the height. Wind speeds ranging from 3 m/s to 15 m/s are assigned to u_10 in the present study to investigate the wind effect on the aerodynamic behavior of the fan array. The exponent m is related to the roughness of the ground and to atmospheric stability; in this paper, m is set equal to 0.2. The outflow boundary condition is set on the downstream surface. On the other surfaces, symmetry boundaries are designated. The ground is given an adiabatic boundary condition as an approximation.
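The power-law inlet profile above is straightforward to evaluate. A minimal sketch, using the paper's exponent m = 0.2 and its studied range of u_10; the heights are arbitrary examples:

```python
import numpy as np

def wind_speed(z, u10: float, m: float = 0.2):
    """Power-law inlet profile: u(z) = u10 * (z / 10)**m, with z in metres."""
    return u10 * (np.asarray(z, dtype=float) / 10.0) ** m

heights = np.array([10.0, 45.0, 100.0])   # e.g. reference height and platform height
for u10 in (3.0, 9.0, 15.0):
    print(u10, np.round(wind_speed(heights, u10), 2))
```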
On the surfaces of the turbine house, boiler houses, and chimney, a constant heat flux is assumed. The surfaces of the support columns are given approximately adiabatic conditions. The surfaces of the steam duct are given a constant temperature equal to the saturation temperature of the exhaust steam. The commercial finite-volume solver Fluent is used to solve eqs. (9)-(13) with the given boundary conditions. The governing equations for momentum and energy are discretized with a finite-volume formulation, using a fully implicit first-order upwind differencing scheme. The SIMPLE algorithm is adopted for pressure-velocity coupling. A convergence criterion of 10⁻⁴, based on the scaled residuals, is prescribed.
Numerical results are validated by comparing the computed and measured inlet air temperatures of a particular condenser cell, as shown in Figure 6. The experiment was conducted in a 4×600 MW direct dry cooling power plant in Shaanxi province, China [17]. The modeling and numerical solution methods used in the 4×600 MW power plant simulation are the same as those used for the 2×600 MW power plant in this paper. The computed and measured inlet air temperatures agree well. Although the computed results show some overestimation, the error is small enough to adequately predict the performance of the fan array and finned tube bundles. The results show that the modeling and numerical methods associated with the air-cooled condensers and main buildings, as well as with the finned tube bundle and fan, are reliable enough for the purposes of this investigation.
Cluster effect with no wind
Even under no-wind conditions, the fans in the array work interdependently. Figures 7 and 8 give the streamlines of the cooling air at particular longitudinal and transverse cross sections of the computational domain. The figures show that the cooling air in the central air-cooled condenser cells flows through the fan almost vertically, similar to an individually running fan. Cooling air at the periphery of the air-cooled condensers flows slantwise through the fan, and inlet flow distortions are clearly visible. The velocity vectors of the swirl flows at the outlet of some fans are shown in Figure 9. With no ambient wind, the swirl flows of all fans are nearly identical.
The volumetric flow rate of an individually running fan and those of all fans in the array are obtained by CFD simulation. The cluster factor of each fan in the condensers is then calculated (shown in Figure 10). The cluster factors differ from fan to fan, and the slantwise inlet flows of Figure 7 result in reduced cluster factors at the periphery. Furthermore, for the fans at columns 7-10, located at the center of Row 1, the cluster factor reaches its minimum compared with the fans along the same column. For fans in the middle rows, the cluster factor is relatively high because of the nearly vertical inlet flows shown in Figure 7.
For the fans at both sides of the condensers, namely at columns 1 and 16, the cluster factor is lower than that of the other fans in the same row. This results from the deteriorated inlet flows of the cooling air, as shown in Figure 8. The diverse cluster factors of the array fans show that fans at the periphery of the condensers work poorly owing to serious inlet flow distortions.
Cluster effect with ambient wind
Cross flows at the fan inlets induced by ambient wind result in more serious inlet flow distortions. As an example of the cluster effect with ambient wind, Figures 11 and 12 show streamlines at particular longitudinal and transverse cross sections, at a wind speed of 9 m/s and a characteristic direction of 90°. For the upwind fans facing the ambient wind, the strong cross flows inhibit the flow of cooling air through them. Furthermore, reversed flows from the finned tube bundle toward the fan may occur, causing a negative flow rate through the fan. This situation is different from hot plume recirculation, in which part of the exiting hot air returns to the inlet of the fan, as shown in Figure 12. For the downstream fans, the cross flows from the ambient wind are restrained, and the cooling air can easily flow through the combined finned tube bundle and fan system. For the transverse cross section across the fans in Row 4, as shown in Figure 12, hot plume recirculation can be observed at both sides of the air-cooled condensers. For the other fans in this row, however, there is no plume recirculation, and the cooling air flows easily through the fan and finned tube bundle. The swirl flow fields at the outlet of some fans are shown in Figure 13. In contrast to the swirl flows with no wind, the swirl flow fields of the upwind fans are seriously affected by the ambient wind; clearly reduced tangential velocities are observed for the upwind fans.

Figures 14 and 15 show the cluster factor of each fan at a wind speed of 9 m/s and characteristic wind directions of 90° and −90°, respectively. Because of the cluster effect of the fan array, the cluster factors of the fans vary widely from each other. At the 9 m/s wind speed and 90° characteristic direction, the cluster factor of upwind fans is lower than that of downstream fans. For some of the fans in Row 1 facing the ambient wind, the cluster factor is even negative. This finding shows that the cluster effect for fans facing the ambient wind is serious, and the resulting deterioration of fan performance is unfavorable to the thermal performance of the finned tube bundles. Except for the fans facing the wind, the cluster factors of the other fans are comparatively high; the wind direction of 90° is thus relatively beneficial to fan array performance. At the characteristic wind direction of −90°, the cluster factor of upwind fans is also lower than that of downstream fans, with a distribution similar to that at 90°. For fans at both sides of the same row, the cluster factor is lower than that of fans in the central part, except for fans in Row 1. This stems from the serious inlet flow distortions for the fans at both sides of the condensers.

Figure 14 Cluster factor of fans with wind speed of 9 m/s and characteristic wind direction of 90°.

Figure 15 Cluster factor of fans with wind speed of 9 m/s and characteristic wind direction of −90°.
When the volumetric flow rate of each fan in the array is obtained, the average cluster factor can be calculated according to eq. (2). Variations of the average cluster factor with wind speed and direction are shown in Figure 16, which illustrates that the average cluster factor varies widely with both. The average cluster factor decreases dramatically with increasing wind speed. For example, at a characteristic wind direction of 90°, the average cluster factor decreases from 0.81 at a wind speed of 3 m/s to 0.52 at a wind speed of 15 m/s, a decrease of about 36%. The cluster effect of the fans is most serious at a wind direction of −90°, indicating that ambient wind blowing from the boiler and turbine houses toward the air-cooled condensers is most disadvantageous to fan operation. When designing the layout of the ACCs and main buildings of the power plant, this prevailing wind direction should be avoided. At wind speeds under 9 m/s, the average cluster factor reaches its maximum at a wind direction of 0°. At wind speeds greater than 9 m/s, however, the wind direction of 90° is most favorable to fan performance. Optimization of the design and operation of the axial flow fans for air-cooled condensers is aided by detailed inquiry into cluster factor variation with ambient wind.
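The quoted 36% drop follows directly from the two endpoint values:

```python
eps_3, eps_15 = 0.81, 0.52   # average cluster factors at u10 = 3 and 15 m/s (90 deg)
print(f"relative decrease = {(eps_3 - eps_15) / eps_3:.1%}")   # -> 35.8%, about 36%
```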
Figure 16 Average cluster factor of fans versus wind speed and characteristic wind direction.
Conclusions
The cluster effect of the axial flow fan array used by air-cooled condensers in a power plant is investigated through simultaneous resolution of the thermal-flow characteristics of the combined finned tube bundle and fan system.
With no ambient wind, inlet flow distortions at fans on the periphery of the air-cooled condensers result in poor fan performance and a reduced flow rate of cooling air. The near-wall impacts of the turbine and boiler houses on the inlet flows of fans also reduce fan performance. The swirl flows of all fans in the array are nearly the same under no-wind conditions.
With ambient wind, the cluster factors of upwind fans are generally lower than those of downstream fans. The cluster effect is most serious for fans facing the ambient wind. The average cluster factor varies widely with wind speed and direction, falling dramatically with increasing wind speed. Ambient wind blowing from the boiler and turbine houses toward the air-cooled condensers is most unfavorable to fan performance. At wind speeds less than 9 m/s, the average cluster factor is maximized at a wind direction of 0°; at wind speeds greater than 9 m/s, however, a 90° wind direction is most beneficial to the fans. | 2019-04-06T00:42:38.126Z | 2011-06-30T00:00:00.000 | {
"year": 2011,
"sha1": "087bf306a9eeb86c903634b38de96c321ebd1a09",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11434-011-4524-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "33f823c8b51a1f63ab153ac732f67658bd55861d",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
145592151 | pes2o/s2orc | v3-fos-license | Global Drums and Local Masquerades
TV broadcasting has existed in Nigeria for more than 50 years (1959-2009). Its development has brought about a series of local responses to global socioeconomic and political environments and "soft" stimuli. This conclusion is based on a critical, interpretive reading of the history, form, and content of television in Nigeria, from Obafemi Awolowo's Western Nigeria Television in Ibadan, through the federal government's reactive establishment of the national network, the Nigeria Television Authority, and, later, state and private television stations. The ultimate deregulation of television broadcasting in 1992, perceived as Babangida's "politically correct" reaction to pressures from the Bretton Woods institutions, opened up national media markets for global penetration and fast-tracked media globalization and its effects in Nigeria. While television stations in Nigeria have multiplied in numerical terms, programming, content, and form have followed global market and technological determinism, turning Nigerian TV into localized versions of commercialized Western master-scripts with very little local ideological direction.
Introduction
There appear to be intersections and convergences in theoretical approaches to the study of national and/or international communication, probably because of the apparently cyclical nature of human progress. Thussu (2000) notes that "it's not surprising that theories of communication began to emerge in parallel with the rapid social and economic changes of the Industrial Revolution in Europe" (p. 53). Communication at the international level also reflects its significance in the growth of "capitalism and empire" following advances in science and technology. This becomes obvious when we recognize that a theory like Daniel Lerner's modernization theory essentially runs through many other theories. Modernization theory gained patronage from the United Nations Educational, Scientific, and Cultural Organization (UNESCO) as a way of spreading modernity, and later, in the 1990s, from international nongovernmental organizations and global media corporations seeking to spread "capitalism on the wheels of electronics," characterized by calls for open airwaves and the free flow of information. This move was meant to allow global media multinationals to operate without hindrance anywhere in the world. The dependency and structural imperialism theories also exemplified how colonial "centers" (political and economic) perpetuated, and benefited from, the continual maintenance of their "peripheralities" as dependents.
The capitalist empire-states, according to Amoda (2000), reconstitute the developing world with structures and functions that ensure their dependency; thus, export-import dependency equals structural dependency in the third world.
The beginning of empire service broadcasting by the British Broadcasting Corporation (BBC) on December 19, 1932, to British colonial outposts typifies these theoretical tendencies. It is noteworthy that Galtung's structural imperialism theory earlier identified five types of imperialistic exchange between centers and peripheries (viz: economic, political, military, communication, and cultural flows), which tend to recur in the contemporary Information society, and in media globalization theories' identification of what Appadurai calls "homogenizing scapes" (viz: ethnoscapes, technoscapes, financescapes, mediascapes, and ideoscapes; Appadurai, 1996). It is therefore not surprising that, even in the 21st century Information Age, there seems to be a prevailing blend of "critical political-economy" (Thussu, 2000, p. 81), whereby transnational businesses, which are supported by their respective national states, increasingly push for the creation and maintenance of different "links" and "scapes" in global structures and relationships. In essence, therefore, the theoretical backbone of a historical narrative such as this is the persistence of modernization/dependency theories, whose links can be traced through other theories, like structural imperialism, to 21st century media globalization theories. We tend to see national broadcasters replicating global broadcast structures and content on the one hand, and, on the other, mutual cooperation between national and international organizations through grants and aid toward modernizing broadcasters in the periphery. What is the nature of this "soft influence" or subliminal power that such assistance and aid carry for the receiving nation, or is aid really gratis and without conditionality in our contemporary globalization setting?
This reading is based on literature and decades of personal field observation of TV broadcasting in Nigeria, especially in its first 50 years, and it seeks to identify the various subtle, indirect methods and approaches that dominant centers and hegemonies deploy to shape the development of the broadcasting industry in a "peripherality" like Nigeria. It must be stated, however, that this "soft power" exercised by regional/global or extra-governmental influences may or may not have directly influenced the form, and even the content, of public service TV broadcasting in Nigeria.
Historical Influences in the Development of TV Broadcasting in Nigeria
The development of television broadcasting in Nigeria over the last 50 years has brought about a series of reactive jerks in response to multiple local and global environments. With its umbilical cord in the Radio Distribution Service of the BBC, operational in Nigeria since 1935, this colonial contrivance was designed to "provide communication link between the colonial officer with the imperial home" (Nwuneli, 1985, p. 240). To begin with, the establishment of Western Nigeria Broadcasting Corporation (WNBC) Television, the first TV station in Nigeria (and indeed Africa), in 1959, as an act of protest against the colonial governor, John McPherson, by Obafemi Awolowo, then premier of Western Nigeria, spawned a model for the spiral of new TV stations as a way of managing political propaganda through the airwaves. Both acts set the paradigm for later years of undue bureaucratization and political domination of state media. This trend continues today among the nation's public service broadcasters, whereby the medium is governed by authoritarian principles, with the political leadership and power elite dominating and interfering with media content, management, and ownership (Lasode, 1994; Maduka, 1998; Opubor, 1985). Like its beginnings, the evolution of TV broadcasting has continued as a series of local responses to global socioeconomic and political promptings. Folarin (2000) breaks the timeline for television broadcasting in Nigeria into four distinct phases. The first phase ran from the establishment of the Western Nigeria Television/Western Nigeria Broadcasting Service (WNTV/WNBS), Ibadan, in 1959 to the outbreak of the civil war in 1967. The second phase ran from the end of hostilities in 1970 to the Federal government takeover of television services and the establishment of the Nigerian Television Authority (NTA). Even though Folarin considers the civil war a disruption of broadcasting services, it should be mentioned that the war, especially in Eastern Nigeria, contributed to the evolution of a guerilla-style or revolutionary practice of broadcasting, even if only transitional within the Biafran experience. That experience may also have later contributed to Wole Soyinka's Radio Democrat (later Radio Kudirat) during Sani Abacha's despotism.
The third phase was marked by the return of broadcasting to the concurrent legislative list from 1979 through the 1990s, when the National Broadcasting Commission (NBC) was established in 1992 (vide Decree 38, and later Act 55 of 1999) to deregulate the industry. The fourth stage is the present era of deregulated and competitive broadcasting in Nigeria, which has witnessed an astronomical rise in the number of terrestrial and cable TV stations to more than about 166. This periodization can also be articulated politically in terms of the First Republic: 1960-1966; First Military Intervention: 1966-1979; the Second Republic: 1979-1983; Second Military Intervention (Buhari Years): 1983-1985; Babangida Years: 1985-1993; Shonekan Transitional Government: 1993; Abacha Years: 1993-1998; and Third Republic: 1999-date. The dominant character of these regimes, as perceptively noted by Osaghae (2002), was their susceptibility to the "new-found benevolence" of the West, which "could not disguise the self-serving policies of the advanced capitalist countries, which aimed at perpetuating the peripheralisation of the underdeveloped countries" (p. 314). Continuing, Nnoli adds that during the 1980s and 1990s, "Nigerian governments became powerless to influence not only economic activities in the international system but also within the country" (cited in Osaghae, 2002, p. 314). The foregoing "soft" influences may be read against the above periodization and the dispositions of the various administrations.
Conceived as a medium for "mass information and instruction," television, Awolowo asserted during the commissioning of WNBC in Ibadan, "is a powerful influence for good" that would make the country greater (cited in Lasode, 1994, p. 15; Laws of the Federal Republic of Nigeria, 1990; cf. Osinbajo & Fogam, 1991). This Act, however, only authenticated the already established regional stations. With it, both state and federal governments could establish and run broadcast stations on appropriate licenses from the Federal Ministry of Communications. This Ministry, a forerunner of today's NBC, was by this Act charged with allocating wavelengths, regulating output and station location, and ensuring compliance in line with the International Telecommunications Conventions.
The objectives of the various stations were the humdrum roles of educating, informing, and entertaining. Even though there was a National Communications Policy, which drew inspiration from the Cultural Policy and the Constitution, there has been no definite policy thrust for broadcasting in terms of grassroots cultural development (Opubor, 2005). This tendency is corroborated by a two-time Director-General of NTA (1977-1986), Vincent Maduka, who regretted the general lack of form in TV broadcasting in Nigeria: "we have not attempted to use television for any purpose. We just ran it as a freewheeling medium, without any clear-cut objective . . . so any gain we have made is accidental" (Ukoha, 2000, p. 9). Television has been and remains a "manipulative one way" medium for governments at both the federal and state levels to achieve their short-sighted political ends (Opubor, 2005, p. 236).
With the creation of the Midwest region in 1963 and the 12-state structure in 1967, new TV stations were established in those states. This was given impetus by the proposed hosting of the Black and African Festival of Arts and Culture (FESTAC) in 1977. Color TV transmission was first introduced at Benue Plateau Television, Jos, in July 1974. Other states' TV stations took off accordingly: Rivers State TV (1974), North Western State TV, Sokoto (1976), Kano State TV (1976), and Kwara TV (1977). By the end of 1976, there were nine television stations in Nigeria. The First World Black and African Festival of Arts and Culture (FESTAC '77) also made it possible for television in Nigeria to acquire a Domestic Satellite System (DOMSAT) to improve transmission and networking, which took effect in April 1976. Apart from this acquisition, funding improved, with the consequent acquisition of broadcast hardware and software.
The establishment of the Broadcasting Organizations of Nigeria (BON) in 1973 helped improve and coordinate broadcasting. Formed as a child of necessity at the instance of Christopher Kolade, then Director-General of the Nigerian Broadcasting Corporation (NBC), the members, made up of all regional and state stations, wanted to present a common front at a West German-organized seminar on broadcasting in Nigeria. The new organization then served as a platform for the exchange of professional advice, programming and programs, personnel, and so on. This also initiated the television network idea (Maduka, 1998, p. 113).
In April 1976, the Federal Government annexed the state-owned TV stations, now about 19 in number. This authoritarian move was meant to stem their proliferation, allow increased central coordination, enhance program exchange and networking, and promote unity. The first network program was telecast on April 1, 1976, from NBC-TV in Lagos. Even though this system created its own editorial management problems due to diverse cultural differences, it forged some mythical ancestral fireside oneness among Nigerians through its network news, which, however, remains government-centered, manipulative, politically partisan, and serving the ends of the leadership more than those of the people (Nwuneli, 1985). This happens most often during momentous events like mass protests, strikes, or unpopular elections, which pitch the people against the government. Probably due to the authoritarian control of state-owned media through legislation, appointments of chief executive officers and funding are done by the central and state governments.
The NTA was established in May 1977 through its enabling Decree No. 24 of 1977. According to Maduka (1998), the Decree gave the NTA a monopoly over television broadcasting, just as with radio, stating that "[t]he Authority shall, to the exclusion of any other broadcasting authority or any person in Nigeria, be responsible for television broadcasting" (p. 115). It was also established to provide "public service in the interest of Nigeria, independent and impartial television broadcasting" for Nigeria, emphasizing the unity in diversity of its cultures, but with a duty to the federal government. The Decree also provided for a zoning system with a zonal management structure. The six-zone structure was politically motivated, to cater for the diverse national cultures, grassroots coverage, and networking, with zonal headquarters in Lagos, Benin, Enugu, Kaduna, Maiduguri, and Sokoto. Section 11(1) specifically obliges the NTA to broadcast, at its own expense, government programs and announcements whenever requested by an "authorized officer" in the public services of the Federation, as may be declared by the President or the governor of a state. This includes a provision that the Authority shall broadcast materials from an "authorized public officer" during an emergency (Osinbajo & Fogam, 1991, p. 37). This provision makes television and radio the first port of call for power-hungry coup plotters in Nigeria, as well as popular media for political abuse by the so-called democratic leadership.
This monopoly and centralization of broadcast services in the NTA were unpopular with state governments, who felt disempowered by the central government. The Constituent Assembly sitting in 1978 returned to the states the right to own broadcast stations, enshrining it in the 1979 Constitution (Section 26(2)). The NTA was directed to be fair, objective, and impartial at all times in matters of public and industrial controversy and other "competing ideas or interests, within the country" (Maduka, 1998, p. 119). The 1979 Constitution also provided for private broadcasting in the country, even though this was not realized until 1992, under the Ibrahim Babangida administration, a move Shehu (2004) considers ironic as "it took the military to deregulate the broadcast industry" in Nigeria (p. 19).
However, the fairness and impartiality doctrine could not be realized by the NTA, due to continued undue bureaucratic interference, ethnic politics, and the role conflict built into Nigerian public service radio and television. In spite of the NTA's house codes on electioneering responsibilities, the Second Republic (1979-1983) pushed the NTA to the lowest depths of abuse, as even NTA personnel rights were threatened by flagrant political interference and domination by the power centers. In the 1983 and 2003 elections, the media also acted irresponsibly, in concert with the political and ruling party leadership of the time. Interestingly, Nigerian TV and other media acted as true watchdogs during the 2011 general elections, which validated Goodluck Jonathan's presidency. State media performance in Nigeria therefore becomes a barometric test of the disposition of the political leadership.
As a result of all these political manipulations, the states began to establish their own stations, to counteract the NTA stations that had been installed in all the states of the federation, thus demonstrating Maduka's (1998) observation that the establishment of TV stations in Nigeria has always been politically motivated, far removed from the professed inherited roles of education, information, and entertainment that BBC-style public service broadcasting implies. Meanwhile, the cyclical and reactive development of the media continued.
By 1983, with the abortion of the Second Republic, the economy had been vandalized and run aground by the National Party of Nigeria (NPN) political class; the cost of running the NTA became too high, and the NTA was running at a deficit. The new interventionist Muhammadu Buhari government cut down on subventions, and NTA stations were asked to commercialize in line with the administration's International Monetary Fund (IMF)-engineered austerity measures. This development took its toll on the quality of journalism and programming on television, proving, as it were, that "intense commercialization" could not cohabit with public service broadcasting (Maduka, 1998, p. 133). The Buhari administration promoted national re-orientation through radio and television (through its War Against Indiscipline), promulgated the draconian Decrees 4 and 7 (the Public Officers Protection Against False Accusation Decree and the Detention of Persons Decree), and finally abrogated the NTA zonal structure.
The latter action was based on the recommendations of the Christopher Kolade Committee, set up in 1984 to consider rationalizing the NTA. Part of the committee's recommendations was the two-tier public service model, whereby "state-owned television (was) left to carry on with the communication effort and mobilization at grassroot level while NTA concentrated . . . on national activities" (Maduka, 1998, p. 136). It is noteworthy that a management structure that encourages the development of professional policy, and a government's approach to media regulation, reveal what that government expects from, or how it views, its citizenry, as well as its attitude to power.
According to Lasode (1994), Ibrahim Babangida's administration embarked on seven government policies that directly influenced the development of television (and radio) in Nigeria. These policies included the abrogation of Decree 2 of 1984, the Structural Adjustment Program (SAP), the Transition to Civil Rule Program, Mass Mobilization for Social and Economic Reconstruction and Recovery (MAMSER), commercialization (Decree 25, 1988, in Laws of the Federal Republic of Nigeria, 1990), and, later, the deregulation of the broadcast industry.
These programs again needed popular promotion and support from the media as a way of gaining legitimacy and support from the people. Moreover, the Structural Adjustment Program and the privatization and commercialization programs were the "market-driven" programs promoted by the IMF, which Babangida freely espoused in the traditional Nigerian leadership's deference to the dictates of the Bretton Woods institutions.
Decree 25 of 1988 gave birth to the Technical Committee on Privatization and Commercialization (TCPC). Among the 90 agencies slated for the exercise were the Federal Radio Corporation of Nigeria (FRCN), the NTA, and the News Agency of Nigeria (NAN). In May 1990, Christopher Kolade again headed another committee, in conjunction with the TCPC, to examine "modalities for commercialization of the NTA and exploring the feasibility of private radio and television stations" (Lasode, 1994, p. 43). The final recommendations of the Kolade/TCPC committees resulted in the promulgation of Decree 38 of 1992, which established the NBC and gave it the enabling instruments to license and regulate the operations of private broadcasting organizations in Nigeria.
The NBC Decree 38 of 1992 was later amended and strengthened by Amendment Act 55 of 1999, which expanded the NBC's role to regulate all federal, state, and private stations; collect television and radio license fees on behalf of the broadcast organizations; and ensure qualitative manpower development in the industry. The argument for a deregulated broadcasting industry was not unconnected with Babangida's "politically correct" desire to please his Western overlords, especially the neo-liberalist USA's Ronald Reagan and the UK's Margaret Thatcher, whose economic policies planted the seeds of modern-day economic globalization. The argument was that "a free market would become the new driving force for economic growth," and satellite technology was going to be the arrowhead of deregulation (Shehu, 2004, p. 19). In addition, the already thriving example of a deregulated print media in Nigeria provided a local paradigm to support the broadcast deregulation argument.
By 2004, there were more than 97 federal government-owned NTA stations, 32 state-owned stations, 14 private stations, three Direct-to-Home TV stations, two private direct satellite television (DSTV) stations, and 37 cable multipoint microwave distribution system (MMDS) re-broadcasting stations in Nigeria, out of a combined total of about 350 radio and TV stations (NBC, 2004). Today, there are more than 166 TV stations (terrestrial and cable inclusive) in Nigeria.
Public Service Television
An evaluative narrative of the development of TV in Nigeria cannot be complete without considering TV's public service function, especially as both the NTA and states' TV stations profess a public service orientation in view of their BBC ancestry. Public service broadcasting, or the trusteeship model, refers to the operation of broadcasting services in the interest of the public good. It was a creation that sought to diversify the airwaves as a way of providing access for other voices and views excluded by commercialized broadcast media. Public service broadcasting, considered an invention of the British BBC, is designed to be a nonprofit broadcasting operation funded publicly through taxation. Like the BBC, it is charged with the duty of providing "information, education and entertainment" in the public interest, even though it is problematic to define the content of the term "public good" (Media Network on the Review of the Constitution, 1999, p. 30).
However, the trusteeship model of broadcasting aims at catering for a "pluralist society," with a commitment to the basic tenet that "radio and TV have specific civic functions and not simply ways of selling programming or commodities to the public" (Hutchinson, 1999, p. 156), while its programming is basically aimed at informing and enriching the listener and viewer. But to meet these public or social obligations, some basic conditions have to be met by the organization and the government that run this system on behalf of the people. These conditions include a secure method of finance, a strong broadcasting organization with freedom of programming, a commitment by the organization to universal coverage, and a regulated system (of management) designed to encourage media practitioners to take both cultural and political risks (Hutchinson, 1999). Other reputable national public service television services with a reasonable degree of public service content and structure include South Africa's South African Broadcasting Corporation (SABC), India's Doordarshan, Iran's Press TV, Germany's Deutsche Welle TV (DW-TV), and Canada's Canadian Broadcasting Corporation (CBC).
Even though the Nigerian Broadcasting Service was fashioned after the BBC, and the many enabling acts, decrees, and edicts were meant to provide broadcasting services for the new plural entity called Nigeria, Maduka observes that the enabling laws of the NTA (1977) and FRCN (1978) did not give specific guidelines on the media's involvement in the promotion of Nigerian cultural values and entertainment, but rather concentrated on news, politics, and other controversial matters. To complicate this scenario, commercialization, politics and its propagandist tendencies, and the systemic problems of state-appointed managements have posed great challenges to the NTA in the discharge of its functions in Nigeria. Because most media enabling laws in Nigeria were crafted during military regimes, government control of the nation's public media is very visibly authoritarian (Media Network on the Review of the Constitution, 1999, p. 23).
In spite of this apparently debilitating scenario, newly compounded by a deregulated broadcasting market, Yaya Abubakar (1998), erstwhile Director-General of Voice of Nigeria (VON), notes that public service broadcasting in Nigeria has played a remarkably positive role over the years: These stations bridge distances and cross language barriers, and mobilize rural masses for development. They also provide an exciting platform for political communication and debate; promote local culture; provide the medium for educational broadcasting; stimulate national dialogue and consensus; and provide the market place for locally produced goods and services. (p. 12) The hyper-capitalism of media globalization has challenged the practice of public service broadcasting in its pure form. Beyond this, media globalization, with its tradition of penetrative, market-driven transnational media practice, has placed the management of public service broadcasting organizations, like Buridan's ass, between two opposite attractions: the nobility of public service broadcasting and the capitalistically attractive, relentless pull of market-driven broadcasting.
It is noteworthy that even the BBC, the grand model of public service broadcasting, has gone through this chequered threat of restructuring, owing to changing political dispositions, funding cuts, and the new challenge of the digital information revolution. But BBC management responded through innovative programming, the creation of advertising-financed channels, and the commercially driven BBC Worldwide, and, like India's Doordarshan, created more local language services all over the world.
The Indian Doordarshan example is a rather unique model of television broadcasting. The corporation, which also inherited the BBC model, was deliberately designed to serve not only the ends of state hierarchies but also the upliftment of the people, in furtherance of an economic nationalism that the then Minister of Information and Broadcasting, Sushma Swaraj, called "swadeshi" (Fernandez, 2000, p. 624). The Indian TV broadcasting system combines the best traditions of BBC public service and American market-driven broadcasting, but within a regional protectionist policy. According to Chin (2003), "even though transnational media pose a challenge to a national media system and culture, the local state still plays a crucial role in regulating domestic cultural policies and guiding the development of broadcasting" (p. 17). This goes further to show that developing economies can and should tailor their media systems to fit their development contexts.
The South African SABC picture shows an equally competitive television environment that has leap-frogged into the Information Technology Age, as though to make up for the lost years of apartheid. The liberalism and commercialism that drive the independent cable/satellite and terrestrial broadcasting sectors are matched by a determined, policy-driven, multi-channel national public service broadcaster (SABC-TV). The software and content infiltration from the West, and its consequent dissemination over the African continent, appear to be taken for granted by South African TV regulators and managers, because of the belief that South Africa's integration into the world economy will strengthen her international bargaining power, even at the expense of weaker African countries. In essence, therefore, the type of broadcasting system a nation prefers depends on what function is expected of the media, as well as on the nation's economic and cultural orientation.
Global Influences and TV Programming in Nigeria
Schupman identifies the threefold purpose of broadcasting as the "maintenance, extension and transmission of a culture"; even though broadcasting concerns itself with emergent values, it "must concern itself even more with those transmitted values without which no society can achieve continuity and stability" (Peigh, 1979, p. 9). Programming is the art of planning, producing, promoting, and placing a program package for broadcast in a broadcasting station. It involves the planning and identification of values, and the packaging and transmission of those values to a critical audience. Unlike production per se, programming is a cardinal management function in any television establishment, even though the actual spade work is done by the producer, who is a line person in the establishment. A program, the end product of programming, is, according to the NBC Code (2002), a unified presentation on radio, television, or cable retransmission that occupies a distinct period with a beginning and an end.
The evolution of programming in the Nigerian broadcasting industry, like that of the broadcasting organizations themselves, has a chequered history. From the early 1960s, when programs were wholly imported and "equally divided between local productions and foreign films . . . for the so-called elite" (Lasode, 1994, p. 157), television programming has evolved through a systematic "Nigerianization" orientation to the programming boom at the close of the 20th century. Farounbi notes that television at this stage could not be said to be serving any definable national interest . . . (as) over 80 percent of the total programmes (which stood at twenty-one hours per week) were foreign; with the remaining 20 percent being news bulletins, interviews and a few school programmes. (Lasode, 1994, p. 159) The programming philosophy and objectives have also evolved accordingly, from pure entertainment in the 1960s, through national self-identity and reconciliation after the many years of military politics and civil war, through the political ego-massaging of civilian democratic experiments, to the present market-driven deregulated democracy of the new millennium. Maduka notes the "professionalism" of the 1990s but bewails serious developmental programming as an endangered species, driven under by commercialized broadcasting. "The trend with independents (producers)," he says, "permeates of freshness, competition and merit. It is also opening up the medium to anyone who has a claim," adding that it is "clearly a movement in the direction of democratization" (Lasode, 1994, p. 174).
Programming for any effective developmental broadcasting is a painstaking management process, requiring research into media habits, socio-statistics, and the cultural environment of the target niche audience, as well as their opinions and biases concerning the content of the communication. This process becomes necessary if two-way collaborative communication is desired, a situation where "everyone generates information (and) everyone receives information," away from the prevalent medium abuse whereby political leaders promote a "manipulative one way" form of communication, in clear breach of the NBC Code stipulations and the political leaders' oaths of office. This abuse is most prevalent among state-owned stations, which are invariably treated as extensions of the states' Ministries of Information and Public Relations, not as public establishments kept alive by taxpayers' money (cf. "The Good, the Bad & the Bungling," 2003; "Whose Voices, Their Master's or the Peoples?" 2003). It is telling that during the January 2012 mass protests against the Goodluck Jonathan government's removal of the petroleum subsidy, the NTA and state TV were clearly on the side of government: spreading propaganda, under-reporting or misreporting events, and running down labor and civil rights personalities, while private stations like African Independent Television (AIT) and Channels TV provided more balanced stories, to the admiration of audiences. Even the blood-letting activities of the Islamist group Boko Haram are hardly reported by public TV in Nigeria; all this in the name of what state media would call developmental journalism.
Maduka notices that even though broadcasting laws in Nigeria are modeled after a state-controlled public trusteeship model, the stations' programming is still dominated by news, politics, and other controversial matters, with little attention to Nigerian cultures, values, and indigenous entertainment, concluding that over the years television has been run "as a free-wheeling medium, without any clear-cut objectives" (Ukoha, 2000, p. 9). This lack of clear-cut policy and programming direction stands out when one compares NTA with public service stations like Iran's Press TV and India's Doordarshan, which appear more objective in their news reporting.
Despite increased programming and the presence of independent production, commercialization and market-driven media have altered the content of local programming, abolished and re-introduced the former zonal administrative structures, and brought about rationalization and multi-skilling in NTA and other TV stations. With deregulation and the entry of private broadcasters, the motives of most private and some public licensees have become more commercial, political, and often personal. One clearly sees a predominance of commercial and pure-entertainment programs over development programs with local cultural relevance. Even a public service network like NTA has come under great stress for this reason. While the NTA headquarters stations in Abuja and Lagos do fairly well, network stations are left to their own devices to fill up airtime with cheap commercial and tele-evangelical programs, without consideration for the commercial viability of their local environments. Where there is local production, the stations design their programs after western global "formats that sell," like game shows, pure entertainment, and so-called actuality programs that ape the Big Brother phenomenon. This tendency negatively affects the creativity of well-meaning local producers and artists who still bother about local cultural content beyond postmodern market-oriented programming (cf. Betiang, 2009).
In the Nigerian broadcasting environment, the following factors, events, and happenings have contributed to shaping the development of television form and programming since its inception in 1959.
• Like South Africa's SABC and India's Doordarshan, broadcasting in these colonial outposts grew out of the need for imperial Britain (BBC) to connect with its colonies abroad. This expansion and program syndication were achieved through a redistribution or "wire service" located at the "bridgehead" of the colony, Lagos in the case of Nigeria.
• The formation of the Broadcasting Organization of Nigeria (BON) in 1972 facilitated subsequent networking, which improved programming quality and the sharing of management and professional techniques (Maduka, 1998). This also applies to later deregulation-era networking like the defunct State-Owned Broadcasting Organizations of Nigeria (SOBON) and the many independent TV and radio production guilds.
• There are also other sponsored big-budget, long-running reality shows like the "Gulder Ultimate Search" and the "Maltina Dance All" TV shows sponsored by Nigerian Breweries, Plc. The "Star Quest" music talent hunt has also produced new music stars as well as cloned content for TV programming in Nigeria.
With the deregulation of broadcasting in Nigeria, the birth of independent producers has enriched programming and content production in the industry. This development would not have come without the in-house standards of professionalism and on-the-job training that form part of the personnel development policy of government-operated public service television stations, especially NTA and FRCN. The Television College at Jos, which has now been upgraded to a degree-awarding institution (with its Radio Nigeria training corollary in Lagos), has made significant contributions to the broadcast industry. The public service tradition of training and retiring staff without consideration for indispensability has put many trained personnel out of jobs, but these have joined the independent producers' movement. In addition, the downsizing and rightsizing now fashionable in the 21st-century media globalization era, and the poor civil service working conditions imposed on broadcasters, have increased the mobility of talent among producers, who have joined the army of earlier brain-drained names like Laolu Oguniyi, Jab Adu, Lola Fanikayode, Eddie Ogbonna, the state-murdered Ken Saro Wiwa, and Mabel Oboh. It may not be far from the truth that most producers who have now spawned new private stations and independent production companies are products of these public service mother stations.
This tendency has also seen the formation of trade guilds like the Independent Television Producers Association of Nigeria (ITPAN) and the Frontline Independent Television Association of Nigeria (FITRAN). The advent of deregulated broadcasting has also brought about the formation of other guilds like the Motion Pictures Practitioners Council of Nigeria (MPPCN), with all its composite guilds. The many productions of these guilds have no doubt enriched local content creation in television broadcasting in Nigeria, as well as encouraged professionalism, standardization, networking and professional discipline. However, the same may not be said of the return on talent for their members, in view of the rampant acquisition of programmes' composite rights by global cable companies and the prevalence of digital technologies, which have encouraged piracy of local creative output. Cheap pirated DVDs contain as many as twenty home movies, in the same way as cable TV stations float whole programmes like "Africa Magic," Nollywood, etc., which play Nigerian movies continually, round the clock, with little return on talent at home.
• Despite shortcomings that border on cultural imperialism, the deregulation/commercialization of television broadcasting (handmaids of media globalization) has also affected Nigerian television content creation. It will be recalled that following Christopher Kolade's report on commercialization in 1987, the Babangida regime drastically reduced government subvention and expenditure by 40%. Earlier, in 1984, 20 FRCN stations had been shut down, while the rest, together with NTA stations and state-owned stations, started commercial drives. This in a way made the stations become, for the first time, more innovative in programming, even as it signaled the death knell for public service broadcasting. The profit motive became privileged over public service as commercial programming increased through the sale of airtime, spots, sponsorships, and commercial news coverage. In spite of all these, television at both national and state levels has remained under the bureaucracy of the Ministry of Information and Culture.
• New developments in satellite communication technology and media globalization have also affected TV programming and production. This phenomenon has made it possible for broadcasting organizations, especially satellite and cable re-broadcasting stations, to access the stream of programming on global channels. These signals are re-broadcast to subscribers, and even on terrestrial channels owned by states/governments. Sophos argues that with the deregulated media of the 1980s, new networks evolved that were "designed to provide subscribing stations with twenty-four hour a day programming formats delivered in high fidelity stereo via satellite". One such was the erstwhile South African content provider TV-Africa, whose activities were promptly arrested by the NBC in view of the many illegalities in its contractual operations with Nigerian TV stations. Other commonly syndicated global and regional transnational media content providers include BBC World, DW-TV, Cable News Network (CNN), FOX TV, and British Satellite TV. Today, Multichoice/MNet's Channel-O and the local music content providers Nigezee and Proudly African dominate the airwaves with foreign and local clones of western hip-hop culture.
In the same way as these digitized media facilitate content distribution, the problems of piracy and monitoring have also quadrupled. This development in digital technology has in some way also spawned the pervasive Big Brother template of reality shows where, like the American "McDonaldization" of junk fast food, the Big Brother format has become a global franchise; a practice that promotes a tradition whereby the mass medium of television is reduced to a cultural apparatus for the mass production of what Turner calls "celetoids": the studio production of celebrities from "ordinary people" outside the circle of the "famous" (2006).
• The introduction and popularization of the global system of mobile telephony and its convergence with the
Internet and other computer-mediated communication have also influenced content generation and packaging in the Nigerian television industry. Multimedia production and interactive two-way broadcasting have been enhanced, though with mixed consequences. The evolution of citizen journalism through social media networks, using mobile handheld devices, has in some ways enriched broadcast content with less gatekeeping and monitoring. While this trend has frustrated state-owned media like NTA and others who traditionally tailor news stories to suit their paymasters, the masses' participation and right to know and talk back have been strengthened. While interactive programming on both radio and TV is commonly abused, downloading and uploading of satellite/Internet media fare have also become common, as has the cloning of cheap commercial-oriented formats. This scenario has created an ongoing discourse in media chat rooms about media democratization and the need for responsible journalism.
The consequences of the foregoing for TV programming and production are obvious in the increased number of independent producers, the increased number of television stations, and increased AM-broadcasting, with an ever-increasing number and popularity of telethon programs and programming. There is also increased and more balanced news reporting on the public service NTA, due to the influx of competing private broadcasters who have liberated Nigerians from the surfeit of "government say so news" and unfair, unbalanced news reporting (Uyo, 2000, p. 24). This credit hardly extends to "state-owned" public TV stations, which are still degradingly abused as propaganda outfits for state governors and ruling parties, much to the alienation of the citizenry.
But Salihu (2002), an NBC broadcast monitor, concludes that "essentially, however, whether privately or government-owned, broadcasting is still largely its master's voice . . . as the question of true independence is in fact the biggest challenge before Nigerian broadcasting today" (p. 154). Moreover, TV screens have continued to pander to western consumerist values, while entertainment has "surrendered more or less to foreign programmes that have little or no redeeming values as far as our culture is concerned" (Uyo, 2000, p. 25). There appears to be a dearth of indigenous comedy on our TV stations, and the emergence of an unholy marriage between TV and home videos (VHS), which have tended to dwell more on occultism and superstition, with melodramatic plots, on the ill-informed belief that that is what the audiences want. TV stations and content providers have failed to understand that the content provider creates, and therefore deserves, the audiences it has through the mass pacification process.
TV programming and production have been besieged by a myriad of interlocking problems. From all indications, there is an abundance of imaginative program producers in Nigeria today, but these also must depend on the "simultaneous availability of performing, writing and production talent and production facilities" (Akpan, 1994, p. 87). Such facilities include what Nigerian producers go to South Africa to access like the (Sasani) studios, equipment, and technical crews, as well as Nigeria's God-given variegated scenery from the Atlantic coast to the Sahara desert, and the multiple traditions and colorful costumes of 400 ethnic cultures.
Other challenges include the paucity of script-writers and of informed directors who are not market- but art-driven, and the lack of corporate sponsorship. There is also the preponderance of foreign free-to-air cable tele-evangelical programs, which get aired on the false assumption that they are cost-free, but which actually carry heavily embedded western consumerist values that socialize Nigerians the wrong way. The low motivation of government-employed TV producers and hard-to-come-by sponsorship encourage this over-dependence on foreign programs, which are of course cheaper and easier to acquire than local, culturally relevant productions.
The continued attachment of federal/state-owned TV stations to Ministries of Information hampers the smooth functioning of stations' programming and production. The creative artist cannot achieve professionalism under the strictures of a civil service whose goals are not those of creative culture. The bureaucracy that governs approvals, and the unpredictability of and changes in appointments and funding, create discontinuity in programming. Other complications of this situation include the politicization of personnel recruitment and remuneration, as well as of equipment acquisition. Ministries' boardrooms recruit staff and acquire equipment on behalf of stations, much of which turns out to be unusable. The civil service attachment has also contributed to the high mobility of staff from public stations to private ones (which do not provide security of tenure anyway). Incidentally, the silver lining of this staff mobility phenomenon is that public stations have become training grounds for the industry, as most of the staff in private stations and independent markets passed through these public stations. This can be considered a kind of silent fulfillment of the public service personnel development calling of public TV in Nigeria.
Other Local Impacts
Even though TV "bends, blurs and blends" (apologies to Gerbner), its impact is relative. It is obvious from the foregoing narrative of TV's history in Nigeria that television has made various prismatic impacts because there are different media environments. Beyond the multiplication of stations, which the global phenomenon of deregulation has engendered, one doubts whether such fragmentation has equaled access to media for the alienated rural dwellers who neither have electricity to watch TV nor even receive the signal. Consequently, well-placed persons in both urban and rural areas resort to patronizing satellite TV where terrestrial TV is out of reach or nonexistent; Nigeria provides the biggest market for regional cable television corporations like Multichoice.
On the other hand, urban or metropolitan areas are suffused with TV stations, which fight over and glut public space for niche audiences. But what is the content that they trade with advertisers for these populations in the developing world? Advertising admittedly provides the cash for the stations. But what about the ostensible or subliminal values that advertising carries? This is why the arrowheads of media globalization shop for cultural converts to sell western consumerist and cultural values. A 2009 study by this author shows the ignorance or unpreparedness of our media managers in managing media globalization, due principally to the managers' inclination to see the world from the perspective of their paymasters, who obviously are in business to make money. Government in Nigeria perceives globalization as an elixir of development, without its post-industrial imperialism and all its baggage (cf. Betiang, 2009). At the moment, stations nationwide are struggling to digitize to meet the global 2015 digital-migration standards for media, standards into which the country has had no input but with which it must comply, as nations must live or die by global standards.
From its inception in Nigeria, TV has existed for the subversive use of political leadership against the people it pretends to represent. NTA and the public service stations in the states, which are terrestrial and actually have grassroots coverage, often behave like hegemonic extensions of government. The private stations, in their bid to win the people over, become sensational, consumerist, and nucleated around high-population urban zones. Their adventurous programming accounts for the high rate of NBC breaches committed by these stations (Udekwu, 2002). Walter Ofonagoro (2002), former NTA DG and Minister of Information, rightly noted that "in Nigeria, there is a strong co-relation between the posture of those in control of the media and the professional performance of their workers" (p. 26).
While the economic impact cannot be quantified in an article of this scope, one can look at it from the points of view of the global media players, the regional media representatives, the public service stations, and the privately owned stations. Because one cannot be privy to their annual statements of accounts, it can only be said that TV broadcasting is a capital-intensive venture, which, one dares say, has capital-intensive returns, especially in this information age where information is power. The cost of public TV cannot be quantified. For instance, what is the real cost, or opportunity cost, of installing and running three politically motivated NTA stations scattered across almost each of the 90 senatorial districts of the country? The drain on the economy in terms of imported broadcast software, hardware, and overheads can only be imagined.
Culturally relevant local programs like Village Headmaster, Ichoku, Samanja, Mirror in the Sun, Cock Crow at Dawn, Nigerian Dances, Food Basket, Master Mind, Adio Family, and Checkmate, which once ruled the airwaves, have disappeared. The reasons are not unconnected to the ever-present commercialization drive; the now cheaper foreign programs that have come with satellite and media digitization; the dearth of culture-minded sponsors and of banks that understand the need to provide credit facilities for local content production; and, probably, the ever-present Nigerian home video. This also invites the question of what real cultural values our Nigerian TV stations promote beyond the materialism, consumerism, and magical quick-fix methods of making money that inundate Nigerian airwaves. Neo-Pentecostal material-minded pastors preach prosperity on open airwaves in ways that amount to a sanctification of crime; a phenomenon that was not there in the early days of TV. The family with its values has been sacrificed on the altar of consumerist-minded television (cf. Didigu, 2002).
The real impact of TV can therefore be perceived only if broadcast managers and content providers contemplate the kind of children, and the future, they hope to create through TV programming. This tendency is worsened by the paucity of community media, which have the potential to counterbalance these new consumerist values and political-economic hegemonies. Even though the family belt is being re-introduced on NTA, not much is done about other purveyors of TV content. The TV content monitor in contemporary Nigeria needs to broaden his or her periscope beyond mere classification and exhibition, because the content that media globalization technology avails through the airwaves and the open market needs reconsideration.
Conclusion
This critical, rather than celebratory, view of 50 years of TV in Nigeria examined its brief history, its public service dimension and programming, and the subtle "modernizing" influences that have aided its development. These influences, and the experiences of some public TV managers, show how little deliberate developmental policy there is in public TV broadcasting and its management. They also show how much the industry has relied on developing itself through reactive responses to global challenges and prompting from international corporate media, international NGOs, and other extra-governmental organizations. There is the need to develop the hardware or form of TV in Nigeria along with the issues of content. Dependency theorists suggest that one major way of throwing off this syndrome is to "have some self-sufficiency in the realm of information, ideas and culture" (McQuail, 2010, p. 255). This will check our traditional national dependence on outsiders' prompting for both form and content. This is necessary because the new convergent TV promises the potential for a more damaging reconfiguration of our ethnic and national cultures and psyches through its new soft imperialism.
While the drive for independent production has geared up in recent times, independent content producers should have better ways of exhibiting their works than the current practice whereby regional cable providers buy up their works and rights. Government can persuade the Committee of Bankers to include the culture industries in their Small and Medium Industries Equity Investment Schemes (SMIES), while we await the emergence of culture-dedicated banks in the country.
Government could also strive to achieve universal access to TV media by encouraging educational and public service broadcasting. It is not enough to expand the industry along commercially determined lines. Instead of a fragmentation of stations, public service stations could establish dedicated channels, like the South African SABC's, whether wholly commercial, news, entertainment, or indigenous-language, to make for easier management and reach within a competitive media environment. A deliberate policy on the management of globalizing media, in the manner of India's Doordarshan, could also provide a policy direction for the industry.
Finally, the need to latch fully onto the Internet web-casting age is overdue. Content production and distribution should begin to explore cyberspace as another platform for conquest and content dissemination. While this will require the acquisition of digital literacy, and digital policing on the part of monitors, it will also expand media access. Above all, in the midst of this media quagmire, media advocates and activists, as well as departments of theatre and media studies, should begin to include media literacy and social media practice and management in their curricula, as a way of strengthening viewers against the local/global subliminal manipulations of television broadcasting hegemonies.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research and/or authorship of this article. | 2019-05-06T14:08:33.096Z | 2013-12-09T00:00:00.000 | {
"year": 2013,
"sha1": "c35b10db48a1a9db4911f6bb49421cdea349edbf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/2158244013515685",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "eedb09855a1706cbd911175fdf33aced3e063b90",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Sociology"
]
} |
261925011 | pes2o/s2orc | v3-fos-license | Consumer Behavior toward Halal Food Using Theory of Planned Behavior and Theory of Interpersonal Behaviors among Muslim Students in Tasikmalaya
The share of halal products in Indonesia is enormous and grows yearly. With this rapid development, the City of Tasikmalaya plans to become the pioneer of the first halal tourism in West Java by prioritizing halal certification in food. However, the facts on the ground indicate that many Muslim students still do not understand, and tend not to care about, consuming halal food. This study aims to reveal the magnitude of the influence of attitudes, subjective norms, perceived behavioral control, religiosity, and habits on purchase intentions and actual purchases of halal food among Tasikmalaya Muslim students. The method used in this research is descriptive quantitative, with a questionnaire distributed to 404 respondents. The data analysis technique uses Partial Least Squares Structural Equation Modeling (PLS-SEM). The results show that attitude, subjective norm, and perceived behavioral control positively affect purchase intention. Purchase intention and habit positively affect actual purchase, while habit moderates the relationship between purchase intention and actual purchase. Finally, religiosity has a positive effect on attitude and on purchase intention.
Introduction
The halal label is a significant factor in consumer purchasing decisions because the halal market continues to grow (Lutfie et al., 2016; Anggadwita et al., 2020). One of the cities that wants to move into the halal food industry is the City of Tasikmalaya. This city wants to be the pioneer of the first halal tourism in West Java due to its solid Islamic tradition and its regional background, which is famous for its many Islamic boarding schools (Kompasiana.com, 2021). However, according to Rosidi et al. (2018), students, the segment with the greatest potential in the halal food market, tend to be more concerned with practical, affordable, and fast food.
Based on this, this study aims to examine the halal food buying behavior of Tasikmalaya Muslim students. The research integrates two consumer behavior theories, the Theory of Planned Behavior and the Theory of Interpersonal Behavior, as well as the religiosity factor. The method used in this research is a survey, with questionnaires distributed to 400 respondents.
Literature Review
According to Heizer et al. (2017), operations management is a series of activities that create value in goods and services by converting inputs into outputs. Moreover, based on Heizer et al. (2017), the ten operations management decisions are: design of goods and services; managing quality; process and capacity strategy; location strategy; layout strategy; human resources and job design; supply chain management; inventory management; scheduling; and maintenance (Ivanov et al., 2021).
Based on what was stated by Heizer et al. (2017), supply chain management describes the coordination of all supply chain activities, starting from raw materials and ending with customer satisfaction. Thus, the supply chain includes the suppliers, manufacturers or service providers, distributors, wholesalers, or retailers who deliver products and services to end customers (Latifah et al., 2021). According to Khan et al. (2022), halal supply chain management can be understood as the management of the process of handling halal products, from the raw materials supplied to the finished products in consumers' hands, with the entire process based on Sharia law.
Triandis (1980) developed a theory called the Theory of Interpersonal Behavior. This theory proposes that behavioral intentions are determined by the feelings humans have towards a behavior, which is called affect. Behavior is further influenced by what humans have done before, which is called habit, as well as by behavioral intentions and facilitating conditions (Triandis, 1980). Triandis emphasized that habit is behavior that has been automated in existing situations, shaping behavior directly without needing an intention-building process first.
According to Chae et al. (2020), purchase intention is the consumer's wish to buy and choose a product based on experience, use, and desire for that product. According to Prabowo & Priambodo (2021), an actual purchase is a series of physical and mental actions experienced by consumers. The Theory of Planned Behavior (TPB) resulted from Icek Ajzen's 1985 development of the Theory of Reasoned Action (TRA). The purpose of this theory is to predict individual behavior precisely. Ajzen (1985) stated that the foundation of attitudes is formed around ideas or beliefs embedded in the attitude formation model and structure. Then, according to Kania et al. (2020) and Susilawati et al. (2022), attitude is a consumer's response of liking or disliking an object.
Perceived behavioral control, also known as behavioral control, is a person's feeling of how easy or difficult it is to manifest a particular behavior. Ajzen and Fishbein (2005) and Apsari et al. (2019) stated that religiosity comes from the English "religion," to which the word "religious" is related, meaning religious or pious. Based on this frame of mind, this study adopts the model of Amalia et al. (2020), which analyzes factors from the Theory of Planned Behavior, the Theory of Interpersonal Behavior, and religiosity with respect to purchase intention and actual purchase of halal food. The hypotheses in this study follow Amalia et al. (2020) and are listed with the conceptual framework (Figure 1) below.
Methodology of Research
The research method used in this study is quantitative with a descriptive design, because the research is directed at providing an accurate account of the facts and characteristics of the population in a particular area. Quantitative researchers examine the relationships of variables to the object under study in terms of cause and effect (causal), so the research has independent and dependent variables. The strategy used was a survey, obtaining data from the field by distributing questionnaires via Google Forms to 404 respondents among Tasikmalaya Muslim students. The unit of analysis is the individual, with data obtained directly from the source (primary data), because this research analyzes the answers and opinions of each individual. Based on the implementation time, this study used a cross-sectional method with a single observation. The data analysis technique uses Partial Least Squares Structural Equation Modeling (PLS-SEM), which examines the relationships or influences between variables. This model is suitable when research has several variables and indicators.
Normality Test
Based on Figure 2 above, the significance value is 0.000 < 0.05, so it can be concluded that the residual data are not normally distributed. Therefore, data processing can use Smart-PLS. Convergent validity is assessed to determine the validity of each relationship between indicators and their variables. The convergent validity test can be seen from the loading factor of each construct indicator, which should be > 0.7 for confirmatory research (loading factors of 0.6-0.7 are acceptable for exploratory research), as well as from an Average Variance Extracted (AVE) value > 0.5 (Table 1). Based on Table 1, each indicator meets the criterion of more than 0.7. The outer model's convergent validity can also be seen from the AVE values; the recommended AVE is > 0.5 to indicate good convergent validity. The discriminant validity test checks that measurements of different constructs are not highly correlated; it requires attention to the cross-loading value of each variable, which should be > 0.70 (Table 2). Using the PLS-SEM data analysis technique with SmartPLS 3.0 software, the reliability test uses the Composite Reliability (CR) value (Table 3) to measure the reliability of a construct. The rule of thumb is Composite Reliability > 0.7 for confirmatory research, with CR values of 0.6-0.7 still acceptable for exploratory research. Based on these results, the CR value of each construct exceeds 0.7, in accordance with the rule of thumb, so all variables can be declared reliable.
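As a rough illustration of these two convergent-validity criteria, the sketch below computes AVE and composite reliability from a construct's standardized outer loadings using the standard PLS-SEM formulas; the loading values are hypothetical placeholders, not the study's data.

```python
import numpy as np

def convergent_reliability(loadings):
    """Return (AVE, CR) for one reflective construct, given its
    standardized outer loadings (PLS-SEM conventions)."""
    lam = np.asarray(loadings, dtype=float)
    ave = np.mean(lam ** 2)  # Average Variance Extracted: mean squared loading
    # Composite reliability: squared sum of loadings over itself plus error variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1.0 - lam ** 2))
    return ave, cr

# Hypothetical loadings for a four-indicator construct
ave, cr = convergent_reliability([0.82, 0.78, 0.91, 0.75])
print(f"AVE = {ave:.3f} (criterion > 0.5), CR = {cr:.3f} (criterion > 0.7)")
```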
Inner Model Measurement
Table 4 shows the R-square values in the study. The attitude variable has an R-square of 0.838, meaning that 83.8% of the variance in attitude is explained by the religiosity variable; the remaining 16.2% is attributable to error variance at the time of measurement and to other variables not included in the model, which requires further research. The purchase intention variable has an R-square of 0.936, meaning that 93.6% of its variance is explained by the attitude, religiosity, subjective norm, and perceived behavioral control variables; the remaining 6.4% is attributable to error variance and to other variables, which requires further research. The actual purchase variable has an R-square of 0.710, meaning that 71% of its variance is explained by the attitude, religiosity, subjective norm, perceived behavioral control, and habit variables; the remaining 29% is attributable to error variance and to other variables, which requires further research.
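The unexplained shares quoted above follow directly as one minus each R-square:

```latex
1 - 0.838 = 0.162\ (16.2\%), \qquad
1 - 0.936 = 0.064\ (6.4\%), \qquad
1 - 0.710 = 0.290\ (29\%).
```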
Hypothesis Testing
Table 5 answers the research questions through the significance values between variables, namely a t-value > 1.64. The attitude, subjective norm, religiosity, and perceived behavioral control variables show a positive and significant influence on purchase intention because they have t-values above 1.64, so those hypotheses are accepted. The purchase intention and habit variables show a positive and significant influence on the actual purchase variable because they have t-values above 1.64, so those hypotheses are accepted. Furthermore, the habit variable moderates the relationship between purchase intention and actual purchase. Figure 2 shows the path coefficient for each relationship between the variables in this study, estimated using SmartPLS. The path coefficients in the structural model indicate the strength of the influence of each independent variable on its dependent variable. For example, for the first hypothesis, attitude influences purchase intention with a regression coefficient of 0.105, a standardized effect of 0.105 on the purchase intention variable.
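As a compact illustration of this decision rule, the sketch below replays the t-values reported in the text against the one-tailed 5% critical value of 1.64; the only logic involved is the rule that t > 1.64 implies significance.

```python
# Bootstrap t-values as reported in the text (one-tailed critical value 1.64)
paths = {
    "Attitude -> Purchase Intention": 1.705,
    "Subjective Norm -> Purchase Intention": 3.225,
    "Perceived Behavioral Control -> Purchase Intention": 1.967,
    "Purchase Intention -> Actual Purchase": 9.542,
    "Habit -> Actual Purchase": 4.256,
    "Habit x Purchase Intention -> Actual Purchase": 2.383,
    "Religiosity -> Attitude": 47.479,
    "Religiosity -> Purchase Intention": 9.928,
}
T_CRIT = 1.64  # one-tailed z/t critical value at alpha = .05

for path, t in paths.items():
    verdict = "significant" if t > T_CRIT else "not significant"
    print(f"{path}: t = {t:.3f} -> {verdict}")
```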
Figure 2. Structural Model
The relationship between attitude and purchase intention has a t-value of more than 1.64 (1.705), indicating that attitude influences purchase intention, so H1 is accepted, with a regression coefficient of 0.105. The relationship between subjective norms and purchase intention has a t-value of more than 1.64 (3.225), indicating that subjective norms affect purchase intention, so H2 is accepted, with a regression coefficient of 0.166.
The relationship between perceived behavioral control and purchase intention has a t-value of more than 1.64 (1.967), indicating that perceived behavioral control influences purchase intention, so H3 is accepted, with a regression coefficient of 0.072. The relationship between purchase intention and actual purchase has a t-value of more than 1.64 (9.542), indicating that purchase intention influences actual purchase, so H4 is accepted, with a regression coefficient of 0.645.
The relationship between habit and actual purchase has a t-value of more than 1.64 (4.256), indicating that habit influences actual purchase, so H5 is accepted, with a regression coefficient of 0.329. The habit variable moderates the relationship between the purchase intention and actual purchase variables: a t-value of more than 1.64 (2.383) indicates that habit negatively moderates the relationship between purchase intention and actual purchase, so H6 is accepted, with a regression coefficient of 0.140.
The relationship between religiosity and attitude has a t-value of more than 1.64 (47.479), indicating that religiosity influences attitude, so H7 is accepted, with a regression coefficient of 0.915. The relationship between religiosity and purchase intention has a t-value of more than 1.64 (9.928), indicating that religiosity affects purchase intention, so H8 is accepted, with a regression coefficient of 0.658.
Conclusion
This study aims to determine the effect of attitude, subjective norm, perceived behavioral control, and religiosity on purchase intention and its direct effect on actual purchase, with habit as a moderating variable. Data processing for analyzing the relationships between variables uses Partial Least Squares (PLS) with SmartPLS 3.2.9 software. Based on the results of the analysis in the previous chapter, it can be concluded that attitude has a positive and significant effect on purchase intention, with a t-value of more than 1.64 (1.705) and a regression coefficient of 0.105. Subjective norm has a positive and significant effect on purchase intention, with a t-value of more than 1.64 (3.255) and a regression coefficient of 0.166. Perceived behavioral control has a positive and significant effect on purchase intention, with a t-value of 1.967 and a regression coefficient of 0.072. Purchase intention has a positive and significant effect on actual purchase, with a t-value of more than 1.64 (9.542) and a regression coefficient of 0.645. Habit has a positive and significant effect on actual purchase, with a t-value of more than 1.64 (4.256) and a regression coefficient of 0.329. Habit moderates the relationship between purchase intention and actual purchase, with a t-value of more than 1.64 (2.383) and a regression coefficient of 0.140. Religiosity has a positive and significant effect on attitude, with a t-value of more than 1.64 (47.479) and a regression coefficient of 0.915. Religiosity has a positive and significant effect on purchase intention, with a t-value of more than 1.64 (9.928) and a regression coefficient of 0.658.
Figure 1. Conceptual Framework (Amalia et al., 2020)

The hypotheses, following Amalia et al. (2020), are:
H1: Attitude significantly influences Purchase Intention of halal food by Muslim students in Tasikmalaya.
H2: Subjective Norm significantly influences Purchase Intention of halal food by Muslim students in Tasikmalaya.
H3: Perceived Behavioral Control significantly influences Purchase Intention of halal food by Muslim students in Tasikmalaya.
H4: Purchase Intention significantly influences Actual Purchase of halal food by Muslim students in Tasikmalaya.
H5: Habit significantly influences Actual Purchase of halal food by Muslim students in Tasikmalaya.
H6: Habit negatively moderates the impact of Purchase Intention on Actual Purchase of halal food by Muslim students in Tasikmalaya.
H7: The religiosity of Muslim students in Tasikmalaya can impact Attitude toward Purchase Intention of halal food.
H8: The religiosity of Muslim students in Tasikmalaya can impact Purchase Intention of halal food.

Figure 2. Normality Test
Table 4. Value of Structural Inner Model

In the model fit test, the analysis is done manually using the GoF formula below. The resulting model fit value, 0.86, is greater than the 0.38 threshold, so it can be concluded that the model used in this study fits well.
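The GoF formula itself did not survive extraction; the standard global goodness-of-fit index for PLS path models (Tenenhaus et al.), which is presumably what the authors computed against the 0.38 cut-off, is

```latex
\mathrm{GoF} = \sqrt{\overline{\mathrm{AVE}} \times \overline{R^{2}}}
```

where the bars denote the mean AVE (communality) over the constructs and the mean R-square over the endogenous variables. With the mean R-square reported above, (0.838 + 0.936 + 0.710)/3 = 0.828, a GoF of 0.86 would imply a mean AVE of about 0.86^2/0.828 = 0.89. | 2023-09-16T15:15:07.757Z | 2022-09-30T00:00:00.000 | {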
"year": 2022,
"sha1": "a84d968406ee6fefa39e737df9c801f33d6104a8",
"oa_license": "CCBYSA",
"oa_url": "https://jurnal.fisip.uniga.ac.id/index.php/jisora/article/download/87/81",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "85b5457e71e591d062cb049360240a65f1c981de",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
210983862 | pes2o/s2orc | v3-fos-license | PIPAC versus HIPEC: cisplatin spatial distribution and diffusion in a swine model
Abstract Purpose Pressurized intraperitoneal aerosol chemotherapy (PIPAC) is a novel approach for delivering intraperitoneal chemotherapy and offers perspective in the treatment of peritoneal carcinomatosis. The concept is based on a 12 mmHg capnoperitoneum loaded with drug converted into microdroplets. It was postulated to guarantee a more homogeneous drug distribution and tissular uptake than hyperthermic intraperitoneal chemotherapy (HIPEC). The aim of this study was to compare cisplatin peritoneal distribution and pharmacokinetics between HIPEC and PIPAC procedures in a healthy swine model. Methods Two groups of eight pigs underwent either HIPEC with cisplatin (70 mg/m2) at 43 °C for 60 min, or PIPAC with cisplatin (7.5 mg/m2) for 30 min. Postoperatively, peritoneal areas were biopsied, allowing a cartography of the peritoneal cavity. Tissular and plasmatic cisplatin concentrations were analyzed. Results Cisplatin distribution was heterogeneous in both groups, with higher concentrations obtained close to the delivery sites. The median total platinum peritoneal concentration per pig was higher in the HIPEC group than in the PIPAC group (18.0 μg/g versus 4.3 μg/g, p < .001), but the yield was 2.2 times better with PIPAC. Platinum concentrations were higher in the HIPEC group at all stations. At each time-point, cisplatin plasmatic concentrations were higher in the HIPEC group (p < .001) but beneath the toxicity threshold. Conclusions At the doses used in clinical practice, HIPEC guaranteed a higher cisplatin peritoneal uptake than PIPAC in this swine model. Spatial drug distribution was heterogeneous with both techniques, with hotspots close to the drug delivery sites. Nevertheless, considering the dose ratio, the IP drug uptake yield was better with PIPAC.
Introduction
In digestive, gynecological and primary peritoneal malignancies, peritoneal carcinomatosis (PC) indicates a poor prognosis. Different combinations of PC origin and peritoneal spread generate four clinical situations: limited spread, usually peritumoral; extended metastases accessible to complete surgical resection; borderline disease; and non-resectable PC. Each warrants a different therapeutic strategy.
For selected patients with resectable PC, complete cytoreductive surgery (CRS) provides better outcomes and is considered the only potentially curative treatment. Immediate postoperative hyperthermic intraperitoneal chemotherapy (HIPEC) combined with complete CRS can improve local control [1], showing encouraging oncological results [2][3][4]. Systemic chemotherapy is the current standard for patients with non-resectable disease, but yields insufficient oncological outcomes for PC of most origins [5][6][7]. Intraperitoneal (IP) chemotherapy could improve local control, increase survival and sometimes convert non-resectable disease to resectable disease.
The rationale behind IP drug administration is to exploit the pharmacokinetics of the peritoneal-plasma barrier: enhancing tumor nodule exposure to chemotherapy drugs while decreasing systemic passage and the associated toxicities [8]. However, IP therapy is limited by two pharmacokinetic issues: poor drug penetration into nodules, and heterogeneous spatial distribution within the peritoneal cavity [9]. Pressurized intraperitoneal aerosol chemotherapy (PIPAC) is proposed as a new IP therapy method that could overcome these limitations [10]. The 12-mmHg "therapeutic capnoperitoneum" PIPAC method was developed to ensure better local drug bioavailability with homogeneous distribution and tissue penetration [11]. However, recent data from ex vivo and postmortem swine models highlight non-uniform patterns of doxorubicin distribution [12].
In the present study, we aimed to compare the distribution and pharmacokinetic profiles of an IP drug after HIPEC and PIPAC in an in vivo swine model, using protocols similar to those applied in clinical practice [13,14]. Although previous reports investigated doxorubicin as the IP agent, we chose cisplatin due to its wider clinical use and the greater amount of available data [3,15].
Animals
This study was performed in 16 male Sus scrofa domesticus pigs. All pigs were allowed to acclimatize to the laboratory environment for 7 days, with free access to standard food and water. Then, the operations were performed. On postoperative day 8, the animals were sacrificed with intravenous injection of embutramide-mebezonium-tetracaine (T61; Intervet, France). This project complied with European regulations (Directive EU 86/609), and was approved by the Animal Ethics Committee of the National Veterinary School of Lyon (VetAgro Sup), France (agreement no. 1552).
Surgical procedure
The pigs received intramuscular premedication with 6 mg/kg tiletamine-zolazepam (Zoletil; Virbac, France). Then, a peripheral venous catheter was inserted in the auricular vein, and anesthesia was administered with 4 mg/kg propofol (Diprivan; AstraZeneca, UK), followed by orotracheal intubation. Animals were maintained under anesthesia using isoflurane and intravenous propofol. Heart rate, electrocardiogram, esophageal temperature and oxygen blood saturation were continuously monitored, and recorded every 5 min. Fluid resuscitation was achieved using isotonic saline and ringer lactate, with a mean volume of 2 L/pig. A central venous catheter was positioned in the jugular vein, and a femoral arterial catheter was placed. Hemodynamic parameters were monitored using the PICCO system (Pulsion, Germany).
The pigs were randomly assigned to undergo HIPEC (eight pigs) or PIPAC (eight pigs). Each pig's abdominal cavity was arbitrarily divided into 10 peritoneal stations, inspired by Sugarbaker's peritoneal areas [16], as described in Box 1. In the HIPEC group, closed-abdomen HIPEC was performed using the Cavitherm device (Soframedical, France). We placed two inflow tubes under the diaphragm (stations 1 and 3) and one outflow tube in the pelvis (station 6). One temperature probe was placed close to the mesentery root. For one hour, HIPEC was performed using a 70 mg/m2 cisplatin solution, with continuous abdominal massage and a target temperature of 43 °C [13]. Upon completion, intra-abdominal fluids were evacuated, and all tubes were removed.
In the PIPAC group, a 12-mmHg capnoperitoneum was insufflated following placement of two balloon trocars (12 mm and 11 mm; Applied Medical, Düsseldorf, Germany). A 10-mm micropump device (Reger Medizintechnik, Rottweil, Germany) was placed in station 8, facing station 4, and connected to a high-pressure injection device (Injektron 82M; MedTron, Saarbruecken, Germany). The micropump was sterilized once and changed after every two pigs. Abdominal tightness was documented using a CO2 zero-flow. Through the nebulizer, we applied a pressurized aerosol containing cisplatin (7.5 mg/m2 body surface) in 150 ml NaCl 0.9%. Injection parameters included a flow of 30 ml/min, a maximal upstream pressure of 200 psi, and an intra-abdominal pressure of 12 mmHg [11]. The therapeutic capnoperitoneum was maintained for 30 min at body temperature (37 °C), and exhausted using a closed system including a Buffalo particle filter (Medtek Devices, Inc., Lancaster, USA).
Cisplatin distribution analysis
After IP chemotherapy application, we performed nine parietal peritoneum biopsies (stations 1-9) and a full-wall jejunum biopsy to analyze the visceral peritoneum (station 10). Peritoneal samples were separated from the sub-serosal fatty tissues, rinsed with an isotonic saline solution and immediately deep-frozen at −20 °C.
Cisplatin pharmacokinetics
Blood samples were collected at the end of chemotherapy administration (T0), and at 30 (T30) and 60 (T60) min after chemotherapy completion. Blood samples were centrifuged to obtain plasma samples, which were deep-frozen at −20 °C.
Platinum assay
Total plasmatic and peritoneal platinum were measured using an inductively coupled plasma mass spectrometer (ICP-MS Nexion 350XX, Perkin Elmer Life and Analytical Sciences, Waltham, MA, USA). Platinum analysis was performed according to a field application report of Perkin Elmer. Total plasmatic platinum concentrations were expressed in mg/L. For peritoneal platinum measurements, peritoneum samples were dried overnight at 40 °C and then for 3 h at 105 °C, followed by acid digestion [HNO3 (Plasma Pure, SCP Sciences, Quebec, Canada) at 60 °C overnight]; results were expressed as μg of platinum per gram of dry tissue.
Statistical analysis
All variables are presented as mean (standard deviation) or median (min-max). Due to the small sample size, the groups were compared using non-parametric tests, with the exact Mann-Whitney test used for quantitative variables. We compared the topographic distribution of cisplatin tissue concentrations between the HIPEC and PIPAC groups, globally and by station. Tissue concentrations were also analyzed within groups using non-parametric tests for matched pairs with correction for multiple testing (Friedman). For all tests, a p value < .05 was the statistical significance threshold.
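As an illustration of this testing scheme, the sketch below shows how the two analyses could be run in Python with SciPy. The per-pig values are hypothetical placeholders, not the study's data; `mannwhitneyu` (with its exact method) and `friedmanchisquare` are standard SciPy calls.

```python
import numpy as np
from scipy import stats

# Hypothetical per-pig median peritoneal platinum concentrations (ug/g), n = 8 per group
hipec = np.array([15.2, 18.0, 21.4, 17.1, 19.8, 16.5, 22.0, 18.6])
pipac = np.array([3.9, 4.3, 5.1, 4.0, 4.8, 3.7, 4.5, 4.2])

# Exact Mann-Whitney test for the small-sample between-group comparison
u, p = stats.mannwhitneyu(hipec, pipac, alternative="two-sided", method="exact")
print(f"Mann-Whitney U = {u}, exact p = {p:.4f}")

# Friedman test across the 10 matched peritoneal stations within one group:
# each row holds one station's concentrations over the same 8 pigs (placeholder data)
stations = np.random.default_rng(0).gamma(2.0, 2.0, size=(10, 8))
chi2, p_f = stats.friedmanchisquare(*stations)
print(f"Friedman chi2 = {chi2:.2f}, p = {p_f:.4f}")
```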
Results
All experiments were performed as scheduled. Mean weight on the day of surgery was 38 kg (35-43 kg).
Peritoneal platinum concentrations
Peritoneal platinum concentrations are presented in Table 1 and Figure 1. The median total platinum peritoneal concentration was significantly higher in the HIPEC group (18.0 μg/g) than in the PIPAC group (4.3 μg/g) (p < .001). The ratio between drug posology and median total platinum concentration was 2.23 times higher with PIPAC than with HIPEC. Considering only the parietal peritoneum biopsies (stations 1-9), median platinum concentrations were 18.8 μg/g in the HIPEC group and 4.8 μg/g in the PIPAC group (p < .0001). In the visceral peritoneum (station 10), mean platinum concentrations were 3.17 μg/g in the HIPEC group and 0.75 μg/g in the PIPAC group (p = .005). Drug uptake distribution analysis revealed that the mean and median platinum concentrations were higher in the HIPEC group at all stations, with a significant difference at 7 of 10 stations (Figure 1). Intra-group analysis of the spatial distribution of platinum uptake revealed wide heterogeneity among stations in each group (p < .001 for each group) (Table 1), with greater platinum concentration variability in the HIPEC group (Figure 1). The HIPEC group exhibited higher concentrations in samples from the right and left flanks (stations 4 and 8), next to the inflow tubes. The PIPAC group showed higher platinum uptake at station 4, facing the extremity of the micropump (Figure 1).
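The reported 2.23 figure can be reconstructed from the numbers above, assuming it denotes the dose-normalized uptake ratio:

```latex
\frac{70}{7.5} \approx 9.3\ \text{(dose ratio)}, \qquad
\frac{18.0}{4.3} \approx 4.2\ \text{(uptake ratio)}, \qquad
\frac{9.3}{4.2} \approx 2.2\ \text{(PIPAC yield advantage)}.
```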
Total platinum plasma concentration
Mean total platinum plasma concentrations are presented in Figure 2 and Table 2. At each time-point, concentrations were significantly higher in the HIPEC group than in the PIPAC group (p < .001). Peak mean plasma concentrations were 0.513 (0.174) mg/L at T0 in the HIPEC group, and 0.121 (0.021) mg/L at T30 in the PIPAC group. (Table 2 note: values are means (standard deviation) expressed in mg/L; T + x denotes the timing of sampling in minutes; (a) mean calculated with data from six pigs.)
Discussion
We compared HIPEC and PIPAC in a swine model, using treatment settings from daily clinical practice. Analysis of cisplatin peritoneal uptake revealed that both techniques resulted in heterogeneous spatial distribution, with significantly higher drug concentrations near the administration sites. HIPEC enabled higher peritoneal platinum concentrations than PIPAC but, considering the roughly 10-fold difference in the doses applied, PIPAC's yield was 2.2 times better.
When considering intraperitoneal administration, spatial drug distribution is important. PIPAC is reportedly better than HIPEC in this context, providing a homogeneous, "gas-like" drug distribution throughout the abdominal cavity via the formation of microdroplets carried by the 12-mmHg capnoperitoneum [11]. Initially, a heterogeneous distribution was reported, with nebulization of microdroplets smaller than 10 μm and enhanced staining near the micropump, without staining of the bursa omentalis and the inferior liver [17]. In 2012, Solass et al. [11] described a second-generation nebulizer. They compared staining with PIPAC versus conventional lavage, and found that PIPAC yielded a 150-fold higher dye concentration, with more intense staining and a larger stained peritoneal surface [11]. The same team also compared lavage with nebulization in an ex vivo model, assessing agent penetration into a peritoneal fragment inserted into a plastic box mimicking a peritoneal cavity [18].
The treatments were performed with 10 mL, without reference to the box volume. The PIPAC arm included an electrical gradient to enhance tissue penetration. Compared to lavage, nebulization allowed deeper (up to 1 mm) and more homogeneous agent uptake [18]. Thus, initial comparisons suggested that PIPAC provides more homogeneous distribution than liquid intraperitoneal treatment (considered equivalent to HIPEC). In contrast, our present experiments demonstrated heterogeneous cisplatin distribution in both groups, with higher concentrations near the delivery sites. This phenomenon is magnified by the higher concentration used with HIPEC, as attested by the greater variability among stations in that group, and by the wide heterogeneity of concentrations in the PIPAC group at station 6, the farthest from the micropump. The median total platinum peritoneal concentration was significantly higher following HIPEC than PIPAC (18.0 μg/g versus 4.3 μg/g, p < .001), and platinum concentrations were higher in the HIPEC group at all stations. A research team from Herne studied doxorubicin distribution via PIPAC, and also showed heterogeneous distribution. They reported that doxorubicin reached all parts of the peritoneum, showing varying penetration depths among areas [12], with the greatest depth (around 350 μm) near the micropump in the small intestine [12,19].
Khosrawipour et al. [19] evaluated the effects of changing various parameters in an ex vivo model, similar to that described above. They found that micropump position influenced drug distribution and penetration, with better results on the surface directly opposite the micropump. With the micropump closer to tissue, the penetration depth was higher in front of the micropump, but lower elsewhere. Higher doxorubicin concentration promoted significantly increased drug penetration, predominantly localized near the micropump. Enhanced internal pressure did not significantly increase the doxorubicin penetration depth [19].
Another study compared distribution between PIPAC and liquid IP chemotherapy in postmortem swine, applying planar scintigraphy and single-photon emission computed tomography after 99mTc-pertechnetate administration [20]. IP liquid administration was performed using 150 mL of marked solution. They found deviations from uniform drug distribution of 40% and 74% in the PIPAC group, and of 23% and 34% in the liquid group. All animals exhibited "hot spots": areas with higher deposition, which took up 20-30% of the total delivered 99mTc. In addition to the regions near the injection sites, the Douglas pouch was a hot spot in both groups, suggesting a role of gravity in drug repartition. In our present study, we found hot spots of cisplatin uptake in the stations near the inlet ports, but not in the pelvic area.
The results obtained with doxorubicin in ex vivo models are concordant with our present results using cisplatin in an in vivo model. The findings seem at least partly explainable by microdroplet size [20,21]. Ex vivo measurements indicate that the micropump delivers droplets with a mean size of 11 μm [3-15,22]. Göhler et al. [21] described the various aerosols generated with a micropump, showing a two-phase gas with 98% of the droplets being >3 μm and a median diameter of 25 μm. From 3 μm upwards, droplets are subject to gravitational settling and inertial impaction. The authors calculated that the target droplet size should be <1.2 μm for homogeneous drug distribution during PIPAC [21].
Although nebulization is a promising approach, neither PIPAC nor HIPEC can guarantee a homogeneous intraperitoneal drug distribution. The proposed theory that large droplets are affected by gravity seems intuitive, and is a basis for technological improvements. Proposed innovations to homogenize drug uptake include using radiation to decrease doxorubicin penetration [23,24], electrostatic precipitation to accelerate drug uptake [25], and an endoscopic microcatheter to perform microinvasive PIPAC [26]. Acting on droplet behavior seems to be the most promising idea. Adding an electrostatic field may enhance the precipitation of charged droplets and their tissue penetration. Willaert et al. [27] reported, in 48 patients, that electrostatic PIPAC (EPIPAC) was safe, feasible and efficient [27]. Göhler et al. [28] proposed reducing aerosol droplet size through hyperthermic intracavitary nanoaerosol therapy (HINAT), based on the extracavitary generation of a heated aerosol. They described an aerosol comprising droplets with a 1.3 μm median diameter, which provides a quasi-uniform distribution over the peritoneum in a swine model. This distribution yields better drug uptake, with a mean penetration depth of 226 μm (88) for HINAT, compared to 102 μm (104) for PIPAC-MIP [28].
The IP drug dosage determines both the efficacy (drug uptake in the peritoneum) and toxicity (systemic passage). We used cisplatin concentrations equivalent to those clinically prescribed [13,14]. Schematically, the dose ratio between HIPEC and PIPAC was 9.3, whereas the median tissular concentration ratio was 4.2, suggesting a 2.2 times better yield for peritoneal uptake of IP cisplatin with PIPAC. Thus, PIPAC intrinsically provides better cisplatin uptake per unit dose than HIPEC. Clinically used concentrations are constantly evolving. A phase I study with a standard 3 + 3 dose escalation design confirmed that the cisplatin concentration could be increased up to 10.5 mg/m² (third step) with no dose-limiting toxicity [29]. Nevertheless, a colon perforation occurred during surgical access, and other reports suggest an increased risk of anastomotic leakage with PIPAC, suggesting that a clinically usable dose limit exists for this technique [10,14,30]. While HIPEC was considered to induce renal insufficiency in 18% of patients at 80 mg/m² [31], nephroprotection with intravenous sodium thiosulfate seems to prevent renal failure, enabling the use of 100 mg/m² cisplatin in a recent prospective trial [3]. In our study, total cisplatin plasma levels were significantly higher after HIPEC than after PIPAC. Cisplatin's nephrotoxicity is the main dose-limiting factor in HIPEC, whereas perturbed renal function has not been described after PIPAC [32]. This risk of renal insufficiency remains concerning for patients with a poor prognosis [3,33]. Cisplatin pharmacokinetics studies are complex. After intravenous administration, many platinum fractions circulate, including aquated and unchanged cisplatin, and 'mobile' and 'fixed' metabolites bound to low or high molecular mass substances, respectively [34]. While the level of unchanged platinum seems to be the most relevant parameter for studying nephrotoxicity, total serum platinum could be used as a risk marker [35]. Data from the literature show that cisplatin-linked renal failure can occur at ≥1 mg/L [34,36-38]. The peak plasma concentration in our experiments was below this threshold, but we did not check serum creatinine, which could have been silently affected.
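To make the yield figure explicit, the 2.2 factor quoted above follows directly from the two measured ratios (a simple restatement of the numbers reported in this study, not an additional result):

```latex
% HIPEC/PIPAC dose ratio divided by the HIPEC/PIPAC tissue concentration
% ratio gives the relative peritoneal-uptake yield in favor of PIPAC:
\frac{\text{dose ratio}}{\text{tissue concentration ratio}}
  = \frac{9.3}{4.2} \approx 2.2
```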
Our study had several limitations, the main one being that the results were obtained in healthy tissues, without peritonectomies. Consequently, the plasma platinum concentrations were probably lower than those observed after CRS-HIPEC, which could influence drug distribution. Additionally, we studied cisplatin penetration into normal peritoneum, which differs from tumor nodules. However, Los et al. [39] described a higher cisplatin intratumoral concentration with a single IP administration compared to IV administration, with no increased IP pressure. This concentration was increased by repeated injections, with the advantage extending up to 1.5 mm inward from the tumor periphery [39]. Moreover, hyperthermia enhances platinum penetration, with concomitant decreases in cell survival both in vitro and in vivo [15]. Cisplatin-induced DNA-adduct formation was measured 3-5 mm into the tumor tissue following IP administration and hyperthermia [40].
HIPEC and PIPAC are not dedicated to the same clinical uses, but they are often compared [10,11,18]. Our study provides a snapshot of peritoneal cisplatin uptake using clinically established protocols. It showed a heterogeneous distribution in both groups and a higher tissular drug uptake with HIPEC. PIPAC provided a better yield with lower systemic passage and thus lower toxicity, confirming this technique as promising. Improvements are needed, possibly including HINAT, increased drug concentration, and repeated administration during the same session with the MIP position modified by microcatheter. | 2020-02-01T14:03:16.709Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "a8ad384ebffc829c00ad7f4c3d6f0816f95015a9",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/02656736.2019.1704891?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "56151beb62dde50807184b3eb8b323284773c201",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195791441 | pes2o/s2orc | v3-fos-license | Excitation spectrum of a trapped dipolar supersolid and its experimental evidence
We study the spectrum of elementary excitations of a trapped dipolar Bose gas across the superfluid-supersolid phase transition. Our calculations, accounting for the experimentally relevant case of confined systems, show that, when entering the supersolid phase, two distinct excitation branches appear, respectively connected to crystal or superfluid orders. These results confirm infinite-system predictions, showing that finite-size effects play only a small qualitative role. Experimentally, we probe compressional excitations in an Er quantum gas across the phase diagram. While in the BEC regime the system exhibits an ordinary quadrupole oscillation, in the supersolid regime, we observe a striking two-frequency response of the system, related to the two spontaneously broken symmetries.
Supersolidity, a paradoxical quantum phase of matter that combines crystal rigidity and superfluid flow, was suggested more than half a century ago as a paradigmatic manifestation of a state in which two continuous symmetries are simultaneously broken [1]. In a supersolid, the spontaneously broken symmetries are the gauge symmetry, associated with the phase coherence of a superfluid, and the translational invariance, signaling crystalline order. The striking aspect is that the same particles participate in developing two apparently antithetical, yet coexisting, orders. Originally predicted in quantum solids with mobile bosonic vacancies [2][3][4], the search for supersolidity has fueled research across different areas of quantum matter, from condensed matter to atomic physics, including quantum gases with non-local interparticle interactions [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19].
Remarkable experiments have recently revealed that axially elongated dipolar quantum gases can undergo a phase transition from a regular Bose-Einstein condensate (BEC), possessing a homogeneous density in the local-density-approximation sense, to a state with supersolid properties, where density modulation and global phase coherence coexist [15][16][17]. Such experiments, complementing the ones with BECs coupled to light [20][21][22], have opened a whole set of fundamental questions, covering the very meaning of superfluidity in a supersolid state, its shear transport, and its phase rigidity.
Of particular relevance is the study of the spectrum of elementary excitations, which governs the system's response to perturbations [23][24][25]. Typically, phase transitions occur in concomitance with drastic modifications of the excitation spectra, as in the case of the emergence of roton excitations in He II or the phononic dispersion of BECs, and similarly dramatic changes are expected when crossing the superfluid-supersolid transition. Theoretical studies of uniform (infinite) gases with periodic boundary conditions and soft-core [26,27] or dipolar interactions [14] have shown two distinct branches appearing in the excitation spectrum of a supersolid state, one for each broken symmetry. Their coexistence has been identified as an unambiguous proof of supersolidity, being the direct consequence of the simultaneous presence of superfluid and crystalline orders [2,[26][27][28].
The next major leap is to understand whether these trademarks survive, and can be measured, in the experimentally relevant regime of a finite-size quantum gas confined in all three spatial dimensions. In this Letter, we address these points by performing full spectrum calculations and by experimentally exciting collective modes in an erbium quantum gas. Both theory and experiment show the existence of two distinct classes of excitations, one connected to crystal modes and the other to phase modes, providing the finite-size equivalent of the two-branch spectrum of infinite systems.
In our study, we consider a three-dimensional dipolar quantum gas confined in an axially elongated (y) harmonic trap with transverse orientation (z) of the atomic dipoles. These systems are well described by an extended Gross-Pitaevskii equation (eGPE) that includes nonlinear terms accounting for contact interactions depending on the scattering length a_s, the anisotropic long-range dipole-dipole interaction (DDI), and quantum fluctuations in the form of a Lee-Huang-Yang-type correction [12, 14-17, 19, 29-33]; see also [34]. We calculate ground-state (GS) wavefunctions, ψ_0(r), by minimizing the energy functional resulting from the eGPE using the conjugate-gradients technique [35]. As shown in Fig. 1 (insets), the ground state evolves with decreasing a_s from a regular BEC (a,b) to a supersolid state (SSS) with axial density-wave modulation (c-f) and finally to an insulating array of independent droplets (ID) (g-h) [7,14,15,17,27].
The spectrum of elementary excitations is calculated by numerically solving the Bogoliubov-de Gennes (BdG) equations, which are obtained by expanding the macroscopic wavefunction as ψ(r, t) = [ψ_0(r) + η(u_l e^{−iε_l t/ħ} + v_l* e^{iε_l t/ħ})] e^{−iµt/ħ}, with η ≪ 1, and linearizing the eGPE around ψ_0 [13,25,35,36]. Here, µ is the GS's chemical potential. By solving the resulting eigenvalue problem, we find a set of discrete modes, numbered by l, of energy ε_l = ħω_l and amplitudes u_l and v_l. We calculate the dynamic structure factor (DSF), S(k, ω), which informs on the system's response when its density is perturbed at a given modulation momentum k and with an energy ħω [25,37,38]. Whereas in the absence of an external trap the spectrum is continuous and the DSF is a δ-peak resonance at the Bogoliubov mode (ω_l, k_l), the confining potential yields instead a discretization of the excitation spectrum and a broadening in k of S(k, ω). For the considered parameters, these finite-size effects are more pronounced in Er than in Dy, since the latter exhibits a larger number of maxima in the density-modulated phases, rendering its excitation spectrum more reminiscent of the infinite-system case; see e.g. Fig. 1.
Figure 1 shows the calculated excitation spectrum for GSs in the regular BEC, the SSS, and the ID phases for a Dy (upper row) and an Er (lower row) quantum gas. In the BEC regime close to the SSS transition (a, b), the spectrum of excitations shows a single excitation branch with the characteristic phonon-roton-maxon dispersion of a BEC [39][40][41][42][43], as recently measured [44]. When the roton fully softens (at a_s = a_s*), the GS becomes density modulated with a wave number close to the roton one, k_rot. Here, the excitation spectrum develops additional structures, marked by the appearance of nearly degenerate modes (c, d). When lowering a_s, we find that these modes start to separate in energy, with some hardening and others softening, and two excitation branches become visible (e, f). This result resembles that of infinite systems, where the broken translational and gauge symmetries are each associated with the appearance of one excitation branch [14,26,27]. Additionally, we observe that the spectrum acquires a periodic structure, reminiscent of Brillouin zones in a crystal, with reciprocal lattice constant k ≈ k_rot. Modes with energy higher than the maxon seem to have a single-droplet-excitation character and will be the subject of future investigations. When further decreasing a_s < a_s*, the lower-lying branch decreases both in energy and in DSF values, whereas the opposite occurs for the higher branch. Eventually, when reaching the ID regime, the lower branch progressively vanishes, underlining the disappearance of global superfluidity (g, h).
We focus on the properties of the excitation spectrum in the supersolid regime. The interesting question is how the two branches relate to the two orders in the system, crystal and superfluid. To gain insight, we study the system's dynamics when a single mode l is excited with amplitude η ≪ 1 by restricting the wavefunction expansion above to that mode. The subsequent time evolution of the axial density profile is shown in Fig. 2(a-c) for three relevant cases. For simplicity, only the two extremes of the mode oscillation are shown. The mode character can be understood by noting that phase gradients correspond to mass currents. Large gradients inside a density peak imply motion of the density peak (e.g. Fig. 2(a)) and relate to crystal modes. Large phase gradients between density peaks signify a superfluid current of particles tunneling from one density peak to another (e.g. Fig. 2(b)) and are associated with phase modes. However, in our system, the phase/crystal mode classification is not strict and we find that these two characters mix; see Fig. 2(a-c). In particular, we observe both behaviors simultaneously in Fig. 2(c). Such a mixing is expected from the long-range nature of the DDI, which couples the density and position of the peaks [26,27]. Note that the character of a mode can change with a_s. For instance, the mode in Fig. 2(c) develops an almost pure crystal character for decreasing a_s. To quantify a mode's character, we plot in Fig. 2(d) the DSF spectrum at a fixed a_s, colored according to the ratio C of phase variances inside, and between, the density peaks [34]. This allows us to differentiate the dominant character of the two branches, being phase-type for the lower branch and crystal-type for the upper one.
To test our predictions, we experimentally study the collective excitations of an erbium quantum gas across the BEC, supersolid, and ID phases. We prepare a BEC at a_s = 64 a_0. The atoms are confined in an axially elongated optical-dipole trap of harmonic frequencies (ν_x, ν_y, ν_z) = (259(2), 30(1), 170(1)) Hz and polarized along z by an external magnetic field; see [13,17]. To probe our system, we perform standard absorption imaging after 30 ms of time-of-flight expansion, yielding measurements of the momentum-space density n(k_x, k_y) [34]. Using the tunability of the contact interaction via magnetic Feshbach resonances [45], we can prepare the system at desired locations in the phase diagram, in the BEC, SSS, or ID phase, by linearly ramping down a_s in 20 ms to the target value. We then allow the system to stabilize for 10 ms. At this point we record an atom number of typically 5 × 10^4 in the SSS regime. We confirmed the relevant a_s ranges by repeating the matter-wave interferometric analysis of Ref. [17]. While in the BEC region the momentum distribution shows a regular, nearly Gaussian single peak, in the SSS regime the in-trap density modulation gives rise to coherent interference patterns along k_y, consisting of a central peak with two lower-amplitude side peaks; see Fig. 3(a).
After preparing the system in the desired phase, we excite collective modes in the gas by suddenly reducing the axial harmonic confinement to 10% of its initial value (i.e. ν_y ≈ 3 Hz) for 1 ms, before restoring it again. The atomic cloud is subsequently held for a variable time t_h before being released from the trap, and we record the time evolution of n(k_x, k_y). As the lifetime of the supersolid state is limited to around 40 ms [17], we focus on t_h ≤ 30 ms. As expected, in the BEC phase we predominantly observe an oscillation of the axial width, connected to the lowest-lying quadrupole mode [25]. In the SSS regime, the situation is more complex; see Fig. 1(c-f). Here, multiple modes, of both crystal and phase character, can be simultaneously populated, resulting in a convoluted dynamics of the interference pattern.
We therefore employ a model-free statistical approach, known as principal component analysis (PCA) [46], to study the time evolution of the measured interference patterns at a fixed a_s. This powerful method has been successfully used to study, e.g., matter-wave interference [47] and collective excitations [48] in ultracold-gas experiments. The PCA analyzes the correlations between pixels in a set of images, decomposes them into uncorrelated components, and orders these principal components (PCs) according to their contributions to the overall fluctuations in the dataset.
In a dataset probing the system's dynamics after an excitation, the PCA can identify the elementary modes, with the PCs' weights in the individual images exhibiting oscillations at the mode frequencies [34,48]. We apply the PCA to the time evolution of the interference patterns after the trap excitation. Figure 3(b) shows the PCA results in the SSS regime at a_s = 49.8 a_0. We identify two leading PCs, which we label PC1 and PC2. Their weights oscillate with different amplitudes and at distinct frequencies, namely 41(1) Hz for PC1 and 52(5) Hz for PC2. The comparison between the measured frequencies and the theoretically calculated mode energies indicates that, following our trap excitation, the second and third lowest-lying even modes are simultaneously populated. As shown in Fig. 2(b) and (c), these modes possess a phase and a mixed character, respectively. Note that we apply an overall shift of −4.3 a_0 to the a_s value for the experimental data; for more details see the discussion in Refs. [44,49].
To visualize the role of each PC in the interference-pattern dynamics, we apply a partial recomposition of the images, accounting only for the PC of interest; see [34]. The effect of PC1 on the axial dynamics is shown in Fig. 3(c), mainly consisting of an axial breathing of the central peak, accompanied by a weaker in-phase breathing of the side peaks. Instead, PC2 exhibits a dominant variation of the side-peak amplitude; see Fig. 3(d). These results show good agreement with the calculated time evolutions of the interference patterns for the second and third even modes, shown in Fig. 3(e-f).
Finally, we study the evolution of the modes across the BEC, SSS, and ID phases. We repeat the collective-excitation measurements for various a_s and, using the PCA, we extract the oscillation frequencies of all the leading PCs. Figure 4 shows our experimental results together with the mode tracking from the BdG-spectrum calculations. For a given elementary mode l, we plot ω_l as well as the response amplitude R_l ∝ |⟨ε_l|ŷ²|0⟩|², which indicates the probability of the mode being excited by our trap-excitation scheme. Here, |0⟩ and |ε_l⟩ denote respectively the ground state and the excited state of interest, and ŷ is the axial position operator. For completeness, the figure shows both even and odd modes, although only even modes couple to our trap-excitation scheme.
In the BEC regime, besides the roton mode that progressively softens with decreasing a_s, the other modes show a regular spacing in energy and are nearly constant with a_s. Both in theory and in experiment, we observe that just one mode couples to the trap-excitation scheme. This mode has a compressional, axial breathing character. Experimentally, we observe that all the leading PCs oscillate at the same frequency, suggesting that they account for the same mode. In this regime, both the PCs' frequencies, ω_l, and R_l remain rather constant. At the supersolid phase transition, reached around a_s = 50.6 a_0, the numerical calculations reveal that different modes undergo an abrupt change and can mix with each other. Their energy and phase/crystal character exhibit a strong dependence on a_s. Here, several modes respond to the trap-excitation scheme, as shown by the values of R_l. In the PCA we observe that the leading PCs now oscillate at distinct frequencies and have different characters (see also Fig. 3). One set of PCs reduces its frequency when lowering a_s, indicating (at least) one phase mode that softens strongly in the SSS regime, even below the trap frequency ν_y. Another set of PCs shows a slight increase of frequency with decreasing a_s, indicating a mode that hardens. Calculations of C show that this mode changes character along the phase diagram and eventually becomes crystal-type.
In conclusion, the overall very good agreement between experiment and theory confirms the calculations in the SSS regime, revealing two distinct branches with respective crystal and superfluid characters. The trademarks of supersolidity expected in infinite systems thus carry over to the finite-size systems currently available in laboratories. The knowledge of the excitation spectrum will provide the basis for future investigations of the superfluid properties and phase rigidity of a supersolid state.
Our theory is based on an extended version of the Gross-Pitaevskii equation (eGPE),

iħ ∂ψ(r, t)/∂t = [−ħ²∇²/(2m) + V(r) + ∫ dr′ U_int(r − r′)|ψ(r′, t)|² + ∆µ[n]] ψ(r, t),    (1)

where ψ(r, t) is the dipolar quantum gas's wave function and U_int combines the contact and the dipolar interactions. The eGPE includes the kinetic energy, the external trap potential and the mean-field effect of the interactions [25,59]. The first three terms of Eq. (1) account for the kinetic energy, the external harmonic trapping potential, and the mean-field interactions, respectively. The latter includes the contact and the dipolar interactions. In order to study the supersolid phase, it is fundamental to also include a beyond-mean-field correction to stabilize the supersolid state against the roton instability. This is done by adding a term in the form of the Lee-Huang-Yang correction, ∆µ[n] [12, 14-17, 19, 29-33]; see also [8,54,55,58]. This is typically included as a correction to the chemical potential obtained under the assumption of the local density approximation [56,57]. However, recent experimental results have raised questions about the range of validity of such a treatment, since quantitative disagreements at the level of a few % have been observed when comparing theory results with experimental findings [13,32,44,49,52,53]. To the best of our knowledge, this is still an open question, which will need additional theoretical investigation. To compensate for this effect, throughout this Letter we shift a_s by −4.3 a_0. To calculate the ground-state (GS) wavefunction, ψ_0(r), we then minimize the energy functional resulting from the eGPE using the conjugate-gradients technique [35].
In a next step, we study the Bogoliubov-de Gennes (BdG) excitation spectrum of a dipolar Bose-Einstein condensate trapped in a harmonic cigar-shaped potential [25,35]. Our calculations are obtained by expanding the wavefunction ψ(r, t) around ψ_0(r). Here, we write ψ(r, t) = [ψ_0(r) + η(u_l(r) e^{−iω_l t} + v_l*(r) e^{iω_l t})] e^{−iµt/ħ}, where η ≪ 1 and µ is the chemical potential of the ground state. The spatial modes u_l and v_l oscillate in time with the corresponding frequency ω_l = ε_l/ħ. We then linearize the eGPE around ψ_0 at first order in η. By solving the resulting set of coupled linear equations, we obtain the discrete modes, numbered by l, of energy ε_l and amplitudes u_l and v_l. We define the even (odd) parity of a mode from its amplitudes u_l and v_l being symmetric (antisymmetric) in y.
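The coupled linear equations themselves are not written out above. As a hedged illustration, they take the standard BdG matrix form, shown here for the contact-interaction part only (our schematic, not the authors' full equations; in the present dipolar system the interaction terms acquire additional non-local contributions from the DDI and the LHY correction):

```latex
% Schematic BdG eigenvalue problem (contact part only; illustrative form)
\begin{pmatrix}
  H_0 - \mu + 2g|\psi_0|^2 & g\psi_0^2 \\
  -g\psi_0^{*2}            & -\left(H_0 - \mu + 2g|\psi_0|^2\right)
\end{pmatrix}
\begin{pmatrix} u_l \\ v_l \end{pmatrix}
= \varepsilon_l \begin{pmatrix} u_l \\ v_l \end{pmatrix},
\qquad
H_0 = -\frac{\hbar^2 \nabla^2}{2m} + V(\mathbf{r})
```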
In order to illustrate the spectrum, we compute the dynamic structure factor (DSF), since it directly gives information about the density response of the system when perturbed at specific energies and momenta. At T = 0 the DSF is defined as [13,37]

S(k, ω) = Σ_l |∫ dr [u_l*(r) + v_l*(r)] e^{ik·r} ψ_0(r)|² δ(ω − ω_l),    (2)

where the sum runs over the different spatial modes and k is the wave vector. In Fig. 1 and Fig. 2 we plot the DSF of Eq. (2). For better visualization, we use an energy broadening of 0.09 hν_y and 0.12 hν_y for Fig. 1 and Fig. 2, respectively, similar to what was done in Ref. [37].
Defining the mode character
Within Bogoliubov theory and in the linear regime, the effect of populating the mode l on the global state dynamics can be studied using the following expression [25]:

ψ(r, t) e^{iµt/ħ} ≈ √(|ψ_0(r)|² + 2η δρ_l(r) cos(ω_l t)) e^{−iη δϕ_l(r) sin(ω_l t)},

where the density fluctuations δρ_l = (u_l + v_l*)|ψ_0| and the phase fluctuations δϕ_l = (u_l − v_l*)/|ψ_0| have been separated.
In order to evaluate the dominant character of each mode l, we introduce the quantity C. As discussed in the main text, crystal and phase modes differ from each other by the spatial region where δϕ_l varies the most. For crystal modes, this is inside the density peaks, resulting e.g. in a center-of-mass motion of one individual peak, which leads to a change of the crystal structure. In contrast, for phase modes, δϕ_l changes the most between neighboring peaks, signaling a particle exchange between peaks and thus a modification of the atom numbers in the peaks. We quantify these two types of character by computing the spatial variance of δϕ_l(r) inside the density peaks, V_in, and in between them, V_out. The quantities V_in and V_out are defined as follows.
For a given axial density cut of the GS wave function, |ψ_0(0, y, 0)|², we first define the regions inside (between) the density peaks by identifying the different density maxima (minima) and numbering them by j ∈ [1, N_in(out)]. In a next step, we compute the mean distance d between all density minima and their neighboring maxima. Finally, we isolate the region R_j = [−d/3, +d/3] of space centered around each maximum (minimum) and calculate the spatial variance of δϕ_l within each such region; V_in (V_out) is then obtained from the variances computed around the maxima (minima). The mode character is then evaluated by considering the ratio C = V_in/V_out. C is large for modes with prevalent crystal character and small for those with dominant phase character. In Fig. 2(d) we encode the information on C as a color scale on the DSF spectrum. The same color map is used to illustrate the modes of panels (a-c) in Fig. 2, confirming their correct assignment.
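As an illustration, the classification procedure above could be sketched numerically as follows (our own minimal sketch, not the authors' code; it assumes a 1D axial cut of the GS density and of δϕ_l on a uniform grid, and the plain averaging over regions is our choice):

```python
import numpy as np
from scipy.signal import argrelextrema

def mode_character(y, density, dphi):
    """Ratio C = V_in / V_out of phase-fluctuation variances inside vs.
    between density peaks, for one axial cut (illustrative sketch)."""
    maxima = argrelextrema(density, np.greater)[0]  # peak centers
    minima = argrelextrema(density, np.less)[0]     # inter-peak centers
    # Mean distance d between neighboring extrema along the axial cut
    extrema = np.sort(np.concatenate([maxima, minima]))
    d = np.mean(np.abs(np.diff(y[extrema])))
    def regional_variance(centers):
        # Variance of dphi in R_j = [-d/3, +d/3] around each center,
        # combined by a plain average over the regions (our choice)
        return np.mean([np.var(dphi[np.abs(y - y[j]) <= d / 3])
                        for j in centers])
    v_in = regional_variance(maxima)   # inside density peaks
    v_out = regional_variance(minima)  # between density peaks
    return v_in / v_out  # large: crystal-like; small: phase-like
```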
Applying the principal component analysis to our data
Dataset for applying the PCA
To identify the excited modes from our experimental data, we apply a general statistical method called principal component analysis (PCA) [46][47][48] to a set of measured density distributions after time-of-flight expansion. For our trap-excitation measurement, a dataset for the PCA is composed as follows. For each target value of a_s, we record the time evolution of the density distribution for holding times t_h between 0 and 30 ms. Per t_h, we record between 15 and 30 repeated images, altogether yielding a dataset of N_m ≳ 200 images. Each experimental run i yields a two-dimensional density distribution n_i(k_x, k_y). By performing a simple two-dimensional Gaussian fit, we extract a 71 × 71-pixel region of interest (ROI) centered on the atomic cloud (the pixel width in k_x,y is 0.32 µm^−1). In addition, we post-select the shots in which the atom number, the axial cloud size, and the transverse cloud size deviate by less than 20%, 30%, and 15% from their mean values, respectively.
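A minimal sketch of this post-selection step might look as follows (function and variable names are ours; the per-shot observables are assumed to come from the Gaussian fits mentioned above):

```python
import numpy as np

def post_select(atom_number, axial_size, transverse_size):
    """Boolean mask of shots passing the stated post-selection cuts
    (20% / 30% / 15% maximum relative deviation from the mean)."""
    def within(values, tol):
        return np.abs(values / np.mean(values) - 1.0) < tol
    return (within(atom_number, 0.20)
            & within(axial_size, 0.30)
            & within(transverse_size, 0.15))
```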
PCA's working principle
To apply the PCA, we represent each ROI of a dataset as a vector ρ_i(s), where s is the index of the pixel (s ∈ [1, N_p], with N_p the number of pixels in one image). We compute the mean vector image ρ̄(s) = Σ_{i=1}^{N_m} ρ_i(s)/N_m and consider the variations of the pixel values in each vector image compared to ρ̄, δρ_i(s) = ρ_i(s) − ρ̄(s). Finally, we consider the covariance matrix of these variations, Cov(p, s) = Σ_{i=1}^{N_m} δρ_i(s) δρ_i(p)/(N_m − 1), which is real and symmetric. By simply diagonalizing the covariance matrix, the PCA constructs a new basis of N_p vector images, called principal components (PCs) and written C_p(s) (p ∈ [1, N_p]) in the original pixel basis, that are uncorrelated from one another. The PCs satisfy Cov C_p = λ_p C_p, where λ_p is the eigenvalue of the covariance matrix associated with the PC p. The original vector images can all be rewritten in this new basis as ρ_i(s) = ρ̄(s) + Σ_{p=1}^{N_p} w_{p,i} C_p(s), where w_{p,i} = Σ_{s=1}^{N_p} C_p(s) δρ_i(s) is the weight of the component p. We note that, by converting the pixel representation back to the original two-dimensional momentum space, the above decomposition reads

n_i(k_x, k_y) = n̄(k_x, k_y) + Σ_p w_{p,i} C_p(k_x, k_y),    (3)

where C_p(k_x, k_y) now encompasses the density-distribution change induced by the PC p. The fact that the covariance matrix is diagonal in the PC basis indicates that the PCs correspond to uncorrelated sources of variations in the dataset. More explicitly, the coefficients w_{p,i} show no correlations between different p. This feature makes the PCA a powerful tool, e.g. to identify and discriminate between elementary modes of different frequencies when applied to time-evolution data, as used in Ref. [48]. An example of the two leading PCs obtained in the supersolid region is given in Fig. S1.

Identifying the elementary modes of a quantum gas via the PCA

We briefly recall the working principle of the identification of modes via the PCA. In the linear regime, the contribution of each mode to the density oscillations is expected to decouple, separating temporal and spatial variations as

δn_l(r, t) ∝ ρ_l(r) cos(ω_l t + φ_l),    (4)

with φ_l an arbitrary phase for the mode l. This relation should also hold for the density distribution after the gas's free expansion. If one considers that the image index i encloses a time dependence (t_i), Eqs. (3) and (4) have a very similar structure, associating C_p(k_x, k_y) and w_{p,i} with ρ_l(r) and cos(ω_l t_i), respectively. Thus the PCA-based identification of uncorrelated components in the time evolution of the density profiles should enable the identification of the elementary modes of the system. The corresponding PCs' weights are then expected to oscillate in time at the frequencies ω_l of the modes. In particular, the PCA should separate modes oscillating at different frequencies and differentiate them from other sources of fluctuations or of dynamics (e.g. dissipation). Following Ref. [48], we note that modes can be properly distinguished if the period associated with their beating is smaller than the total time over which the time evolution is recorded, or, even for shorter probe times, if they have sufficiently different amplitudes of oscillation (i.e. excitation probabilities). One should note that a single mode can be accounted for by several PCs, especially if other sources of fluctuations or of time evolution couple to the mode's signature in the pixel correlations.
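As a concrete illustration of the decomposition described above, a minimal PCA implementation on a stack of flattened images could read as follows (our own sketch, not the authors' code; it assumes the images are already cropped to the ROI and flattened to N_p pixels each):

```python
import numpy as np

def pca_modes(images):
    """PCA of an image stack of shape (N_m, N_p).

    Returns the principal components C_p (rows, ordered by decreasing
    eigenvalue) and the weights w_{p,i} of each PC in each image."""
    mean_image = images.mean(axis=0)          # rho-bar(s)
    deviations = images - mean_image          # delta-rho_i(s)
    # Pixel-pixel covariance matrix, shape (N_p, N_p), real symmetric
    cov = deviations.T @ deviations / (len(images) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)    # Cov C_p = lambda_p C_p
    order = np.argsort(eigvals)[::-1]         # strongest PCs first
    components = eigvecs[:, order].T          # C_p(s), one PC per row
    weights = deviations @ components.T       # w_{p,i} = sum_s C_p(s) drho_i(s)
    return components, weights
```

When the images are ordered by hold time, weights[:, 0] then traces the oscillation of the leading PC, which can be fitted as described next.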
From our dataset with repeated realizations of each hold time t_h, we thus consider, for each PC p, the mean weight at time t_h, W_p(t_h), i.e. the average of w_{p,i} over all images i recorded at t_i = t_h. We then fit W_p(t_h) to a sine function A_0 + A_s cos(ωt_h + φ) and extract the PC's frequency (ω) and oscillation amplitude A_s. We then consider as relevant the PCs that show an oscillation of amplitude A_s > 8 × 10^−4 and frequency ν > 20 Hz, and for which the oscillation frequency can be extracted with a precision < 10%. Examples of the time evolution of W_p and of their fits are shown in Fig. 3.
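The fitting step could be sketched as follows (again our own illustrative code; the initial guesses are ours and would need tuning to the actual data):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_pc_weight(t_h, W_p):
    """Fit W_p(t_h) to A0 + As*cos(omega*t_h + phi); t_h in seconds.

    Returns the oscillation frequency in Hz and the amplitude As."""
    def model(t, A0, As, omega, phi):
        return A0 + As * np.cos(omega * t + phi)
    # Rough starting point: mean offset, half peak-to-peak amplitude,
    # a ~40 Hz oscillation (typical scale in this work), zero phase
    p0 = [np.mean(W_p), 0.5 * np.ptp(W_p), 2 * np.pi * 40.0, 0.0]
    popt, _ = curve_fit(model, t_h, W_p, p0=p0)
    A0, As, omega, phi = popt
    return omega / (2 * np.pi), abs(As)
```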
Partial recomposition
To isolate the effect of each PC on the complex time evolution of the interference patterns, we use a partial recomposition of the images inspired by Eq. (3). In particular, we define

n^(p)(k_x, k_y, t) = n̄(k_x, k_y) + W_p(t) C_p(k_x, k_y).    (5)

This is equivalent to considering that a single PC is "excited", similarly to what can be done in theory for the individual excited modes of the BdG spectrum (see Fig. 2 and its description in the main text and Supp. Mat.). In Fig. 3(c-d), we show examples of the axial cuts of n^(p)(k_x, k_y, t) for two of the leading PCs. We note that here, as for all experimental data shown in this manuscript, the axial cuts correspond to the average of the density distributions over |k_x| < 1.6 µm^−1.
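Continuing the hypothetical pca_modes sketch above, the partial recomposition of Eq. (5) amounts to a single broadcasted line:

```python
# Usage of the pca_modes sketch above: rebuild the image stack keeping
# only PC p, following Eq. (5) (p = 0 is the leading PC; illustrative)
components, weights = pca_modes(images)
mean_image = images.mean(axis=0)
p = 0
n_p = mean_image + weights[:, p, None] * components[p]  # shape (N_m, N_p)
```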
FIG. 3. PCA results at a_s = 49.8 a_0. (a) Example of a measured mean interference pattern in the renormalized central cut of the density distribution n(k_y) for t_h = 5 ms. (b) Time evolution of the weights of PC1 (filled) and PC2 (empty circles) together with their sine fits. Error bars denote the standard error of the mean. (c-d) Evolution of the partially recomposed n(k_y) for PC1 (c) and PC2 (d). (e, f) Calculated time evolution of n(k_y) from excitation of the modes shown in Fig. 2(b) and (c), respectively, using η = 0.15.
FIG. 4. Comparison between the mode energies obtained from the BdG calculations and the energies extracted from the PCs (circles). The gradual color code of the theory lines represents the relative strength of R_l, going from strong (red) to no (grey) coupling. Error bars denote one standard deviation from the fit. The background color indicates the BEC, supersolid, and ID regions (see upper labels), identified using a matter-wave interferometric analysis of the experimental data [17].
FIG. S1. Examples of the two leading PCs for our dataset at a_s = 50 a_0. (a) PC1 reveals a dominant fluctuation of the interference patterns in the central peak at k_y ≈ 0 µm^−1 (central blue region), with a slighter change of the side peaks at k_y ≈ ±2 µm^−1 (red regions). (b) PC2 shows fluctuations in the interference patterns' side peaks around k_y ≈ 2 µm^−1 and no significant change of the central peak. | 2019-07-03T15:17:28.000Z | 2019-07-03T00:00:00.000 | {
"year": 2019,
"sha1": "3112da588dd5ade27682ce269cdca89ecb09ef08",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1907.01986",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3112da588dd5ade27682ce269cdca89ecb09ef08",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
249283802 | pes2o/s2orc | v3-fos-license | Involuntary Career Changes: A Lonesome Social Experience
Like any other career process, career changes are influenced by relationships. Moreover, involuntary career changes are a challenging, yet understudied, career transition. Based on a relational perspective of work and careers, we investigated the way people’s social environment affects the process and experience of involuntary career changes. Specifically, we aimed to identify the sources of relational influences and to understand how these influences affect career changes. Semi-structured interviews were carried out with 14 adults who were forced to change career because of unemployment or health issues. Through thematic analysis, we identified three sources of relational influences (personal, work, and institutional environment) and three forms of influence that others had on career changes (positive, negative, and ambivalent). These influences manifested at four distinct moments of the process: When participants were leaving their former job, when they were shifting between their former occupation and a new livelihood, when they were exploring new career options, or when they were trying to implement their new career plan. Overall, results suggest that involuntary career changes are deeply shaped by heterogeneous and differentiated relational influences. The effect of the personal environment varied depending on the moment of the career change process. In particular, family and friends tended to be perceived as barriers when it came to shifting from the old to a new occupation and implementing a new career plan. The work environment mostly had a negative effect on the career change experience, suggesting the labor market might be somewhat refractory toward adult career changers. Institutions played a critical role throughout the change process, with support structures often being perceived as inappropriate, but with guidance professionals generally recognizing participants’ difficulties. Moreover, diverse forms of ambivalence characterized the identified relational influences, which were sometimes both appreciated and avoided or had ambiguous and fluctuating effects. Finally, although being a fundamentally social experience, involuntary career changes were also characterized by moments of loneliness that reflected the inadequacy of available support and a sense of shame associated with the status of career changer. Study limitations, research perspectives, and practical implications at the labor market, institutional, and individual levels are addressed.
INTRODUCTION

Involuntary Career Changes
During their working lives, people tend to go through several career transitions, which scholars have argued are on the rise in recent years (Fouad and Bynner, 2008;Rudisill et al., 2010;Urbanaviciute et al., 2019). Career transitions are defined as "moves across different types of boundaries" that may represent "both minor discontinuities and major interruptions in an individual career" (Chudzikowski, 2012, p. 298). Some transitions are highly normative and expected (e.g., the transition to retirement), whereas others are rather nonnormative and unanticipated (e.g., demotions) and can involve higher discontinuities. In addition, career transitions can be of several types (Heppner and Scott, 2006): entry or reentry transitions (e.g., the passage from school to work or labor market reintegration after a break), maintenance transitions (e.g., a role change within the same company or occupation), advancement transitions (e.g., a promotion toward a better position), and leave-or-seek transitions (e.g., a change of occupational sector). The latter are also called career or occupational changes and they imply a shift to a new occupation that is not in line with the previous occupation (Ibarra, 2006;Carless and Arnup, 2011;Peake and McDowall, 2012). While research on normative transitions and upward mobility is abundant, career changes have been less studied (Sullivan and Al Ariss, 2021).
Depending on the reasons and outcomes of this process, career changes can represent both opportunities to grow and perturbing periods in workers' careers (Carless and Arnup, 2011;Chudzikowski, 2012;Masdonati et al., 2017). Intentionality or willingness is a key dimension that tends to differentiate the experience of career changes (Heppner and Scott, 2006;Fouad and Bynner, 2008;Stuart et al., 2009;Zacher, 2017). In general, workers who voluntarily decide to change their career seek an improvement in their working (e.g., job satisfaction) or personal life (e.g., work-life balance). In contrast, other workers change job or occupation involuntarily, which might indicate a loss of control over their careers and lead to situations of uncertainty and decreased work and life satisfaction (Bernhard-Oettel and Näswall, 2015). Surprisingly, although voluntary career changes have been extensively studied (e.g., Négroni, 2007;Barclay et al., 2011;Howes and Goodman-Delahunty, 2014), little is known about the challenges of involuntary career changes (Peake and McDowall, 2012). Moreover, the rare studies on this topic focus on specific populations, such as veterans (e.g., Haynie and Shepherd, 2011;Kulkarni, 2020) and athletes (e.g., Arvinen-Barrow et al., 2018). The experiences of people who were forced to change career, such as workers with health issues (e.g., Baldridge and Kulkarni, 2017) or those who change occupational sector because of lack of job opportunities in their initial occupation (e.g., Bernhard-Oettel and Näswall, 2015), are then relatively undocumented.
Career Changes as Relational Experiences
Career changes have been addressed through various theoretical lenses (Baruch and Sullivan, 2022), such as the chaos theory of careers (Peake and McDowall, 2012), the life-span, life-space approach (Barclay et al., 2011), the life-course perspective (Howes and Goodman-Delahunty, 2014), a psychosocial transitional perspective (Masdonati et al., 2017), the stress-coping approach (Rudisill et al., 2010), and the boundaryless career perspective (Chudzikowski, 2012). Most of these theoretical frameworks refer to a social dimension that might affect career development processes. Nevertheless, the social dimension is often reduced to a background factor. Research employing these theoretical lenses mostly investigated the ways people's social environment contributed to the decision to change career. For example, Howes and Goodman-Delahunty (2014) showed that police officers and teachers initiated a career change because they felt unvalued and lacked social recognition. Professional opportunities stemming from social connections prompted occupational changes for midcareer individuals in Peake and McDowall (2012). More in line with the present article, the study by Motulsky (2010) appears to be one of the few that have thoroughly examined how an individual's social environment intervenes in the transition process beyond the decision to change career. It showed that women's voluntary midlife career changes were shaped by an articulation of social connections and disconnections from parents, the partner, friends, colleagues, and supervisors. However, none of these studies examined how social influences affect involuntary career change processes.
In parallel, broader theoretical approaches to career development (e.g., Brown and Lent, 2017;McMahon and Patton, 2019) increasingly emphasize the impact of contextual and social environmental factors on career paths, though in polarized and rather static ways (i.e., in terms of proximal and distal supports and barriers, see Sheu and Bordon, 2017). Kenny et al. (2018) proposed the relational perspective on work and careers to address the influences that different types of relationships can have on careers and to articulate the work and non-work challenges of people who struggle in their working lives. Rooted, among other things, in the earlier writings of Blustein (2011) and Flum (2015), this perspective is structured around four tenets. First, "Work is a vehicle for human connection" (Kenny et al., 2018, p. 138), meaning that working is a fundamentally relational experience and that relationships at work can provide social connection and meaning in life, but can also be detrimental for individuals' well-being. Second, "Family and other close relationships are vital domains of life experience that interact with work in reciprocal and complex ways" (p. 139). This tenet highlights that events at work affect life outside work (particularly one's family), and vice versa, and that this reciprocal influence can be either positive or negative. Third, "Relationships, both current and those internalized from past experience, affect the career development process and trajectory" (p. 139). In that sense, relationships can be both limiting (i.e., consist of barriers to one's career development) or positive (i.e., function as emotional or instrumental support). This tenet also suggests that a temporal dimension is to be considered when studying relational influences on careers: Past relational experiences affect the way people cope with current challenges at work and develop throughout their careers. Fourth, "Culture, social marginalization, and economic status exert critical roles in shaping work and relational experiences; their meaning; and the dynamics between work, family, and community life and opportunities and outcomes" (p. 139). This last tenet advocates a broad conceptualization of relationships, which are not limited to direct interactions with the proximal social environment, but also include cultural and societal processes. Thus, people also relate with the society they live in, including institutions, support structures, and the labor market.
When applied more specifically to involuntary career changes, the relational perspective on work and careers (Kenny et al., 2018) implies considering the extent to which relationships at work play a role in career change processes and outcomes (Tenet 1). It also suggests that relationships outside the work sphere possibly affect, and are affected by, career changes (Tenet 2). Moreover, the relational perspective suggests considering social supports and barriers from a temporal viewpoint, and particularly the influence of internalized past experiences on current career changes (Tenet 3). Finally, a broad conceptualization of relationships implies understanding the ways both proximal (e.g., co-workers and family) and distal factors (e.g., institutions and labor market) affect career changes (Tenet 4).
The Current Study
In sum, while voluntary work transitions are well documented, research is needed to understand the process and experiences of involuntary career changes. This is especially crucial because involuntary transitions are more likely to represent disruptive events for individuals as they are generally less anticipated, thus leaving less time and space for individuals to develop their resources and prepare for the transition compared to voluntary transitions (Fouad and Bynner, 2008;Blustein et al., 2013;Sullivan and Al Ariss, 2021). In particular, given the relational nature of careers (Blustein, 2011;Flum, 2015), it is of pivotal importance to study the influence of others on people's experiences of a career change. Research on relational processes during voluntary career change has suggested that this influence needs to be considered in a dynamic and nuanced way (Motulsky, 2010). Accordingly, the general aim of the present study was to understand how others influence the process and experience of involuntary career changes. Based on the relational perspective on work and careers (Kenny et al., 2018), this general aim was divided into two specific research goals. The first specific goal was to identify the sources of relational influences on involuntary career change processes and experiences. These sources can be deployed both within and outside the working sphere as well as at both proximal and distal levels. Our second specific goal was to understand how relational influences affect the process of involuntary career changes. The role of others is to be considered through a temporal perspective, which means studying the way both anticipated future relationships and internalized past relationships influence current experiences of career change.
The current study was carried out in the French-speaking part of Switzerland, home to 23% of the 8.5 million Swiss inhabitants. Similar to several other countries, workers in Switzerland show increasing occupational mobility intentions and behaviors (Office Fédéral de la Statistique [OFS], 2020). For example, between 2018 and 2019, one out of five workers quit their job. In the same period, 4.6% apparently left their jobs involuntarily, either because of health issues, firings, or expiring contracts. However, the proportion of this occupational mobility that involved a career change is not clear. From a statistical viewpoint, involuntary career changes seem an overlooked phenomenon or an issue that is difficult to quantify and grasp. Based on these statistics, the present study focused on the two populations that seem at greater risk of experiencing involuntary career changes, namely workers changing jobs because of health issues and people who are looking for a job in a new occupational sector because of a lack of opportunities in their initial occupational field. These two populations can benefit from state support, through either the public unemployment or the public disability scheme. Eligibility for state support for implementing a career change, as well as the extent and type of support, depends on the assessment of each beneficiary's situation, provided by career professionals such as job coaches and career counselors.
Paradigmatic Position
To address our study aims, we implemented an inductive qualitative research design based on thematic analyses of semistructured interviews with involuntary career changers. Inductive qualitative research is indeed considered an appropriate method to study recent changes in the world of work (Pratt and Bonaccio, 2016). According to the classification of qualitative research paradigms suggested by Ponterotto (2005), the present study can be qualified as postpositivist-constructivist. Indeed, our research objectives, data collection, and analysis strategies suggest that while recognizing the uniqueness of each participant's story, we assume the existence of a shared reality that is common to people living the same situation. Specifically, on the one hand, the research can in part be described as postpositivist, since it adopted semi-structured interviews with key questions asked to all participants, involved reaching consensus among coders, and considered category frequencies when presenting and discussing findings. On the other hand, our study is also constructivist in that interviews were partly adjusted to each participant's unique experiences and the interviewer-interviewee interaction, researchers shared reflections on their feelings and experiences after each interview, and participants' narratives and voices prevailed over frequencies when reporting and discussing results.
Participants and Procedure
Fourteen participants, seven men and seven women, aged 29-58 (M = 40.36, SD = 8.84), who were engaged in an involuntary career change because of unemployment (N = 6) or health issues (N = 8), took part in the study. As indicated in Table 1, the occupations in which they had previously worked were mainly in the service sector (except for one carpenter) and covered a variety of occupational fields (e.g., security, health, transportation, and sales) and educational requirements, ranging from vocational training (e.g., hairdresser) to university (e.g., surgeon). Eight participants were of Swiss origin, two were binational, and four came from other European countries. Inclusion criteria required having begun a career change recently and involuntarily because of either physical or mental health problems or lack of employment opportunities in their occupational sector. The recruitment procedure began with presenting the project to public and parapublic institutions in the cantons of Vaud, Genève, and Neuchâtel (French-speaking regions of Switzerland) whose mission is to coach adults forced to change career and reintegrate them into the labor market. We then asked the collaborators of the 13 institutions that had accepted participation in the study (mainly job coaches and career counselors) to invite users who matched the inclusion criteria to participate in a research interview. The contact details of the interested users were referred to the researchers, who contacted them by e-mail to set up an interview. All interviews were conducted remotely using Zoom software, which is considered appropriate for this type of data collection (Archibald et al., 2019). Interviews lasted between 34 and 146 minutes (M = 92); they were recorded in full and transcribed, and they were carried out by three researchers: a professor, a Ph.D. student, and a final-year master's student in vocational psychology and career counseling. Since the interviews could sometimes generate strong emotions, at the end of each interview the interviewer took a moment to acknowledge these emotions and restore, if necessary, the initial emotional state. This was facilitated by the fact that all the interviewers are trained in counseling psychology. Participants who needed additional support were referred to appropriate services. Upon completion of each interview, the interviewer wrote a note summarizing the interviewee's situation and their main challenges and proposed a self-analysis.
Participation in the study was voluntary and was not rewarded. The interview itself was considered an incentive to participate in the research as it could provide participants with a valuable moment to reflect on their life and career paths and plans. The research was carried out in line with the American Psychological Association's ethical standards and the study procedure was approved by the ethics committee of the
Interview Guideline
The interview guideline was divided into seven sections. The first section gathered participants' sociodemographic information. In the second section, interviewees were asked to describe their career paths (e.g., "Can you tell me what jobs you have held throughout your career, and how long they lasted?"). The third section focused on the process and experience of career change (e.g., "Can you tell me what brought you to this point?", "What are the reasons for your career change?", "How do you feel about changing professions/leaving your profession/entering a new profession?"). The fourth interview section investigated personal, social, and vocational identity processes (e.g., "How does the career change also change how you see yourself as a person?", "How do you talk about this situation to people around you?", "Try to project yourself into the distant future, for example, 10 years from now. Who do you think you will be at that moment professionally?"). Personal and social resources and barriers were explored in the fifth section (e.g., "What resources facilitate your coping in facing the challenges of a career change?" and "What is standing in your way?"). Finally, Sections 6 and 7 covered participants' relationship to work and, when pertinent, to training (e.g., "How has your career change affected the importance you attach to work in your life and the meanings you attribute to it, if at all?").
The complete guideline is provided in the Appendix. Interview questions were formulated ad hoc for a broader research project that aims to understand what characterizes involuntary career change processes through the concepts of career shock (Akkermans et al., 2018), identity work (Kulkarni, 2020), and relationship to work (Fournier et al., 2020) and training (Kaddouri, 2011). For the present study, we essentially focused on participants' discourse related to the third, fourth, and fifth sections.
Analyses
Data analysis was carried out by two researchers, a professor and a junior researcher in vocational psychology and career counseling. According to Morrow (2005), the researchers' "horizons of understanding" (p. 250) influence reflexive processes during the analysis stage and have to be made explicit through a researcher-as-instrument statement. The professor has experienced several career and geographical transitions but has never gone through an authentic career change. The junior researcher has experienced a voluntary career change from teacher to career counselor. Having both been trained as career counselors, the two researchers are familiar with the realities of people who face a career change. A third researcher, a senior researcher in the same field, was external auditor. His role was to provide a critical perspective on the analyses and the study as a whole, to help find consensus when needed, and to prevent possible power issues. Analyses were performed with the software MAXQDA, and the six-step procedure for reflexive thematic analysis suggested by Braun and Clarke (2006) was followed.
Familiarizing With the Data
One researcher went through the transcriptions and identified all the interview passages where participants referred to relational aspects of their career change experience or indicated the presence of others in the career change process. The selected passages were divided into two parts and each researcher carefully read one of the two parts. They then met to generate initial ideas about interesting aspects to be further analyzed. The researchers reached a consensus and organized the data analysis around three structuring themes: the type of influence others have on the career change process and experience (in line with the second specific research question); the sources of relational influences (in line with the first specific research question); and the moment when the relational influences took place (in line with the temporal dimension addressed in the theoretical framework, see Kenny et al., 2018).
Generating Initial Codes
The two researchers interchanged their parts and each researcher coded half of the retained transcriptions. The codes were data-driven, and each code was classified according to the three structuring themes. The two researchers then met a second time to share and compare their respective initial codes.
Searching for Themes
During the same meeting, the researchers sorted their codes and collated them into overarching themes. Concretely, they gathered codes referring to the same structuring theme and then combined and grouped them into themes. The following themes were identified: positive, negative, and ambivalent influences were the themes within the structuring theme "type of influence"; personal environment, work environment, institutional influences, and societal influences were the themes within the structuring theme "source of influence"; and past, present, and anticipated influences were the themes within the structuring theme "moment of influence."
Reviewing Themes
During a third meeting, the two researchers reviewed the themes to ensure they accurately reflected the data set and satisfactorily covered the study objectives. Two modifications were implemented at this stage. First, within the source of influence structuring theme, we merged the themes of institutional and societal influences because coding these two sources separately was sometimes impossible. Second, we divided the moment of influence structuring theme into four temporal themes, namely "leaving" (the former occupation), "shifting" (the change process), "exploring" (planning a new career), and "implementing" (the new career plan). This separation seemed to us to be better aligned with the actual stages of career change reported by our participants. A fourth meeting was then organized to identify subthemes consensually. To provide a holistic and temporal perspective on our results, we crossed all the themes as well as compared and collated the codes within each of these intersections. For example, subthemes were created to describe more precisely positive influences coming from the personal environment and occurring when participants were leaving the former occupation, and so on. At this stage, some codes were removed because they did not refer to the study objectives or were exclusive to a single participant, and maps of the structuring themes, themes, and subthemes were designed and discussed.
Defining and Naming Themes
One researcher defined and described all the structuring themes, themes, and subthemes and returned to the data to identify examples of quotes that illustrate each subtheme. These definitions, descriptions, and quotes were revised and validated by the second researcher with the goal of identifying compelling labels and relevant, informative excerpts. Final minor adjustments were implemented on the subthemes. The structuring themes and themes are illustrated in Figure 1, whereas the "Results" section addresses each subtheme.
Producing the Report
This final step consisted of transforming the findings from a multidimensional configuration into a linear narrative. We hierarchized the structuring themes and organized the "Results" section according to our research goals, prioritizing our understanding of how others influence the process and experience of involuntary career changes, and of whether they facilitate, hinder, or bring ambivalence to the career change. Thus, findings were organized first around the type of influence structuring theme, then around the source of influence structuring theme, and finally around the moment structuring theme.
Trustworthiness
In line with Morrow's (2005) suggestion concerning trustworthiness criteria for postpositivist-constructivist qualitative research, we ensured our study credibility, transferability, dependability, and confirmability. Reflexive notetaking right after each interview and the use of the shared perspectives of two researchers enhanced the credibility of the analyses. The explanation of the context and the limits of the study (proposed in the "Introduction" section and at the end of the discussion, respectively) enable us to estimate the transferability of our results. Finally, the detailed description of the analysis procedure, the clarification of the two researchers' horizons of understanding through a researcher-as-instrument statement, and the contribution of an external auditor should ensure the dependability and confirmability of the study.
RESULTS
We identified three structuring themes in the initial analysis steps (Figure 1). The first structuring theme was the type of influence that others had on participants, which we divided into three themes: positive influences, referring to others perceived as resources and facilitators of the career change process; negative influences, indicating others perceived as obstacles or constraints; and ambivalent influences, referring to others perceived either as both a resource and an obstacle or as having an unclear, ambiguous effect on the career change experience. The second structuring theme was the source of influence, which we divided into three themes: the personal environment, covering influences from family members and friends; the work environment, referring to influences from the professional context, such as colleagues and other members of participants' companies; and institutional influences, indicating influences from the institutions with which participants were in contact. This category also included people from the broader society in which participants evolved. The third structuring theme was the moment when the influence manifested, or the phase within the career change process. We identified four key moments, yielding four additional themes: leaving, the period when participants learned that they needed to change careers while still in their first occupation; shifting, referring to the in-between phase in which they dealt with the challenges of transitioning "from a before to an after"; exploring, when participants focused on and were concerned with their future career plans; and implementing, which covers the concrete preparation and implementation of their career changes to new occupations.
In the following sections, we present the subthemes that emerged within these themes. We first divided the subthemes according to the type of influence. Within each type of influence, we then organized them according to the source of influence.
Table 2. Positive relational influences (numbers refer to frequencies of participants). Personal environment: … (5); socioemotional support and role models (5). Work environment: discovery of a new career option (3). Institutions: acknowledgment of the issue (4); instrumental support (money and time) (4); professionals who make the person feel unique (4); psychological support (3); help with career decision strategies (6); support for professional integration (6).
Finally, for each source of influence, we present subthemes chronologically (i.e., according to the moment when they occur).
Positive Relational Influences
As indicated in Table 2, 10 subthemes allowed us to qualify positive relational influences on participants' career change experiences.
Positive Influences From the Personal Environment
Three subthemes covered positive influences coming from the personal environment. Participants stressed the importance of both instrumental and socioemotional support during the period when they prepared to leave their former occupations. For example, William, 41, a former flight coordinator who lost his job, mentioned socioemotional support: There are also moments that are lower, more difficult. And then, as we said, you have to share with your friends, you have to talk about it, you have to discuss it; it's very important.
[. . .] It starts again and then the positive comes back, there is the motivation to move forward. Support from friends and family was also highlighted during the exploring period, again taking the form of socioemotional help (e.g., encouragement) but also of role models. Frédéric, 29, a former carpenter suffering from disabling back pain, talked about a friend who had gone through a career change and inspired him: Most of my friends, they all studied, except for one who also did a career change. He used to be an optician, then he said that it wasn't really his values and he decided to change, and now he's in his last year of occupational therapy. [. . .] I've always admired him for his career change, his desires, his values. And it's true that I decided-given that this year wasn't great and all, and that I had the opportunity to be able to do it-I said to myself, "Well, this year I'm the one who decides to change jobs and change professions, make this career change."
Positive Influences From the Work Environment
Participants reported one positive influence on their career changes coming from the company they were about to leave. They were indeed able to identify new career options (e.g., an appealing new occupation or an integration opportunity) while still in the former job. Stating that allergies prevented him from continuing to work as a hairdresser, Kevin, 29, identified the human resources sector as a promising career alternative. Before leaving his job, he had already decided to enroll in a training program in this field: I said to myself: "OK, what could suit me?" And I remembered that [. . .] my boss was really, let's say, artistic director, but on the other hand the management of the hairdressing salon and the employees was a bit handed to me. So, if we had to hire an apprentice, I was the one who had to do the necessary steps for the recruitment, for his training, to manage the holidays of my employees.
Positive Influences From Institutions
Institutional influences operated as resources throughout the four career change moments and manifested in six subthemes. Some participants appreciated that their health issues were already acknowledged during their former work experiences, which facilitated the process of leaving. Frédéric was able to access public disability insurance before leaving his job, which helped him look forward to the future: Now, I'm still employed at my old company, so I'm on sick leave. The disability insurance accepted that I do an early career change. Because all my medical files noted that I could do it, and now I'm looking for a new apprenticeship, and as soon as I start, the disability insurance will support me financially.
Three additional forms of institutional supports were mentioned during the shifting phase. First, some participants stressed the importance of the instrumental support they accessed from state disability or unemployment insurance, both in terms of financial resources and time to reflect on and cope with the career change. Second, interviewees mentioned that the professionals they interacted with had been able to make them feel unique and step away from the profiteering image that is often ascribed to beneficiaries of public disability or (3) Loneliness (3) Silence (6) Tensions and imbalances (4) Family strains (4) Work environment Disrespectful employers (8) Labor market prejudices (7) Labor market rigidity (4) Difficulties in networking (4) Institutions Administrative slowness (4) Rigidity and constraints (4) Inconsistent and inadequate support (5) Inappropriate adult education programs (9) Numbers refer to frequencies of participants.
Third, other participants benefited from psychological help. Beatriz, 29, a former saleswoman, clearly valued the fact that her counselor made her feel special in terms of motivation: My counselor is awesome. I mean, he's seen me as a different person. Because people who go to the disability insurance, in general they don't give themselves as many means, they don't do as much research to be able to get out of it. Whereas he saw in me a really optimistic side, saying to myself, "today I can't, but tomorrow I can." And for him, it was. . . he told me, "I have rarely met people like that, in fact-people who have the desire and then fight to achieve something." During the exploration phase, participants mentioned institutional support as a significant aid for strategic career decision-making. Recalling how he had made career choices as a teenager, Jean, 31, a former money courier, appreciated the career decision-making process initiated by his public disability insurance counselor: What I think is cool is that when you're at school, when you have a career counselor who says, "What do you want to do now?" I think for 90% of the kids, I saw with my buddies, we were not interested in the thing [. . .] While now, when we elaborate plans and everything, [. . .] I am a lot more mature, saying "Well, I have to do something I like; I still have X years to go, if not more, because the laws might change; I have to have a decent salary." [. . .] I think I'm even luckier because I'm facing this situation of career choice, but with maturity and with more valid criteria than "I want to go party on weekends." Finally, interviewees emphasized the support received during the implementation phase, which took the form of concrete internship opportunities and training courses leading to formal qualifications. Véronique, 44, a former hairdresser facing a chronic illness, appreciated that a professional offered to give her access to his network to help her find an internship position: I'm going to be evaluated next week, in relation to the first month that's gone by. I think my rate will increase slightly, and the goal is to continue this dynamic a little bit, and then I'm going to be followed up by a vocational rehabilitation coordinator who will, through his network, help me find an internship in a company.
Negative Relational Influences
Negative relational influences cover all the moments of change and all the sources of influence. We divided these influences into 13 subthemes (see Table 3).
Negative Influences From the Personal Environment
Misunderstandings and judgments during the leaving phase were the first form of negative influence coming from the personal environment. In this subtheme, participants reported that family and friends did not really recognize what they were going through and may have implicitly or explicitly criticized their inability to overcome their difficulties. A second negative influence occurring during the leaving phase was more indirect: It referred to the absence of potentially helpful others within the personal environment, leading to feelings of loneliness. Rather than an active negative influence, some participants characterized parts of their career change experience in terms of the unavailability of potential support from loved ones and their close entourage. Giuliana, 47, a former surgeon, shared her loneliness as follows: I am alone here: no family. I have nothing, nothing, nothing, nobody. So, my only family are my colleagues, my patients, who meanwhile call me all the time: "When are you coming back?" A more intentional form of loneliness characterized the shifting phase: Some participants decided to remain silent and isolated themselves to avoid sharing what they were living through. Henry, 58, a former humanitarian coordinator, adopted the strategy of avoiding discussion of his unemployment situation with his entourage. When asked whom he was talking to about this situation, it took him a long time before answering, I don't talk that much about [my career change] to be honest. It's kind of something I'm keeping to myself, for now. I'll see how it turns out and to what extent we'll have to get into it.
The exploration phase went hand in hand with tensions and imbalances in the private sphere, including intimate relationships. Kevin, for example, found it humiliating to earn less money and be unable to contribute equally to his activities with his partner: When you go from a 100% salary to an 80% salary. . . well, there you go [. . .]. That's pretty hard, in the sense that you really feel like you're a bit of a burden-well, not a burden, because I still pay the same bills [. . .]. I really wanted to keep the same bills to tell myself that I don't want to be a burden for the person I live with. But of course, the weight is still there [. . .]. I don't know, but when we want to go on holidays, I can't necessarily put money aside like he does. So, if tomorrow he says to me, "OK we're going on vacation, how much do you have set aside?" [stifled laughter] well, we won't go very far. So, maybe I'll feel like a drag. I may not be able to afford to go to a restaurant, to offer to eat out [. . .]. But it's true that there's still a weight on my shoulders. I think that I would have lived better if I lived alone. [. . .] Maybe I also have a problem with being dependent on others, like being dependent on disability insurance or being dependent on unemployment or social assistance. It's hard, you know.
Finally, the implementation phase was sometimes inhibited by family strains. In particular, family duties could prevent professional mobility and flexibility, or reduce financial resources. This was the case for Gabriel, 29, a former carpenter suffering from disabling back pain, who could not afford to be without a salary to attend an appropriate training course because he had to support his two children: [The training programs I consider] are not big trainings, so it's okay. I couldn't leave, I don't think I would have gone on a big 2- or 3-year course. It would have been. . . and then the family situation doesn't really allow for that either. With a child at home, there's a second one on the way, so it's not. . .
Negative Influences From the Work Environment
Negative influences coming from participants' work environments manifested at the beginning and at the end of the career change process. During the leaving phase, some complained about disrespectful former employers, who did not understand or consider their needs and rights. For example, Anna, 49, a former nurse, was forced to return to work while recovering from wrist surgery: After the accident, I was pressured to go back [to work] and then it was almost mobbing [. . .] Psychologically it was very, very, very, very hard. In the end I was even forced to contact a lawyer, because I was pushed up to the director. He proposed to me-I still had wires in my wrist after the second operation when they removed the pin-he wanted me to move on foot for the follow-up of the students, to do small works during the rehabilitation. And then the director, I explained that it's too soon to ask that, and he said, "I'm willing to talk with your doctor." They did, they tried to mob me.
While implementing their career plans, participants encountered three forms of negative influence coming from the work environment they tried to integrate: Half of them had to cope with labor market prejudices toward career changers; some faced rigidities in the labor market; and others struggled to create and enter professional networks, which is a hindrance to accessing interesting positions. Former library data manager Nancy, 41, mentioned experiencing a combination of prejudices associated with her gender, age, and origin: I think it's very difficult [to find a job], especially in Switzerland, or maybe it's because I'm a woman, because of my age, because I'm a migrant, I don't know. . . Because when you change [careers], you have to start again and maybe employers have to take a chance. But for me, with my experience, there are not many opportunities to change.
Negative Influences From Institutions
Two forms of institutional barriers were pointed out during the shifting phase. First, some participants stated that administrative procedures slowed down the change process, as if the institutional pace was too slow for their own pace of change. Second, participants suffered from rigid, constraining, rule-imposing institutions that made them feel out of control of their career changes. Beatriz stated the following concerning the rules of the public disability insurance: You have to tell everything: if you fart, you have to justify why. You want to take a vacation? You have to say, "I'm going on vacation from such and such a date to such and such a date." You have to say, "I'm sick," you have to say, "I was sick on such and such a day," and after a while, that's it. You get your salary on the third of the month, while everyone else gets their salary on the 25th. So yeah, it's full of little constraints like that, which I don't like. So, disability insurance is not for me.
During the exploration phase, institutional support was sometimes considered inconsistent or inadequate. For example, Giuliana was initially forced to abandon her plans to switch from being a surgeon to working as a medical diagnostician. After she had been forced to close her practice, the public disability insurance nevertheless allowed her to continue consultations, contradicting its initial decision: After the [disability insurance's] medical advisor waited 3 months to answer, and then he received the report of the specialist-the hand surgeon-well, he told me, "Now you can continue with the consultations." Well done.
Table 4. Ambivalent relational influences (numbers refer to frequencies of participants). Personal environment: benevolent but inappropriate support (7). Work environment: ambivalent exit from former job (3). Institutions: appreciated help, but rigidity and feared dependency (7); partly imposed, partly chosen career plan (6).
Ambivalent Relational Influences
Four subthemes reflected the ways in which some influences were experienced as ambivalent (Table 4).
Ambivalent Influences From the Personal Environment
Ambivalences in the personal environment appeared during the shifting phase. Half of the participants reported that their entourage was supportive in benevolent ways, but this support was often inappropriate, clumsy, and led to misunderstandings. Marie, 42, who lost her job as a bookseller, stated that her friends' suggestions for her career change were inaccurate and inappropriate, because they had an incorrect image of her skills and contrasting views on the future of her employment sector: Sometimes they make projections about things that I should be doing and I don't feel up to it. For example, for me it's quite heavy, this story of having done a master's degree gives the impression that I have a great qualification. But in fact, I feel rather handicapped by that, because I was a rather weak student [laughs]. In general, they offer their support, they encourage, they sympathize and then when they have ideas, they share them, but there is no. . . well, their opinions on the future of bookselling. . . Sometimes there are people who say, "forget it, there's no chance."
Ambivalent Influences From the Work Environment
Within the work environment, participants mentioned ambivalence when they had to leave their former jobs. For example, Kevin's former boss did not want to accept that a high-performing and appreciated employee like him had no choice but to leave: My boss came back, and then I said, "I'm going to start really going to a doctor to find out what's causing this [allergy], because right now I can't stand it." So, I really warned him, but did he really get the idea that I was going to quit overnight? I don't know, I think there was a bit of a denial. I was kind of [. . .], the good collaborator.
Ambivalent Influences From Institutions
Institutional influences manifested in ambivalent ways both during the shifting process and when it came to implementing a new plan. During the shifting phase, half of the participants noted that although they cherished being helped, they were afraid of rigidity or of becoming too dependent on these supports. William, for instance, pointed out the rigidity and unsuitability of institutional help when changing occupations, while at the same time appreciating the networking opportunities it had given him. Participants also stated that they encountered ambivalent institutional influences when trying to implement their career changes. For some, the new career plan was neither totally freely chosen nor completely imposed by the institutions supporting them. Jean, for example, learned belatedly that he could be financially supported for new training, but also stated that this support was not unrestricted: At the beginning I had a lot of questions. I didn't always know what I could afford to do. The misdirection was related to this lack of information, because basically I didn't know what I was entitled to from the beginning. Depending on the income you have [i.e., the former job's pay], [the public disability insurance] unlocks substantial means. Unfortunately, this is the information that I had to go and get [. . .]. So, my choice is wider too. That's why I limited myself. I didn't feel limited, it was reasonable. It's not like you can pretend to go to university, to take another high school diploma, etc.
DISCUSSION
The general aim of this study was to understand how the social environment influences the process and experience of involuntary career changes. We split this into two specific objectives: (1) identifying the sources of relational influences and (2) understanding how these relational influences affect the process and experience of involuntary career changes. Addressing these objectives is critical because research shows that work transition processes are shaped by complex, yet underexplored, relational influences (e.g., Motulsky, 2010), and because recent theoretical perspectives stress that relationships profoundly shape career paths (Kenny et al., 2018).
A first general observation that emerged from our findings is that relational influences on career change processes were highly heterogeneous and differentiated, underlining the inherent complexity of relational influences on careers (Kenny et al., 2018). On the one hand, we identified a wide range of subthemes (i.e., 10 positive, 13 negative, and five ambivalent influences) covering diverse sources of relational influences at distinct moments in the transitional process. This finding provides empirical support for the need to move beyond a static and dualistic view of contextual influences on careers, which consists of splitting them into supports and barriers or into positive and negative influences (see Sheu and Bordon, 2017). Instead, relational influences take multiple forms, are fluid, and sometimes have mixed effects.
On the other hand, only three out of 28 subthemes concerned more than half of the participants; namely, emotional support from the personal environment and employers' disrespect during the leaving phase (nine and eight participants, respectively) and inappropriate adult education programs during the implementation phase (eight participants). The sources and types of relational influences, then, seemed to depend on each participant's specific context and unique work and life path.
We can provide several more targeted observations related to our specific study goals. In the following sections, we discuss these observations in detail, highlight their importance in understanding the career change process, and underline some limitations and practical implications stemming from our research.
Who Others Are
Our first specific research goal was to identify the sources of relational influences on involuntary career change processes and experiences. We identified three sources of influence: the personal environment (including family members and close relationships), the work environment (including former and current employers and companies), and institutions participants interacted with to obtain support for their career changes (mainly the disability and unemployment insurance bureaus). Several cross-cutting observations allow us to discuss each of these sources.
The Alternating Influences of Personal Environment
Influences from family, friends, and partners tended to alternate between positive and negative. During the shifting and implementation phases, the personal environment had a predominantly negative impact on participants, leading career changers to prefer not sharing their change experiences and to restrain their projects. In contrast, during the leaving and exploring phases, family and friends both positively and negatively affected career changers. Positive influences mainly took the form of emotional support, while negative influences manifested through diverse sorts of relational stress due to misunderstandings, imbalances, tensions, or negative judgments from close relatives. The effect of the personal environment might then vary depending on the moment when others intervene. Such a finding stresses the importance of taking into account the temporal and processual dimensions of career changes (Kenny et al., 2018). It also indicates that, although close relatives might be supportive when people learn they have to change careers and when they reflect on the direction this change may take, close relatives might be less helpful during the in-between phase and when it comes to concretely implementing a new career plan. Beyond their temporal nature, alternating influences from the personal environment also highlighted the complex, sometimes ambivalent, and intricate web of interpersonal relations. Adding to what has been shown by most existing research (e.g., Blustein et al., 2013), our results suggest that influences from the personal environment might include a "dark side," which calls for the benefits of environmental supports to be put into perspective. Overall, our results confirmed that people's life domains are highly interrelated. Working experiences and career decisions partly depend on-and influence-what people experience in their personal life spheres (Kenny et al., 2018; McMahon and Patton, 2019).
Hostile Work Environments
Influences from the work environment operated mainly at the beginning and at the end of the career change process; that is, when participants learned that they had to quit their former jobs and prepare for change (i.e., the leaving phase) and when they strived to move to a new occupation (i.e., the implementation phase). These influences were predominantly negative, whether because former employers complicated the transition and prevented a smooth exit, or because several types of labor market obstacles (e.g., rigidities and prejudices) threatened the new career plan. Within our study's context, this result could indicate that the labor market-particularly, some companies and employers-was either hostile toward career changers or not ready to consider their specific challenges and needs. Combined with the observation that national statistics on this issue are imprecise and rudimentary (Office Fédéral de la Statistique [OFS], 2020), our results may reflect a general lack of knowledge of and familiarity with career changes in the Swiss context.
The Critical Role of Institutions
Institutions deeply affected participants' career change processes and experiences. Their prevailing influence took multiple forms during the shifting stage (i.e., through seven distinct subthemes), which is not surprising given that this is the stage at which institutions are most called upon to support workers' transitions. Institutional influences were both positive and negative. Interestingly, recognition emerged as a critical positive element. Some participants valued the fact that professionals acknowledged the complexity of their situations and the difficulties they faced. This finding might underscore the importance of recognizing the emotional and social challenges encountered by people who struggle with their careers, beyond the weight of instrumental (e.g., financial) support. It is thus consistent with previous research outlining the crucial role of social recognition in careers, such as Blustein et al.'s (2013) study, which showed that the experience of unemployment is less detrimental when people benefit from emotional support. Conversely, Howes and Goodman-Delahunty (2014) showed that a lack of recognition can lead to a decision to change occupation. However, several negative effects of institutions were also pointed out, broadly stressing that institutional supports were often considered inappropriate. This inappropriateness was mainly the result of a mismatch between participants' needs and the support actually provided by the institutions. For example, when flexibility was requested, rules were rigid; when quick action was expected, procedures were slow; and when freedom of choice and action would have been beneficial, participants were faced with constraints.
What Others Do
Our second specific goal was to understand how these sources of relational influences affected the involuntary career change process. Our results indicated that career changes can be considered both as a highly social and a rather lonesome experience, and that ambivalent relational influences deserve particular attention.
Career Change as a Social Experience
The career change experiences participants shared were socially shaped. Others constantly gravitated around them and influenced how participants approached and experienced the change process. On the one hand, positive relational influences were perceived at each stage of the career change process and took multiple forms. No single type or source of influence was prominent, which confirms the uniqueness of every single career change experience and life course. On the other hand, the same applies to negative relational influences, with one noteworthy exception: as mentioned in the previous section, the inappropriateness of institutional supports pervaded the narratives of several participants and discouraged them from benefiting as much as they might have otherwise. These results confirmed the relevance of the first tenet of the relational perspective on work and careers (Kenny et al., 2018), which suggests that working is a fundamentally relational experience and that relationships can have both positive and negative effects on people's working lives. In contrast with this tenet, however, our study showed that negative relational influences were not limited to the workplace but could spread beyond the work setting to include institutions and the private sphere. These findings also corroborated the results of past research on career transitions, such as Motulsky (2010), who showed that relational processes at diverse levels can both enhance and hinder midlife women's career transitions.
Career Change as a Lonesome Experience
Although career change experiences were deeply relationally embedded, they were not necessarily socially shared by career changers. Indeed, the change process was also characterized by moments of isolation and withdrawal, during which people managed, or wanted to manage, some of the challenges of career change on their own. Three forms of loneliness resulted from our analyses, which we refer to as unintentional, deliberate, and experiential. Unintentional loneliness refers to moments when the absence of potentially helpful relationships became a burden-for example, a lack of networking or of emotional support. In contrast, at other moments loneliness was deliberate: participants purposely decided to isolate themselves and not share their difficulties in order to feel normal and not be perceived as needy, possibly in an attempt to protect their self-esteem.
Finally, experiential loneliness resulted from the potential effects of relational barriers on participants. Not feeling understood by family and friends, perceiving resistance in the labor market, and witnessing the rigidity of transition and education programs may have implicitly signaled to them that they were alone in their career change journeys. This experiential loneliness mirrors the social disconnectedness experienced by unemployed people (Blustein et al., 2013). Overall, these results confirmed that "relational influences can, at times, present considerable challenges to individuals negotiating work-based tasks" (Blustein, 2011, p. 9).
Multilevel, Multiform Ambivalence
Our findings indicated that ambivalence was displayed at all levels of relational influences, whether through caring but awkward relatives, rigid institutions with supportive professionals, or companies that participants both regretted working for and were happy to leave. In addition to the specific and situated ambivalent situations highlighted in our analyses, it seems legitimate to assume the existence of another form of ambivalence associated with the processual and temporal dimension of career changes. Temporal ambivalence would thus add to situated ambivalence to indicate that relational influences can fluctuate throughout the change process: The same relationship may have positive influences at one point in the process but become marginal or even negative at another point. This appeared to be the case, for example, with close friends and family, who were typically mentioned as important sources of emotional support during the leaving phase but were ignored or avoided during the shifting phase. Overall, the incidence of multilevel and multiform ambivalent relational influences tended to emphasize the complexity and mutability of relational processes. These findings echo Motulsky's (2010) results about women in career transitions, stating that women's relationships "included both connections and disconnections within the same person" (p. 1100). Overall, our findings complement and expand upon previous research and theories of career development (e.g., Blustein, 2011;Sheu and Bordon, 2017), which have portrayed a polarized and rather static understanding of others' influences on careers.
Limitations and Perspectives
Our study had four limitations, leading to some research perspectives. First, we opted for a horizontal analysis, searching for common themes and subthemes across participants (Braun and Clarke, 2006). This implies omitting the biographical, within-person dimension of career change experiences. As we observed that each experience is unique and that relational influences seemed to involve a temporal dimension, future research should focus on the longitudinal nature of career changes. It would thus be relevant to implement a qualitative longitudinal design that combines within- and between-case analyses (Neale, 2021).
Second, our recruitment procedure involved reaching out to institutions that support people in transition. It is therefore likely that the participants' accounts focused on the role played by these institutions, which could in part explain the prominence of this source of relational influence in our findings. The recruitment procedure also prevented us from accessing the experiences of career changers who were not institutionally supported and who might therefore have had to cope with even more marginalizing transitional challenges. Future research could then implement alternative sampling strategies to access less institutionalized involuntary career change experiences.
Third, at the time of the interviews, not all participants were necessarily at the same point in their career change processes, which may have shaped their narratives of their transitional experiences. For example, the retrospective view of the change process might have been more positive for those who already had a specific, achievable project when interviewed. In contrast, more pessimistic views might have been expressed by participants who had not yet identified meaningful and reachable career opportunities. Again, qualitative longitudinal studies could be pertinent to consider this aspect.
Fourth, we did not clearly detect distal relational influences on career change experiences (e.g., opportunity structures, or macrocontextual, cultural, or societal effects; see for example Sheu and Bordon, 2017), probably because these influences are less tangible and more difficult for participants to recognize. Nevertheless, as suggested in the fifth tenet of the relational perspective on work and careers (Kenny et al., 2018), distal influences are not to be neglected, and we observed hints of these types of effects in some subthemes. For example, labor market prejudices and rigidities, as well as participants' choice to keep silent in order to feel normal, suggested the existence of cultural and societal forces detrimental to the transition experience. However, in this study, these influences remained speculative; further studies could address them in a more targeted manner.
Practical Implications
"Uncertainty also carries the potential to augment the need for affiliation and sensitivity to relatedness" (Flum, 2015, p. 147). Because involuntary career changes involve a high degree of uncertainty and create major discontinuities in people's lives, nurturing the relationships of those who experience them becomes critical. Based on our results, this relationship-building work can be implemented at the labor market, institutional, and personal levels. At the labor market level, it would be relevant to raise awareness among labor market stakeholders about the specific needs and issues of involuntary career changers-for example, focusing on the fact they have to cope with undesired events in their lives. Interventions targeting employers and companies could also inform about and advocate for respecting workers' rights and duties. At the institutional level, professionals supporting people who are forced to change careers should be sensitized to the need of implementing adaptable interventions that suit their challenges in terms of rhythms and specific needs, among other factors. Another recommendation at this level would be to foster tertiary and vocational trainings tailored to an adult population, with special consideration for possible family and financial constraints. At the individual level, it seems crucial to avoid worsening career changers' loneliness and silence by identifying adequate supports. Interventions should help career changers recognize who is best able to help, how, and when. Members of the personal environment could be involved in these interventions to help them become effective helpers instead of benevolent barriers.
CONCLUSION
Our study showed that involuntary career changes are deeply shaped by relational influences. These influences are multifaceted, sometimes manifesting as resources that support the process of change and in other cases as barriers that hinder it. Between these two poles, other influences are rather ambivalent, meaning that they can be both appreciated and avoided, or that their effects are ambiguous. Moreover, a temporal dimension must be considered when trying to understand relational influences. Influences can come at the right time or at the wrong time, and their effects can fluctuate depending on the transitional phase a person is in. Among the sources of relational influences, institutional influences are omnipresent and have the power to facilitate or constrain career change processes, depending on how appropriate institutionalized programs are to the specific and situated needs of each person. Ultimately, while involuntary career changes constitute an eminently social experience, they are also marked and framed by moments of loneliness. These moments reflect an inadequacy of available supports, but also a sense of shame that individuals may feel about taking advantage of them or being labeled as career changers. As a result, it seems imperative both to identify and strengthen resourceful others and to better identify and grasp situations of loneliness. Indeed, these situations can prevent people from taking advantage of possible supports; make the experience of change more insecure, if not traumatic; and compromise the success of the career change process.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because they are subject to restrictions imposed by the funding institution and ethics committee. Requests to access the datasets should be directed to JM, jonas.masdonati@unil.ch.
ETHICS STATEMENT
The present study was reviewed and approved by the Ethics Committee of the Faculty of Social and Political Sciences of the University of Lausanne (project number C_SSP_052021_00003). The participants provided their written informed consent to participate in this study.
A Comprehensive Analysis of the Fowleria variegata (Valenciennes, 1832) Mitochondrial Genome and Its Phylogenetic Implications within the Family Apogonidae
Controversies surrounding the phylogenetic relationships within the family Apogonidae have persisted due to the limited molecular data, obscuring the evolution of these diverse tropical marine fishes. This study presents the first complete mitochondrial genome of Fowleria variegata, a previously unrepresented genus, using high-throughput Illumina sequencing. Through a comparative mitogenomic analysis, F. variegata was shown to exhibit a typical genome architecture and composition, including 13 protein-coding, 22 tRNA, and 2 rRNA genes and a control region, consistent with studies of other Apogonidae species. Nearly all protein-coding genes started with ATG, while stop codons TAA/TAG/T were observed, along with evidence of strong functional constraints imposed via purifying selection. Phylogenetic reconstruction based on maximum likelihood and Bayesian approaches provided robust evidence that F. variegata forms a basal lineage closely related to P. trimaculatus within Apogonidae, offering novel perspectives on the molecular evolution of this family. By generating new mitogenomic resources and evolutionary insights, this study makes important headway in elucidating the phylogenetic relationships and mitogenomic characteristics of Apogonidae fishes. The findings provide critical groundwork for future investigations into the drivers of diversification, speciation patterns, and adaptive radiation underlying the extensive ecological diversity and biological success of these marine fishes using phylogenomics and population genomics approaches.
Introduction
Fowleria variegata, commonly called the variegated cardinalfish, is a marine fish species belonging to the family Apogonidae [1,2]. F. variegata naturally occurs in tropical waters of the Indo-Pacific region, spanning the Red Sea, the eastern African coast, and the western Pacific Ocean. F. variegata is highly sought after owing to the vibrant and distinct coloration that makes it a popular choice for aquarium enthusiasts. F. variegata is characterized by an oval-shaped, laterally compressed body. It has a small mouth, a long snout, and a continuous dorsal fin that runs along its back [3]. The coloration of this species is its most striking feature, with a pattern of alternating bands of black, white, and yellow on its body. These vibrant colors not only serve as a form of camouflage, but also make F. variegata an attractive sight on coral reefs. In terms of habitat, F. variegata is primarily found in coral-rich areas, particularly around reef slopes and outer reef zones. It prefers shallow depths of up to 40 m, where it can easily access its main food source, coral polyps [4]. This species has a specialized diet, primarily feeding on small invertebrates and zooplankton, which it extracts from the coral using its small, beak-like mouth (Figure 1). The reproductive behavior of F. variegata involves pair bonding, in which a male and a female form a monogamous partnership. They engage in courtship displays and territorial behavior in order to establish their breeding grounds. The female lays a large number of pelagic eggs, which are fertilized by the male. The eggs are then released into the water column and left to fend for themselves. While F. variegata is not currently listed as endangered, it faces various threats from human activities and environmental changes. Habitat destruction, caused by factors such as coastal development, pollution, and coral bleaching, poses a significant risk to the species. Overfishing for the aquarium trade also impacts its population in certain areas [5]. As a result, conservation efforts are necessary to protect F. variegata and ensure the long-term viability of its habitat.
Mitochondria, present in almost all eukaryotic organisms, play vital roles in regulating energy metabolism, apoptosis, aging, and various diseases, establishing them as essential components within cells [6]. Mitochondrial DNA (mtDNA) is a valuable molecular marker for systematic studies. It is widely used due to its simple structure, rapid evolutionary rate, abundant copies, and ease of isolation. These characteristics make mtDNA a convenient and effective tool for investigating genetic relationships and phylogenetic patterns [7]. Mitochondrial genomes (mitogenomes) are pivotal in molecular biology research as they provide crucial insights into evolutionary relationships, population history, and genetic diversity [8]. They are extensively employed in species identification, classification, and phylogenetic analysis, enabling the revelation of species' phylogenetic relationships and aiding in the reconstruction of a genus's evolutionary tree [9]. Moreover, mitochondrial genomes facilitate the study of gene flow, migration patterns, and genetic diversity among species [10]. However, the absence of mitochondrial genome sequences for species belonging to the genus Fowleria creates a significant gap in molecular biology research. Scientists are unable to utilize mitochondrial genomes for species identification, classification, and evolutionary analysis, leading to an incomplete understanding of the phylogenetic relationships and population history within the genus. This deficiency may give rise to misconceptions regarding species relationships and confusion in the taxonomic positioning of the entire genus.
This study presents a comprehensive analysis of the mitochondrial genome (mitogenome) of F. variegata, a species belonging to the genus Fowleria. We successfully assembled the complete mitogenome of F. variegata using paired-end (PE 150) sequencing technology. This achievement not only enhances our understanding of the genetic composition of F. variegata, but also provides valuable insights into the phylogenetic relationships within the broader family Apogonidae. The mitogenomic data presented in this study represent a significant expansion of the existing knowledge of the family Apogonidae: this is the first complete mitogenome for any species in the genus Fowleria, providing a robust dataset that can be used to investigate the phylogeny of the family in greater detail. The availability of complete mitogenomes of Apogonidae species strengthens our ability to explore the evolutionary history and genetic diversity of this taxonomic group. The findings presented in this research article will lay the foundation for further studies on species of the genus Fowleria and contribute significantly to the broader field of fish phylogenetics. This work represents an advance in our understanding of the evolutionary history of Apogonidae and will help to shed light on the relationships within this diverse group of fishes.
Ethical Approval for Research Protocols
Animal handling and experimentation protocols adhered to the guidelines and regulations for laboratory animal care in China. The research protocols were approved by the institutional animal care and use committee in accordance with the ethical regulations for animal studies issued by the China Council on Animal Care.
Experimental Fish and Sampling
Genomic DNA was extracted from the collected F. variegata sample using the TIANamp Genomic DNA Kit (TIANGEN, Beijing, China), following the manufacturer's protocol. Approximately 0.2 µg of the extracted DNA was fragmented into ~350 bp pieces to generate overlapping short fragments suitable for sequencing. The sequencing library was constructed in accordance with the kit guidelines and involved fragment end-repair, adapter ligation, and PCR enrichment. The prepared library was sequenced on an Illumina NovaSeq 6000 platform, generating 6 Gb of short reads with substantial coverage of the F. variegata genome. The TIANamp kit is a reliable extraction method widely used in molecular biology research; following the standardized protocols ensured high-quality DNA extraction and library construction for optimal sequencing results.
F. variegata Mitogenome Assembly and Annotation
The F. variegata mitogenome was assembled using the GetOrganelle pipeline with default parameters, an approach that ensures accurate mitogenome assembly [11]. The pipeline extracted seed reads from the 'animal_mt' database to initiate assembly. After assembly completion, the short reads were aligned back to the mitogenome using BWA in order to evaluate coverage and validate the accuracy of the assembly [12]. Pilon was then utilized to further refine the assembly [13]. This step enhanced the accuracy and overall quality of the mitogenome assembly. The integrated use of BWA alignment and Pilon polishing played a key role in improving the assembly and reducing potential errors or inconsistencies.
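A workflow of this kind can be scripted end to end. The sketch below is illustrative only: the read file names, output paths, and the location of the draft assembly inside the GetOrganelle output directory are assumptions, and exact options should be taken from the cited tool documentation for the versions actually used.

```python
import subprocess

# Hypothetical input files; replace with the actual Illumina read pairs.
R1, R2 = "Fvariegata_R1.fq.gz", "Fvariegata_R2.fq.gz"

# 1. De novo mitogenome assembly with GetOrganelle, seeded from its
#    built-in animal mitochondrial reference set ('animal_mt').
subprocess.run([
    "get_organelle_from_reads.py",
    "-1", R1, "-2", R2,
    "-F", "animal_mt",           # target organelle type
    "-o", "getorganelle_out",    # output directory (assumed name)
], check=True)

# 2. Map the raw reads back to the draft with BWA-MEM and sort the
#    alignments, to evaluate coverage and support polishing.
draft = "getorganelle_out/mitogenome.fasta"  # assumed path to the draft
subprocess.run(["bwa", "index", draft], check=True)
with open("mapped.sam", "w") as sam:
    subprocess.run(["bwa", "mem", draft, R1, R2], stdout=sam, check=True)
subprocess.run(["samtools", "sort", "-o", "mapped.bam", "mapped.sam"], check=True)
subprocess.run(["samtools", "index", "mapped.bam"], check=True)

# 3. Polish the draft with Pilon using the paired-end alignments.
subprocess.run([
    "java", "-jar", "pilon.jar",
    "--genome", draft,
    "--frags", "mapped.bam",
    "--output", "mitogenome_polished",
], check=True)
```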
Following the assembly process, the F. variegata mitogenome was annotated to identify various genetic elements. The identification of protein-coding genes (PCGs) was carried out by comparing them to a reference mitogenome using MitoZ v3.4 [14]. This comparison helped to determine the presence and arrangement of these important genetic components. The annotated F. variegata mitogenome was generated using MITOS, a widely utilized tool for mitogenome annotation (http://mitos.bioinf.uni-leipzig.de/index.py, accessed on 3 September 2022) [14]. MITOS accurately identified the transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs) encoded in the mitogenome. Precise annotation of these functional RNAs is important as they have key roles in protein synthesis and mitochondrial function.
Circular maps of the F. variegata mitogenome were generated using OGDraw in order to visualize the organization and arrangement of genomic features (https://chlorobox.mpimp-golm.mpg.de/OGDraw.html, accessed on 3 September 2022) [15]. The maps illustrate the positions of all genes, tRNAs, and rRNAs in the mitogenome, providing a comprehensive overview of its structure.
Assessment of Sequence Properties
The nucleotide composition, codon usage, and relative synonymous codon usage (RSCU) of the F. variegata mitogenome were analyzed using CodonW [16]. This shed light on the nucleotide makeup and codon preferences of the mitogenome. Nucleotide diversity (Pi) and Ka/Ks ratios for the 13 mitochondrial protein-coding genes (PCGs) in Apogonidae were calculated using DnaSP in order to assess genetic variation patterns [17]. Sliding window analyses of the PCGs were also conducted in DnaSP using 100 bp windows with 25 bp steps in order to examine diversity within PCGs. Additionally, genetic distances were estimated using the Kimura two-parameter (K2P) model in MEGA in order to determine evolutionary relationships. Combining codon usage analysis, Pi, Ka/Ks ratios, and K2P distances enabled us to obtain comprehensive insights into the mitogenomic diversity and evolution of Apogonidae.
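For readers who want to see the quantities involved, the snippet below implements the K2P distance and a simple sliding-window nucleotide diversity (Pi) over an alignment, using the same 100 bp window and 25 bp step as above. It is a minimal, self-contained sketch of the underlying formulas, not a substitute for DnaSP or MEGA; gap handling and input parsing are deliberately simplified, and it assumes the sequences are already aligned and of equal length.

```python
from itertools import combinations
from math import log

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1: str, seq2: str) -> float:
    """Kimura two-parameter distance between two aligned sequences:
    d = -0.5*ln(1 - 2P - Q) - 0.25*ln(1 - 2Q),
    where P and Q are the transition and transversion proportions.
    (The logs are undefined for highly divergent pairs.)"""
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]  # skip gaps and ambiguities
    n = len(pairs)
    ts = sum(1 for a, b in pairs if a != b and
             ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    tv = sum(1 for a, b in pairs if a != b) - ts
    P, Q = ts / n, tv / n
    return -0.5 * log(1 - 2 * P - Q) - 0.25 * log(1 - 2 * Q)

def window_pi(alignment: list, size: int = 100, step: int = 25):
    """Average pairwise p-distance (nucleotide diversity, Pi) in
    sliding windows across an alignment of equal-length sequences."""
    length = len(alignment[0])
    for start in range(0, length - size + 1, step):
        window = [s[start:start + size] for s in alignment]
        diffs = [sum(a != b for a, b in zip(s1, s2)) / size
                 for s1, s2 in combinations(window, 2)]
        yield start, sum(diffs) / len(diffs)
```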
Phylogenetic Analyses
To determine phylogenetic relationships within Apogonidae, the 13 concatenated mitochondrial PCGs from F. variegata and other Apogonidae species (Table 1) were aligned using MAFFT [18]. ModelFinder identified the optimal evolutionary model (GTR + F + R6) based on the Akaike Information Criterion, balancing fit and model complexity [19]. Maximum likelihood analysis was conducted in IQ-TREE with 1000 ultrafast bootstrap replicates [20]. Bayesian inference employed MrBayes with two independent MCMC runs of 50 million generations, sampling every 1000 generations until convergence [21]; the first 10% of sampled trees were discarded as burn-in before computing a consensus tree. Bootstrap values and posterior probabilities provided statistical support for evaluating topological robustness, and combining the maximum likelihood and Bayesian approaches enabled a robust phylogenomic assessment of the evolutionary relationships in Apogonidae.
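Once the concatenated alignment is in hand, the ML step reduces to a single IQ-TREE call. The commands below are a hedged sketch assuming an alignment file named concat_pcgs.fasta and IQ-TREE 1.x flag names (-m for the substitution model, -bb for ultrafast bootstrap replicates); the MrBayes block approximates the Bayesian settings reported above, and since MrBayes does not offer free-rate (R6) models, nst=6 rates=gamma (GTR+G) is used as a stand-in.

```python
import subprocess

# Maximum likelihood tree under the ModelFinder-selected model with
# 1000 ultrafast bootstrap replicates.
subprocess.run(
    ["iqtree", "-s", "concat_pcgs.fasta", "-m", "GTR+F+R6", "-bb", "1000"],
    check=True,
)

# A matching MrBayes command block (two runs, 5e7 generations, sampling
# every 1000 generations, 10% burn-in) would look roughly like this:
mrbayes_block = """
begin mrbayes;
  lset nst=6 rates=gamma;
  mcmcp nruns=2 ngen=50000000 samplefreq=1000;
  mcmc;
  sumt relburnin=yes burninfrac=0.10;
end;
"""
```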
Genomic Organization and Nucleotide Composition
The F. variegata mitogenome was characterized as a 16,558-base-pair circular molecule. Analysis of its nucleotide composition revealed 28.14% A, 25.64% C, 18.81% G, and 28.41% T, reflecting an AT bias (56.55%) consistent with that of other Apogonidae species. The mitogenome contains 13 protein-coding genes, 22 transfer RNAs, 2 ribosomal RNAs, and a control region with high AT content (Figure 2, Table 2). The shortest tRNAs were tRNA-Phe, tRNA-Cys, and tRNA-Ser at 69 bp, while the longest were tRNA-Leu, tRNA-Asn, and tRNA-Leu at 74 bp. The 896 bp control region lies between tRNA-Pro and tRNA-Phe. Our comparative analysis showed remarkable similarity with results from other Apogonidae, with variations of 10-221 bp seen primarily in control region-associated genes (Figure 3, Table 1). Such variations indicate potential divergence and evolution patterns in Apogonidae. In summary, characterization of the F. variegata mitogenome revealed typical features, including AT bias and conserved RNAs and genes, highlighting their functional significance. Variations among Apogonidae species point to a complex interplay between conservation and adaptation, and further investigation of these variations will provide deeper insights into mitogenomic diversity and evolution in Apogonidae.
All 13 mitochondrial protein-coding genes exhibited Ka/Ks ratios below 1 (0.02-0.12), indicating purifying selection (Figure 5, Table S2) [34]. The atp8, nad6, nad2, and nad4L genes showed relatively higher Ka/Ks ratios, suggesting weaker evolutionary constraints and the retention of more non-synonymous mutations. Cox1 displayed the lowest Ka/Ks ratio, reflecting stronger selection and functional constraints [35]. This is significant because mitochondrial DNA encodes essential respiratory components and governs inheritance, making the mitogenome susceptible to the accumulation of deleterious mutations [36]. The strong purifying selection acting on cox1 eliminates such mutations, making it well suited for application to Apogonidae phylogeny. Consequently, these genes likely contribute to phylogenetic resolution at the genus level within Apogonidae, providing insights into evolutionary relationships and divergence [37].
Nucleotide diversity (π) quantifies the average number of nucleotide differences per site between two randomly selected sequences in a gene or genomic region. It is a fundamental parameter for measuring the extent of genetic variation within a population: higher π values denote greater diversity in the nucleotide sequences of a given region. Assessing nucleotide diversity therefore allows researchers to evaluate the level of genetic variation present.
Nucleotide sequence alignments of the 13 PCGs from 12 Apogonidae mitogenomes were analyzed to identify DNA polymorphisms and to estimate the nucleotide diversity (π) of each gene (Figure 4, Table S2). Interestingly, the nad6 gene exhibited the highest nucleotide diversity, with a π value of 0.286, followed closely by nad2 (0.275), nad5 (0.243), and nad6 (0.238). On the other hand, the nad4 (0.170) and cox1 (0.177) genes displayed the lowest nucleotide diversity values in the dataset. To further characterize the divergence between the sequences, we examined the mean genetic distances for these genes (Table S3). Mirroring the nucleotide diversity, nad6, nad2, nad4, and nad5 showed higher genetic distances of 0.29, 0.28, 0.28, and 0.25, respectively, implying greater sequence divergence, whereas cox1, cox3, and atp8 exhibited lower distances of 0.18, 0.19, and 0.19, respectively, denoting relatively lower divergence.
These findings offer insights into genetic diversity and sequence divergence in the protein-coding genes of Apogonidae mitogenomes. The identification of genes with high nucleotide diversity and genetic distances, such as nad6, nad2, nad4, and nad5, suggests that these genes may be subject to selective pressures or evolutionary forces that contribute to their higher variability. Further exploration of the functional roles of these genes and their evolutionary implications in Apogonidae would improve our understanding of genetic diversity and adaptation in this family.
Phylogenetic Analyses
To ensure a robust phylogenetic analysis, the dataset was expanded to 16 mitogenomes: 12 from Apogonidae as the focal family, plus 3 from Gobiidae and 1 from Acanthuridae as outgroups. These reference mitogenomes were retrieved from the NCBI RefSeq database, with data updated as of 17 June 2023.
Phylogenetic relationships were investigated using both maximum likelihood (ML) and Bayesian inference (BI) analyses (Figure 6). F. variegata occupied a basal position within Apogonidae and showed affinity to P. trimaculatus, in accordance with a prior study based on three mitochondrial genes (nad1, nad2, and cox1) [5]. Previous mitogenomic analyses, however, lacked representation of the genus Fowleria [22,23,25,27,32]. Our study provides the first complete Fowleria mitogenome and an accompanying phylogenetic analysis, addressing this gap. The basal status of F. variegata provides insights into Apogonidae evolution and highlights the need for further analyses with complementary datasets.
The outgroup mitogenomes enabled a comprehensive phylogenetic evaluation within Apogonidae. However, augmenting the analysis with data from other genomic regions or with additional analytical approaches would reinforce these findings. Future efforts should focus on validating and expanding these observations to advance understanding of the evolutionary dynamics and relationships in Apogonidae.
Summary
This study presents the first complete mitochondrial genome of F. variegata obtained using short-read sequencing technology, a valuable addition to the limited genomic resources available for this genus. Comparative genomic analysis showed that F. variegata possesses the typical mitogenomic composition of 13 protein-coding, 22 tRNA, and 2 rRNA genes along with a control region, in accordance with other species in the family Apogonidae. Phylogenetic reconstruction using maximum likelihood and Bayesian inference provided robust support for the basal position of F. variegata, closely related to P. trimaculatus, within the family Apogonidae. These findings significantly enhance current understanding of the molecular evolution and phylogeny of this commercially and ecologically important perciform family.
Further analysis of selection pressures and Ka/Ks ratios in the protein-coding genes offered new insights into the evolutionary dynamics of Apogonidae mitogenomes. The genes were found to have undergone varying levels of purifying selection, rendering them promising markers for future population genetics studies of genetic differentiation, gene flow, and local adaptation. Targeted investigation of the genes under differential evolutionary constraints will help to elucidate population structure, demographic histories, and the impacts of environmental factors on genetic variation.
Moreover, the elucidation of phylogenetic relationships and the comparative mitogenomic analysis in this study establish critical groundwork for future research into the genetic diversity, adaptation, and evolutionary trajectories of F. variegata and related species. The integration of expanded molecular datasets, diverse analytical approaches, and a solid systematic framework will provide powerful tools for uncovering the intricacies that underlie diversification and adaptation in Apogonidae fishes. Findings from such endeavors will offer valuable insights into the drivers of speciation and biodiversity that are critical for conserving and managing these tropical marine fishes.
In summary, by generating novel mitogenomic resources and evolutionary perspectives, this study makes important headway in advancing research into the ecological genomics and molecular systematics of an understudied Perciformes group.
Figure 1. The sample image of F. variegata, taken by Weiyi He.
Figure 2. A circular map of the F. variegata mitochondrial genome is shown, with the outer circle denoting the heavy (H) strand and the inner circle denoting the light (L) strand. The inner gray circle illustrates the GC and AT content distribution, where darker regions indicate higher GC content and lighter regions indicate higher AT content.
Figure 5. Genetic diversity and evolutionary dynamics of mitogenomes in this study.
Table 1. Mitochondrial genome sequences of Apogonidae species from NCBI used in this study.
Table 2. Features annotated in the F. variegata mitochondrial genome.
Figure 3. Mitochondrial Genomes of Apogonidae Species Analyzed. Note: F. variegata highlighted in red | 2023-08-13T15:13:28.269Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "67dea8cddf564b98b24c3ad4746f6e4b2c594e57",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4425/14/8/1612/pdf?version=1691764048",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f47bb8c567da1555b87f14247dbf3e51d50ac690",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244445873 | pes2o/s2orc | v3-fos-license | TikTok’s Spiral of Antisemitism
The presence of antisemitism on social media platforms has grown markedly in recent years. Yet, while most of the scholarly attention has been focused on leading platforms like Twitter, Facebook, or Instagram, the extremist immigration to other platforms like TikTok has gone unnoticed. TikTok is the fastest-growing application today, attracting a huge audience of 1.2 billion active users, mostly children and teenagers. This report is based on two studies, conducted in 2020 and 2021, applying a systematic content analysis of TikTok videos, comments, and even usernames. Data were collected twice, in two four-month periods, February–May 2020 and February–May 2021, to allow for comparisons of changes and trends over time. Our findings highlight the alarming presence of extreme antisemitic messages in video clips, songs, comments, texts, pictures, and symbols presented in TikTok's content. TikTok's algorithm is even more disconcerting since it leads to a spiral of hate: it pushes users who unintentionally view disturbing content to view more. Considering TikTok's young demographic, these findings are more than alarming; TikTok even fails to apply its own Terms of Service, which do not allow content "deliberately designed to provoke or antagonize people, or are intended to harass, harm, hurt, scare, distress, embarrass or upset people or include threats of physical violence".
Introduction
Seventy-six years after the liberation of Auschwitz, antisemitism is still being expressed both publicly and violently. In 2020, the Anti-Defamation League recorded 2,024 antisemitic incidents throughout the United States, making it the third-highest year on record since the ADL began tracking antisemitism in 1979 (ADL 2020). More recently, during the uptick in violence in the Israeli-Palestinian conflict in May 2021, the ADL reported a 75% increase in antisemitism since the fighting began (ADL 2021a). Antisemitism, like other forms of hate speech and disinformation, thrives on social media, which connects users and creates a digital echo chamber. Whilst social media companies such as TikTok claim to roll out moderation tools to keep hate speech, misinformation, and incitements of violence off their platforms, users continue to publish such content despite it being prohibited. Social media algorithms, particularly TikTok's, are helping to spread hatred, shaping how billions of people read, watch, and think every day, and fueling polarization. The spread of hatred on TikTok, including antisemitism, is particularly shocking considering its young audience and the amount of influence the platform has on this vulnerable demographic.
TikTok: The Platform, Audiences and Content
In September 2016, the Chinese company ByteDance released a lip-synching video app called Douyin and later launched TikTok for markets outside of China. TikTok initially enabled its users to upload videos up to 60 s in length, but in early 2021 this limit was increased to three minutes. TikTok enables its users to be creative with an array of features and interactive formats such as lip-synching and sharing memes.
In November 2017, ByteDance acquired Musical.ly and merged it with TikTok in 2018. As of July 2021, it is estimated that TikTok has been downloaded over three billion times globally on the App Store and Google Play (Sensor Tower 2021). TikTok thus became only the fifth non-game app to reach the three-billion-install milestone; the only four other apps that have accumulated the same number of downloads are WhatsApp, Messenger, Facebook, and Instagram, all of which are owned by Facebook.
TikTok has become one of the most popular applications in the world, boasting a young audience despite its Terms of Service stating that users must be over the age of 13. TikTok use is rampant among children and teenagers seeking to expand their social networks, seek fame, or express themselves creatively (Montag et al. 2021). However, the application has a dark side, which puts its young users at risk. The app has been shown to expose users to a range of extremist content, racist postings, and calls for attacking minorities, ethnic groups, people of color, Muslims, and Jews, as well as postings sharing neo-Nazi propaganda. TikTok's algorithm, which can increase video exposure, has made the platform attractive to many extremist, racist, and radical groups, including neo-Nazi and antisemitic individuals and groups. This study examined the rise in antisemitic postings on TikTok using a systematic content analysis of the videos, comments, texts, and usernames.
It is estimated that TikTok currently has 37.3 million Gen Z users and that by 2023 it will have more Gen Z users than Instagram (EMarketer 2021). TikTok's popularity amongst Gen Z can be explained by several factors. Firstly, TikTok enables anyone to produce content due to the simplicity of using the app. TikTok's mission, as declared by the company, is "to capture and present the world's creativity, knowledge, and precious life moments, directly from the mobile phone. TikTok enables everyone to be a creator and encourages users to share their passion and creative expression through their videos." In addition, by using short video formats, unlike other social media sites such as YouTube, TikTok can attract teenagers and young adults with shorter attention spans. This reflects the fact that the app's designers targeted youngsters as their preferred audience from the very beginning, offering a wide range of special effects and editing options. Another aspect of TikTok's appeal is that it is easier to go viral on TikTok than on other social networking sites due to its algorithm; as Katie Elson Anderson argued, "a TikTok video from a user with absolutely no followers can quickly gain an audience as it appears in other user's feeds" (Anderson 2020).
Since TikTok's inception in 2016, the platform has progressed from viral dances and lip-synching to dealing with political issues from all sides. The best-performing videos on TikTok are under 15 s, which is challenging for fact-checkers, especially as the platform does not allow users to share URLs. This has given momentum to extremists, terrorist groups, and conspiracy theorists who use the app. As Jia Tolentino argued, "TikTok is a social network that has nothing to do with one's social network . . . in essence, the platform is an enormous meme factory, compressing the world into pellets of virality and dispensing those pellets until you get full or fall asleep" (2019). TikTok has also evolved into a meme-sharing platform due to its focus on viral culture and a young target audience.
TikTok's Algorithm
TikTok is unlike other social media platforms in that its newsfeed, known as the "For You" page, uses an algorithm to recommend videos to users in addition to showing content from followed accounts. According to TikTok's listing in the iOS App Store, it is a "personalized video feed based on what you watch, like, and share" (TikTok 2020a). The "For You" page is the first page that opens when users launch the app, with videos playing on auto-play, making it hard for users not to watch them. Users can swipe down and watch unlimited videos selected by the app's algorithm, which surfaces posts based on the content users have been engaging with.
The TikTok algorithm may seem complex and mysterious, but TikTok has revealed how it works (TikTok 2020b). According to TikTok, the algorithm recommends content by ranking videos based on a combination of factors including: (1) user interactions, such as the videos the user likes or shares, the accounts they follow, the comments they post, and the content they create; (2) video information, including details like captions, sounds, and hashtags; and (3) device and account settings, including the user's language preference, country setting, and device type. Each of these factors is individually weighted by TikTok's "For You" recommendation system, which means each "For You" page is unique to a user and their level of interest. TikTok thereby provides users with a continuous stream of video content, fueling user interest and entertainment, which can lead to the "anaesthetic effect" (Fang et al. 2019) whereby users consume content for long periods, fed by curiosity, without being fully aware they are doing so. TikTok's algorithm is therefore a disconcerting feature, as it can in effect push users who unintentionally view disturbing content to view more. This characterizes TikTok's dangerous spiral of hate and violence.
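To make this ranking logic concrete, here is a deliberately simplified toy model of a "For You"-style recommender that scores candidate videos by a weighted combination of the three factor groups TikTok lists. All weights, field names, and the scoring function are invented for illustration; TikTok's actual system is proprietary and far more complex.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    hashtags: set = field(default_factory=set)
    sound_id: str = ""
    language: str = "en"

@dataclass
class UserProfile:
    hashtag_engagement: dict   # hashtag -> past interaction count
    followed_sounds: set
    language: str

def score(video: Video, user: UserProfile) -> float:
    # (1) User interactions: overlap with hashtags the user engaged with.
    interactions = sum(user.hashtag_engagement.get(h, 0) for h in video.hashtags)
    # (2) Video information: boost for a sound the user already follows.
    video_info = 1.0 if video.sound_id in user.followed_sounds else 0.0
    # (3) Device/account settings: small boost for a language match.
    settings = 1.0 if video.language == user.language else 0.0
    # Illustrative per-factor weights: each factor is weighted individually.
    return 3.0 * interactions + 2.0 * video_info + 0.5 * settings

def for_you_feed(candidates, user, k=10):
    # Whatever scores highest is shown; engagement with it then feeds back
    # into hashtag_engagement, reinforcing the same interests (the spiral).
    return sorted(candidates, key=lambda v: score(v, user), reverse=True)[:k]
```

Run on a user whose engagement history already leans toward hateful hashtags, a loop like this keeps surfacing more of the same, which is the feedback dynamic the spiral of hate describes.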
Literature Review
Scholarship on TikTok is still in its early days; however, most research focuses on the role of TikTok's algorithm. Research by Klug et al. (2021) found that TikTok users are highly aware of TikTok's algorithm and have developed assumptions about how it works to make their videos trend. A study by Omar and Dequan (2020) found that self-expression explains active involvement on TikTok, with users wanting to express themselves and interact with others. In light of this, a study by Cervi (2021) explored the relationship between TikTok and Generation Z and found that TikTok's algorithm gives every user the same chance to go viral, which helps explain why TikTok is popular amongst minors. Further research by Schellewald (2021) studied the popularity of TikTok's short video content, with many videos portraying everyday situations and stereotypes, and found that the popularity of videos is related to relatability. Zulli and Zulli (2020) looked at how TikTok's digital structure influences user behavior and found that TikTok encourages imitation and replication by using challenges combined with video editing features that can be easily accessed by users wanting to imitate a video. They also looked at the role of celebrities taking part in TikTok challenges, which consolidates their significance on TikTok and in mainstream culture. Other studies have looked at TikTok's political potential: a study by Vijay and Gekker (2021) looked at how TikTok shapes expression, in particular political expression. Likewise, Henneman (2020) looked at the uses and concerns of TikTok's journalistic storytelling ability, and Vázquez-Herrero et al. (2020) studied how TikTok is changing the consumption of news. Across these studies, there is consistent evidence that TikTok's algorithm, combined with its short video content, explains TikTok's widespread popularity by encouraging users not only to watch these videos but also to be inspired to create their own, offering them the ability to reuse sounds or features that have already been used.
Old and New Antisemitism
Antisemitism has been referred to as history's oldest hatred, and it is extremely adaptable. Antisemitism is a set of negative attitudes, ideologies, and practices directed at Jews either individually or collectively based on hostile erroneous beliefs and assumptions that are perpetuated through age-old conspiracy theories and their modern variants.
In 1873, Wilhelm Marr, a German political agitator, coined the term "anti-Semitism", believing that Jews were conspiring to run the state and that they should be excluded from obtaining citizenship. Following the Holocaust, antisemitism became less accepted; whilst it did not vanish, the events of WWII drastically inhibited its expression. Theodor Adorno, a German philosopher, recognized the basic grounds of antisemitism in his definition from 1950, stating that "This ideology [of antisemitism] consists . . . of stereotyped negative opinions describing the Jews as threatening, immoral, and categorically different from non-Jews, and of hostile attitudes urging various forms of restriction, exclusion, and suppression as a means of solving 'the Jewish problem.'" (Adorno et al. 1950, p. 71).
Since Adorno's definition of antisemitism in 1950, the key ideas he outlined have persisted, such as the fear of perceived Jewish power. Some forms of antisemitism are directed at Israel, leveling antisemitic charges against it, including blood libels and accusations of using evil power to control the world. Other forms are less direct, with criticism of Israel being turned against Jews and Jewish institutions globally, which are then targeted with attacks. Thus, when historians refer to the rise of "new anti-Semitism" in the 21st century, it is evident that its core beliefs are formed of traditional notions of anti-Semitism. New anti-Semitism gained traction in the 1970s and 1980s, with scholars proposing that traditional anti-Semitism, typically associated with a biological race concept and prejudice against Jews, had been replaced by a new form which expressed itself as an "animus against Israel" and "insensitivity and indifference to Jewish concerns" (Romeyn 2020).
Oboler (2016) identified four categories of online anti-Semitism: traditional antisemitism, Holocaust denial, promoting violence against Jews, and new antisemitism. Traditional antisemitism mirrors the rhetoric of the past, with statements on social media suggesting that Jews control the world. Holocaust denial and distortion have intensified; while they have often been limited to fringe groups, polarizing echo chambers online have enabled these ideas to spread. Holocaust denial on social media involves denying that the Holocaust ever occurred or claiming that the number of Jews killed is exaggerated, and it can also mock the victims of the Holocaust. As Gerstenfeld (2007) argued, Holocaust distortion takes many forms and "the number of mutations of such distortions is also expanding". The promotion of violence against Jews is the most direct form of antisemitic hate expressed online, according to Oboler (2016). Violence can also be expressed as being against "Israelis" or "Zionists". On social media today, antisemitic slogans expressing violent anti-Semitism have emerged, such as "gas the Jews", "death to Israel" and "race war now". New anti-Semitism is a manifestation of online antisemitism which targets the state of Israel rather than the Jews, arguing that both Zionism and the State of Israel are evil. Sacks (2016) pointed out that new antisemitism differs from old antisemitism in that "once Jews were hated because of their religion. Then they were hated because of their race. Now they are hated because of their nation state". Furthermore, as Oboler (2016) pointed out, new antisemitism infers that anyone who to a greater or lesser degree supports or stands for the rights of Zionism and the State of Israel is evil.
New antisemitism, therefore, consists of the synthesis of antisemitism and anti-Zionism. This insidious form of antisemitism disguises itself as contempt for Israel, with Israel typically singled out for criticism by its enemies. In 2005, the International Holocaust Remembrance Alliance (IHRA) published a working definition of antisemitism which has been adopted by the U.S. State Department since 2010, amongst other government bodies worldwide. The definition states that "Antisemitism is a certain perception of Jews, which may be expressed as hatred toward Jews. Rhetorical and physical manifestations of antisemitism are directed toward Jewish or non-Jewish individuals and/or their property, toward Jewish community institutions and religious facilities" (IHRA 2005). Accompanying the IHRA definition are eleven examples that "may serve as illustrations" (Ibid.), ranging from Holocaust denial to holding Jews collectively responsible for the actions of the state of Israel and historical tropes.
Antisemitism on TikTok
Extremists have found a home on TikTok, exploiting the platform's young audience and lax security to prey on the vulnerable. The hate speech found on TikTok ranges from neo-Nazi to Boogaloo material, with a range of antisemitic and racist content. Hate speech on TikTok had gone virtually unnoticed until Motherboard reported that it had found examples of "blatant, violent white supremacy and Nazism", including direct calls to kill Jews and black people (Cox 2018). Some postings read verbatim "kill all n*****", "all Jews must die", and "killn******" (the words are uncensored on the app). One video, for example, contained a succession of young users performing Nazi salutes. Another TikTok video included the note, "I have a solution; a final solution," referring to the Holocaust. Some postings include 1488, a reference to two 14-word slogans, including "we must secure the existence of our people and a future for white children", which originated with the American white supremacist David Eden Lane, with the 88 standing for HH, or Heil Hitler.
Since then, an increasing number of antisemitic TikTok trends have come to light. In 2020, a new trend known as the "#holocaustchallenge" emerged, wherein people pretended to be Holocaust victims: users shared clips of themselves with fake bruises, wearing items of clothing that Jews were ordered to wear by the Nazis. In 2021, another trend emerged wherein users used the "Expressify" filter to exaggerate their facial features whilst singing "If I Were A Rich Man" from the musical Fiddler on the Roof. The resulting distorted faces resemble the happy merchant, an antisemitic meme that has become popular among the alt-right, while users sang about money, wealth, and greed, fitting traditional antisemitic stereotypes.
An increasing number of videos featuring antisemitic agitation or Holocaust denial are being spread on TikTok. According to Wheatstone and O'Connor (2020), some posts "feature sickening antisemitic taunts-with cartoons depicting Jewish men with large noses and joking about the Holocaust receiving hundreds of likes and comments." In one video, racist sketches of characters labeled "A Sneaky Jew" and "Mega Jew" are followed by antisemitic statements that Jewish people control the media, the financial sector, and the government.
Amidst the conflict between Israel and Hamas in May 2021, there was an increase in antisemitism, particularly on TikTok, as Jewish users found themselves flooded with messages of hate. These users included Lily Ebert, one of the oldest Jewish creators on TikTok, who is also an Auschwitz survivor. After posting a video wishing her followers a restful Shabbat, she was flooded with hateful messages, many blaming her for the violence; the messages included "Happy Holocaust", "Peace be upon Hitler", and "Ask her if she thinks the treatment of Palestinians reminds her [of] the treatment she got in the camps" (Kampeas 2021).
A study published by the Center for Countering Digital Hate (CCDH 2021) in 2021 found that social media platforms, including TikTok, failed to act on most antisemitic posts. The researchers collected 78 antisemitic comments sent directly to Jewish users and found that the platforms failed to act against 76% of this antisemitic abuse. When TikTok did act, it more frequently removed individual antisemitic comments instead of banning users, removing 19.2% of posts and banning only 5% of the accounts sending direct antisemitic abuse (CCDH 2021). These findings demonstrate the need for an empirical, systematic, and objective study of TikTok's use for antisemitic propaganda, incitement, and hate.
Research Questions
The present study set out to explore the various presentations of antisemitism on TikTok. More specifically, it addressed four research questions:
• RQ1: How is TikTok used to propagate antisemitism?
• RQ2: What are the different formats which are used for spreading hatred and antisemitism on TikTok, such as posts, comments, hashtags, and usernames?
• RQ3: What are the various characteristics of antisemitism used on TikTok, focusing on whether the material shared on TikTok can be defined as "new antisemitism" or "old antisemitism"?
• RQ4: Are there changes over time, and new trends in antisemitism on TikTok?
Data Collection
Data were collected twice, in two four-month periods, February-May 2020 and February-May 2021, to allow for comparisons of changes and trends over time. To scan TikTok, we applied a systematic content analysis, which was conducted in Israel. The first stage involved searching for posts (video clips posted), comments (texts written by viewers following a video clip), hashtags, and usernames relating to Judaism and antisemitic beliefs for around 20 to 60 min every day. All posts, comments, and usernames relating to the IHRA definition of antisemitism were noted, including screenshots and URLs. The keywords used for data collection were in English and included: Jew, Jewish, Jews, antisemitic, antisemitism, holocaust, 109 countries (the number 109 is white supremacist numeric shorthand for the antisemitic claim that Jews have been expelled from 109 different countries), dancing Israeli, Rothschild, and 6 million. These keywords are a combination of generic terms and terms associated with antisemitic conspiracy theories. We selected these keywords as many relate to conspiracy theories which are popular amongst young people, who are TikTok's main audience.
These terms enabled us to find users who posted antisemitic hate content associated with these terms; however, memes and videos that did not contain captions were unsearchable. It should also be noted that TikTok contains limitations on what is searchable, for example, a search for "Adolf Hitler" will result in "no results found-this phrase may be associated with hateful behavior"; whilst users can use these hashtags when posting content, they cannot be searched for.
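Although our scan was performed manually, the first-pass keyword screening lends itself to automation. The sketch below flags texts (posts, comments, or usernames) containing any of the search terms listed above; the matching logic and the handling of "juice"-style obfuscations (discussed under RQ4 below) are illustrative additions, and flagged items would still require manual review against the IHRA definition.

```python
import re

KEYWORDS = [
    "jew", "jewish", "jews", "antisemitic", "antisemitism", "holocaust",
    "109 countries", "dancing israeli", "rothschild", "6 million",
]

# Deliberate misspellings observed on the platform (e.g., "juice" for
# "Jews"); in practice this map must be extended as users adapt.
OBFUSCATIONS = {"juice": "jews"}

def normalize(text: str) -> str:
    text = text.lower()
    for obf, plain in OBFUSCATIONS.items():
        text = re.sub(rf"\b{obf}\b", plain, text)
    return text

def flag(text: str):
    """Return the keywords matched in a post, comment, or username."""
    t = normalize(text)
    return [kw for kw in KEYWORDS if kw in t]

print(flag("where is the juice"))  # -> ['jew', 'jews']
```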
The IHRA Working Definition of Antisemitism was selected for our study as it is intended to guide and educate. The Working Definition is in its essence a non-legally binding document, which makes it a useful tool in identifying antisemitism. Moreover, antisemitism today is primarily rooted in three different groups: the extreme right, the extreme left, and Islamic extremism. The IHRA Working Definition includes targeting the state of Israel which is important as antisemitism today is increasingly disguised as political stances against Israel and Zionism known as "new antisemitism". A recent study by the Kantor Center for the Study of Contemporary European Jewry (Jewish News Syndicate 2021) revealed that, over the past 5 years, over 450 leading organizations, including 28 countries, have adopted or endorsed the Working Definition of Antisemitism. As of June 2020, Switzerland adopted the Working Definition of Antisemitism, becoming the 36th country to do so.
The antisemitic postings on TikTok yielded numerous comments: we scanned 56,916 comments responding to the antisemitic postings. In addition to searching for posts and comments, we also searched for usernames containing the same keywords. The methodology was developed, tested, and applied during our 2020 study (Weimann and Masri 2020), and the findings of the present study, a year later, allow for monitoring changes, especially after TikTok's declarations in 2020 that it would remove all antisemitic postings (Levine 2020).
9. Findings
9.1. RQ1: How Is TikTok Used to Propagate Antisemitism?
Despite multiple pledges by TikTok over the years to do more to tackle hateful content, it still fails. A recent report by the Center for Countering Digital Hate (CCDH) found that TikTok removed only 18.5% of reported antisemitic posts (CCDH 2021). Our scan of TikTok postings in 2021 revealed a total of 61 antisemitic postings (a rise of 41% when compared with our 2020 scan). More alarming was the growing frequency of antisemitic comments on TikTok, rising from 41 in 2020 to 415 in 2021 (an increase of 912%). We also found a sharp increase in usernames with antisemitic titles (e.g., "@holocaustwasgood" or "@eviljews"), rising from only four in 2020 to 59 in 2021 (an increase of 1375%).
By failing to remove these posts, they remain viewable to TikTok users, who can like, comment, and share the post, thus gaining it more viewers. TikTok's algorithm also means that antisemitic content is likely to be shared with users who the algorithm deems as being interested in similar posts. The algorithm learns users' hidden interests and emotions, which can drive users deep into a rabbit hole of dangerous content. This is particularly dangerous due to TikTok's young and captive audience, who are at risk of being radicalized and mobilized.
Let us present an illustrative example of antisemitism in TikTok's content. On Holocaust Memorial Day, 27 January 2021, TikTok populated the app with "educational videos about the Holocaust, the Jewish community and antisemitism today" (Kanter 2021). TikTok boasted that "when a UK TikTok user first opens the app, they will find at the top of their For You feed an educational video featuring Robert Rinder, as well as our top creators, to encourage our community to access the new educational resources and learn more about Holocaust Memorial Day. The new resources include Lily Ebert BEM sharing her story of surviving Auschwitz-Birkenau." (Ibid). TikTok's attempt at promoting educational videos about the Holocaust was met with a barrage of antisemitic comments. Videos from Lily Ebert, a Holocaust and COVID-19 survivor, received comments that included "it never happened" and "Holocaust never happened", as well as "yo I think we were in the same camp bro" and "burn." Other videos of Holocaust survivors received the following comments: "can not see 6 million," "holocaust is like 9/11 it was made to happen," "if the gas chambers at Auschwitz were real, how come the holes used to inject the gas were installed after the war," "most holocaust survivors are hoaxers", and "holocaust is the biggest lie in century." These videos also showed that many people did not know that six million Jewish people were killed in the Holocaust, with comments such as "is this true" and "I only know 6 mill died cuz of TikTok." Whilst these videos demonstrate that TikTok has attempted to provide educational content about the Holocaust and antisemitism today, its attempts were undermined by its failure to monitor the comments on the very posts it promoted.
RQ2: What Are the Different Formats Which Are Used for Spreading Hatred and Antisemitism on TikTok Such as Posts, Comments, Hashtags, and Usernames
TikTok was originally designed as a lip-synching app, similar to Vine, wherein users can create their own videos where they lip-sync along to popular songs and audio clips. Today, TikTok has evolved into a meme factory, with most forms of hatred taking this format. Many memes include antisemitic tropes including the "Happy Merchant", a meme illustrating a drawing of a Jewish man with a greatly stereotyped face who is greedily rubbing his hands together.
Another format used for spreading hatred on TikTok is via user handles and display names. TikTok users identify themselves in two ways, through a user handle (@johnsmith) and a display name that appears on their profile such as "John Smith"; 21 of the names found were categorized under the IHRA working definition of "Making mendacious, dehumanizing, demonizing, or stereotypical allegations about Jews as such or the power of Jews as collective-such as, especially but not exclusively, the myth about a world Jewish conspiracy or of Jews controlling the media, economy, government or other societal institutions". These names included "@antisemeticandproud", "@violentantisemite", "@eviljews", "@thejewsrunmedia", and "@jewdestroyer88". Several of these names included the far-right numerical code of "88" which is the white supremacist numerical code for "Heil Hitler". Another 18 names were categorized under "Denying the fact, scope, mechanisms (e.g., gas chambers) or intentionality of the genocide of the Jewish people at the hands of National Socialist Germany and its supporters and accomplices during World War II (the Holocaust)". These names included "@holocaust.was.a.pr.stunt", "@holocaustfake", and "@holocaust.is.fake". Additionally, we found 18 names that were categorized under "accusing Jews as a people of being responsible for real or imagined wrongdoing committed by a single Jewish person or group, or even for acts committed by non-Jews". These names included "@jews_did_9_11", "@jewscaused911", and "@jewscausedvietnam". A further six names were characterized as "Calling for, aiding, or justifying the killing or harming of Jews in the name of a radical ideology or an extremist view of religion". These included, "@jewgasser88", "@jewdestroyer1939" and "@holocaustwasgood". One account named "@holocaust_hype_house", a reference to houses where Gen-Z TikTok influencers live together and create TikTok videos, had a bio reading "let's gas the jews".
Another more recent format used for spreading hatred is TikTok challenges. TikTok challenges start with a viral video, and users quickly start putting their spin on it. While some of these challenges are innocent, such as the stair-step challenge, other challenges have been antisemitic in nature. In 2020, the "#holocaustchallenge" emerged where people pretended to be Holocaust victims, sharing clips of themselves with fake bruises and wearing items of clothing that Jews were ordered to wear by the Nazis. In 2021, another trend encouraged users to use the "Expressify" filter to exaggerate their facial features whilst singing "If I Were A Rich Man" from the musical Fiddler on the Roof, resembling the happy merchant.
The comment section on public posts is one of the main channels for spreading hatred on TikTok. TikTok's comment section was largely unfiltered until March 2021, when posting users were given the ability to filter their comments and commenting users began receiving a pop-up prompting them to reconsider a comment that may be inappropriate or unkind. Within 4 months in 2021, we found 415 comments containing one or more of the antisemitic attributes of the IHRA working definition. The majority of these comments fit the IHRA example of "Making mendacious, dehumanizing, demonizing, or stereotypical allegations about Jews as such or the power of Jews as collective." These comments predominantly related to conspiracy theories such as the notion that Jews are conspiring to control the world, the media, the banking industry, and government. Comments included "and yet they still control the banks and media, jews must be the master race", "agree 100% they [jews] control the media and all worldwide governments they are no good parasites", "the jews world bankers fabricated lies and wanted you to truly believe in it", "fake Holocaust that Jews can milk world out of pity and money", and "Jewish people are the most over-represented group in all circles of power. But keep pretending you're horribly prosecuted [sic] lmfao." Another 125 comments related to "calling for, aiding or justifying the killing or harming of Jews in the name of a radical ideology or an extremist view of religion." These comments typically referred to the Holocaust, such as "bring back the Holocaust", "the pogroms were justified", "we need another Holocaust", "we aren't sorry that we persecuted Jews in fact we'll do it again", and "Jews are kind of people who should not exist". Fifty-four comments related to "Holding Jews collectively responsible for actions of the state of Israel." Comments included "aw poor wittle Jew is such a victim, its not like you literally have a ethnostate," "I remember the Palestinians in Gaza who were snipered cold blooded by Jews for being Palestinian teens", and "brother until you stop Jewish atrocities people will never accept and sympathise with the holocaust. It is hypocritical." Other comments related to "Denying the fact, scope, mechanisms (e.g., gas chambers) or intentionality of the genocide of the Jewish people at the hands of National Socialist Germany and its supporters and accomplices during World War II (the Holocaust)." These comments asserted that the Holocaust was invented, such as "can not see 6 million", "holocaust is like 9/11 it was made to happen", and "holocaust is the biggest lie in century". Other comments related to "Accusing Jews as a people of being responsible for real or imagined wrongdoing committed by a single Jewish person or group, or even for acts committed by non-Jews." One comment argued that "Jews spread the virus", referring to the COVID-19 pandemic. Other comments suggested that Jews were responsible for the 9/11 attacks, such as "I am educated a lot and the only terrorists I mentioned are Jews the biggest terrorists in the world as well as the biggest war criminals in the world . . . they also blew up the world trade centers, there the biggest terrorists in the world they kill and rape women and little girls in Palestine." Further comments fit the example of "accusing the Jews as a people, or Israel as a state, of inventing or exaggerating the Holocaust."
These comments included "Holocaust is like 9/11 it was made to happen" and "fun fact Holocaust never happened, they rebuilt the buildings, its fake with crisis actors." Other comments fit the example of "Denying the Jewish people their right to self-determination, e.g., by claiming that the existence of a State of Israel is a racist endeavour." These included "there was never a Jewish state and will never be one" and "Jews don't deserve a homeland".
9.3. RQ3: What Are the Various Characteristics of Antisemitism Used on TikTok, Focusing on Whether the Material Shared on TikTok Can Be Defined as "New Antisemitism" or "Old Antisemitism"?
A substantial amount of antisemitism on TikTok was disguised as criticism of Israel or as Holocaust denial. Holocaust denial and distortion are a form of antisemitism, claiming that the Holocaust was invented or exaggerated by the Jews to advance their interests. As Jack Fischel stated, "in continuing Hitler's racial war against the Jews, through its attack on Israel's legitimacy, anti-Semites in both the United States and Europe realized that to be successful they must overcome the link the public made between the Holocaust and the subsequent creation of Israel" (Fischel 1995, p. 207). Over recent years, and especially during the COVID-19 pandemic, which led to increased time spent at home on the internet, conspiracy theories including Holocaust denial have gained more traction. As a report from Hope Not Hate pointed out, "COVID-19 related conspiracy theories have provided a more worrying new route towards antisemitic politics . . . Holocaust denial and admiration for Hitler are in fact a progression through different conspiracy theories, which may contain antisemitic undertones but don't necessarily require them" (Hope Not Hate 2021). Holocaust denial usually takes the form of claims that not all Jews were killed, that the numbers are wrong or inflated, that there are other equivalent tragedies, or that Israel has perpetrated its own Holocaust. Holocaust denial has become a fundamental aspect of new antisemitism, often used by the right and the left wing as well as by anti-Israel critics to delegitimize the Jewish people and Israel.
Whilst not all criticism of Israel is antisemitic, comparisons to Nazi Germany such as claiming that what Israel is doing to the Palestinians now is the same as the systematic extermination of Jews by the Nazis in WWII are antisemitic and popular on TikTok. This type of demonization and unfair criticism against Israel fits the category of "new antisemitism" by singling out Israel. As Rabbi Sacks stated "Today, Jews are attacked because of the existence of their nation state, Israel. Denying Israel's right to exist is the new antisemitism. And just as antisemitism has mutated, so has its legitimization" (Sacks 2017).
The antisemitic content on TikTok combines both "old" and "new" attributes of antisemitism. Thus, in addition to the "classical" attributes of Holocaust denial and stereotypical allegations about the power of Jews as a collective, we also found the newer attributes of blaming Israel for atrocities and comparing contemporary Israeli policy to that of the Nazis. These messages may be all the more powerful for young, gullible, naïve, and less-informed recipients, as most TikTok audiences are.
9.4. RQ4: Are There Changes over Time, and New Trends in Antisemitism on TikTok?
In addition to the rising numbers of antisemitic postings, comments, and even usernames, we also found a rise in user awareness of antisemitism on the app. Posts that were either antisemitic or that received antisemitic comments also drew comments such as "Why am I on antisemitic TikTok", "holy s*** that is based", "nah just antisemitic", "how is this post not hateful towards Jews" and "please don't joke about the murder of 6 million innocent jews show some respect". Whilst only a small proportion of comments reflect user awareness of antisemitism on the app, this nonetheless points to the prominent role that antisemitism plays on TikTok.
Another noticeable trend on the app was users trying to avoid detection by purposefully misspelling words. Users replace the word "Jews" with "juice" to avoid having their accounts deleted, as in "f*** the juice, I'm dyslexic" and "juice have been expelled from 109 countries". One account posted a clip from the UN speech on Jewish Refugee Day in 2019 in which Hillel Neuer asked Algeria, Egypt, Iraq, and other countries "Where are your Jews?". This post received comments such as "where is the juice", "they're dead lol", "where are your juice? I put the juice in the oven", and "karma that's what they do to Palestinians, liars deceiver juice". This demonstrates that users are aware of algorithms and hate speech policies and are purposefully adapting their spelling of keywords to avoid detection. These terms were only noticed in our study because they appeared as comments on antisemitic postings; otherwise, they would not have surfaced in keyword searches. This trend highlights the need for more stringent measures and content moderation on TikTok.
Conclusions
Since its inception in 2016, TikTok has repeatedly claimed that it will not tolerate antisemitism on its platform. Elizabeth Kanter, director of TikTok's government relations in Israel, said: "Antisemitism is an abomination, and therefore anti-Semitic content that expresses hatred has no place on our platform. We have zero tolerance for organized hate groups and those associated with them. Our community guidelines reflect our values and when they are violated, we take action, including removing content and closing accounts" (Knesset News 2020). However, TikTok remains a hotbed of antisemitism, failing to delete reported and non-reported posts and comments. TikTok's algorithm is a disconcerting feature which sets TikTok apart from other social media applications, as users do not know what type of content will appear on their homepage, and the algorithm can push users who unintentionally view disturbing content to view more.
Similar concerns exist on other social media platforms. In October 2020, Facebook and Twitter announced that they would remove tweets and posts that denied or diminished the Holocaust, insisting that antisemitism had no place on these platforms. However, they have failed to go far enough; in June 2021 The Campaign Against Antisemitism stated that "We do not have confidence in Twitter's capacity to address the rampant antisemitism on its platform" (Campaign Against Antisemitism 2021) citing Twitter's slow response in removing antisemitic posts. Likewise, in June 2021 the ADL criticized Facebook for their "inaction" in removing antisemitic postings, arguing that they were not upholding their community standards (ADL 2021b).
TikTok has unique features that make it more troublesome than other social media platforms. First, unlike other platforms, TikTok's users are predominantly young children and teenagers, who are more naïve and gullible when it comes to malicious content. Second, TikTok is the youngest platform and thus lags severely behind its rivals, which have had more time to grapple with how to protect their users from disturbing and harmful content. Yet TikTok should have learned from these other platforms' experiences and should apply its own Terms of Service, which do not allow postings that are deliberately designed to provoke or antagonize people, or are intended to harass, harm, hurt, scare, distress, embarrass, or upset people, or include threats of physical violence. | 2021-11-21T16:19:58.679Z | 2021-11-18T00:00:00.000 | {
"year": 2021,
"sha1": "f7b2616b9eee3ae4a900092f26bbc17c78c28708",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-5172/2/4/41/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6e6beb76f383af42911da8435e4968b57990c1ef",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": []
} |
2705493 | pes2o/s2orc | v3-fos-license | AnIta: a powerful morphological analyser for Italian
In this paper we present AnIta, a powerful morphological analyser for Italian implemented within the framework of finite-state-automata models. It is provided with a large lexicon containing more than 110,000 lemmas, enabling it to cover relevant portions of Italian texts. We describe our design choices for the management of inflectional phenomena as well as some interesting new features to explicitly handle derivational and compositional processes in Italian, namely the wordform segmentation structure and the Derivation Graph. Two different evaluation experiments, testing coverage (Recall) and Precision, are described in detail, comparing AnIta's performance with some other freely available tools for handling Italian morphology. The results show that the AnIta morphological analyser obtains the best performance among the tested systems, with Recall = 97.21% and Precision = 98.71%. This tool was a fundamental building block for designing a performant PoS-tagger and lemmatiser for the Italian language that participated in two EVALITA evaluation campaigns, ranking in both cases among the best-performing systems.
Introduction
Stemming and lemmatisation are fundamental low-level Natural Language Processing (NLP) tasks, in particular for morphologically complex languages involving rich inflectional and derivational phenomena. These tasks are usually based on powerful morphological analysers able to handle the complex information and processes involved in successful wordform analysis. After the seminal work of Koskenniemi (1983) (see also the recent books (Beesley and Karttunen, 2003; Roark and Sproat, 2006) for general overviews) introducing the two-level approach to computational morphology, a lot of successful implementations of morphological analysers for Western European languages have been produced (Beesley and Karttunen, 2003; Cöltekin, 2010; Pianta et al., 2008; Schmid et al., 2004; Tzoukermann and Libermann, 1990). Although this model has been heavily challenged by some languages (especially Semitic languages (Gridach and Chenfour, 2010; Kiraz, 2004)), it is still the reference model for building this kind of computational resource, at least for Western European languages. These models usually implement two different operations: a) analysis, which extracts all the information connected with a wordform, associating it to a standardised notation "lemma+features": for example, the form libri ('books') becomes "libro+Noun+Masc+Plur", and the form amo (which is ambiguous in Italian and may correspond to 'I love' or to 'hook') is associated with two different lemmas, "amare+Verb+Ind+Pres+1p+Sing" and "amo+Noun+Masc+Sing"; and b) generation, the opposite operation, which associates to a structure "lemma+features" the corresponding wordform: for example, the structure "dormire+Verb+Ind+Pres+1p+Sing" is associated with the wordform dormo ('I sleep'). In the late nineties, some corpus-based/machine-learning methods were introduced to automatically induce the information for building a morphological analyser from corpus data (see the review papers (Creutz and Lagus, 2007; Hammarström and Borin, 2011)). These methods seem able to induce the lexicon from data, avoiding the complex work of manually writing it, despite some reduction in performance.
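As a rough illustration of the analysis and generation operations described above, the following minimal Python sketch mimics them with a hypothetical three-entry lookup table; it only illustrates the input/output behaviour, not the finite-state machinery that real analysers (including AnIta) use.

# A hypothetical three-entry lookup table; real analysers compile a large
# lexicon into finite-state transducers instead of listing forms.
TOY_LEXICON = {
    "libri": ["libro+Noun+Masc+Plur"],
    "amo":   ["amare+Verb+Ind+Pres+1p+Sing", "amo+Noun+Masc+Sing"],
    "dormo": ["dormire+Verb+Ind+Pres+1p+Sing"],
}

def analyse(wordform):
    # analysis: surface form -> all "lemma+features" readings
    return TOY_LEXICON.get(wordform, [])

def generate(analysis):
    # generation: "lemma+features" -> all surface forms realising it
    return [form for form, readings in TOY_LEXICON.items()
            if analysis in readings]

print(analyse("amo"))     # two readings: the verb "amare" and the noun "amo"
print(generate("dormire+Verb+Ind+Pres+1p+Sing"))  # ['dormo']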
Italian Morphology
Italian is one of the ten most widely spoken languages in the world. It is a highly inflected Romance language, and simple words can be modified by, essentially, three morphological processes: inflection, derivation and compounding. This section gives a short overview of Italian morphological phenomena.
Inflection
Words belonging to inflected classes (adjectives, nouns, determiners and verbs) exhibit a rich set of inflection phenomena. There are essentially two basic types of inflection: noun inflection and verb inflection. Noun inflection, also shared with adjectives and determiners, has different suffixes (or inflection endings) expressing both gender and number at the same time, while verb inflection presents a rich variety of inflection endings for tense, mood, person and number. Both nouns and verbs present a large set of regular inflections and a considerable range of irregular behaviours that change the wordform base. All inflection phenomena are realised by using different suffixes, and some morphophonological rules have to be applied to adjust the orthographic form at the juncture between the base and the inflectional ending.
Derivation
Nouns, adjectives and verbs form the base for deriving new words through complex combinations of prefixes and suffixes added to a base form and through conversion processes. A large number of affixes can be combined in various ways in order to derive new words: for example, the word 'riallineabilità' ('realignability') is formed by adding two prefixes, 'ri-' and 'a-', and two suffixes, '-bile' and '-ità', to the base 'linea'. Deciding the actual order of the derivational processes is not obvious and, in most cases, cannot be established in a clear way. We will discuss this problem in detail in one of the following sections.
Compounding
Compound forms are also quite frequent in Italian. Compounding involves the combination of two base forms to produce a new form: in Italian, various combinations of word classes are acceptable, for example Noun+Noun, 'pesce+cane' ('shark'), Adj.+Adj., 'dolce+amaro' ('bittersweet'), Verb+Verb, 'gira+volta' ('whirl, somersault'), Verb+Noun, 'canta+storie' ('storyteller'), and so on. Not all combinations of word classes, even if attested in some cases, are really productive. Some compounds can also be written by connecting the two forms with a hyphen or even by keeping the two words separate, but this kind of orthographic spelling is not usually handled by a morphological analyser.
Computational tools to handle Italian Morphology
From a computational point of view, there are some resources able to manage the complex morphological information of the Italian language. On the one hand, we have open source or freely available resources, such as: • Morph-it (Zanchetta and Baroni, 2005), an open source lexicon that can be compiled using various packages implementing Finite State Automata (FSA) for two-level morphology (SFST, the Stuttgart Finite State Transducer Tools, and Jan Daciuk's FSA utilities). It globally contains 505,074 wordforms and 35,056 lemmas. The lexicon is quite small and, in order to be used to successfully annotate real texts, it needs to be extended. Moreover, the lexicon is presented as an annotated wordform list, and extending it is a very complex task. Although it uses FSA packages, it does not exploit the possibilities these models provide for combining bases with inflection endings; thus the addition of new lemmas and wordforms requires listing all possible cases.
• TextPro/MorphoPro (Pianta et al., 2008), a freely available package (for research purposes only) implementing various low-level and middle-level tasks useful for NLP. The lexicon used by MorphoPro is composed of about 89,000 lemmas but, being embedded in a closed system, it cannot be extended in any way. The underlying model is based on FSA.
On the other hand, we have some tools, not freely distributed, that implement powerful morphological analysers for Italian: • MAGIC (Battista and Pirrelli, 2000) is a complex platform to analyse and generate Italian wordforms based on a lexicon composed of about 100,000 lemmas. The lexicon is quite large, but it is not available to the research community; ALEP is the underlying formalism used by this resource.
• Getarun (Delmonte, 2009) is a complete package for text analysis. It contains a wide variety of specific tools to perform various NLP tasks (PoS-tagging, parsing, lemmatisation, anaphora resolution, semantic interpretation, discourse modeling, ...). Specifically, the morphological analyser is based on 80,000 roots and large lists of about 100,000 wordforms. Again, the lexicon is quite large but, being a closed application not available to the community, it cannot be profitably used to develop new NLP tools for the Italian language.
The AnIta Morphological Analyser
In this paper we present AnIta, a morphological analyser for Italian based on a large hand-written lexicon and two-level rule-based finite-state technologies. The motivations for this choice can be traced back, on the one hand, to the availability of a large electronic lexicon ready to be converted for such models and, on the other hand, to the aim of obtaining an extremely precise and performant tool able to cover a large part of the wordforms found in real Italian texts (this second requirement drove us to choose a rule-based, manually written system instead of unsupervised machine-learning methods for designing the lexicon). It is quite common, in the computational analysis of morphology, to implement models covering most of the inflectional phenomena involved in the studied language. Implementing the management of derivational and compositional phenomena in the same computational environment is less common, and morphological analysers covering such operations are quite rare (e.g. (Schmid et al., 2004; Tzoukermann and Libermann, 1990)). The implementation of derivational phenomena in Italian within the framework of two-level morphology has been extensively studied by (Carota, 2006); she concludes that "...the continuation classes representing the mutual ordering of the affixes in the word structure are not powerful enough to provide a motivated account of the co-selectional restriction constraining affixal combination. In fact, affix co-selection is sensitive to semantic properties." Considering these results, we decided to implement only the inflectional phenomena of Italian using this framework and to manage the other morphological operations by means of a different annotation scheme. The development of the AnIta morphological analyser is based on the Helsinki Finite-State Transducer package (Lindén et al., 2009). Considering the morphotactic combinations allowed for Italian, we have currently defined about 110,000 lemmas, 21,000 of which without inflection, 51 continuation classes to handle regular and irregular verb conjugations (following the proposal of (Pirrelli and Battista, 1996) for the latter) and 54 continuation classes for noun and adjective declensions. In Italian, clitic pronouns can be attached to the end of some verbal forms and can be combined together to build complex clitic clusters. All these phenomena have been managed by the analyser through specific continuation classes. Nine morphographemic rules handle the transformations between abstract lexical strings and surface strings, mainly for managing the presence of velar and glide sounds at the juncture between the base and the inflectional ending. The Appendix shows a lexicon fragment for three simple lemmas.
The management of inflectional phenomena for Italian is fairly standard and does not require special devices or complex solutions in the implementation. The most interesting feature introduced into AnIta concerns the complex morphological annotation devised to mark the derivational and compounding processes. AnIta is able to produce wordforms in which the various morphemes (base, prefixes and suffixes) are clearly marked and segmented.
A first extension: wordform segmentation
In order to describe derivational phenomena, we devised a first level of annotation able to mark the internal segmentation of wordforms. Each form is associated with a linear structure that can be described by the following regular expression schema:

(PREF>)* BASE (<SUFF)* (-INFLEND)? (~CLITCL)?

where PREF, BASE, SUFF, CLITCL and INFLEND are strings that represent a prefix, a base, a suffix, a clitic cluster and an inflectional ending, respectively. The insertion of this annotation inside a corpus allows for a large number of sophisticated queries using regular expressions, for example:

/dis>.+/ : wordforms prefixed with dis-
/.+<on-[eia]/ : wordforms suffixed with -one (-oni, -ona)
/in>.+<ità/ : wordforms simultaneously prefixed with in- and suffixed with -ità

We followed two simple rules to segment the lemmas for marking derivational phenomena: (a) segment a lexicon entry only if its base is a clearly recognisable independent Italian word (we have thus excluded all the bases taken from Greek, Latin or other languages); (b) keep the affix unchanged, placing all possible variations (geminations, clipping, phonetic readjustments, ...) onto the base. While this first level of morphological annotation allows for a large number of complex queries, it is still unsuitable to represent some fundamental information. First of all, it does not contain any indication of the lexical class of the bases and of the derived forms; secondly, the representation of Italian complex words it provides is not sufficiently detailed and powerful. A more complete annotation schema, able to complement this first-level segmentation, has to be devised in order to capture the complex details of Italian morphological processes.
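As an illustration of such queries, the short Python sketch below runs the three example patterns over a handful of hypothetical annotated tokens; the token strings are invented for the example, and any corpus tool supporting regular expressions would serve the same purpose.

import re

# Invented tokens carrying the segmentation annotation: ">" closes a
# prefix, "<" opens a derivational suffix, "-" marks the inflection ending.
tokens = ["dis>onest-o", "libr<on-e", "in>util<ità", "s>componi<bile"]

patterns = {
    "prefixed with dis-": r"dis>.+",
    "suffixed with -one (-oni, -ona)": r".+<on-[eia]",
    "prefixed with in- and suffixed with -ità": r"in>.+<ità",
}

for label, pattern in patterns.items():
    hits = [t for t in tokens if re.fullmatch(pattern, t)]
    print(label, "->", hits)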
Derivation Graphs for representing morphological processes
Two problems are pressing when annotating real texts. First of all, the derivational processes underlying some wordforms cannot be easily described as single derivational trees; instead, a single derived word can involve different possible interpretations giving rise to different trees; consequently, a one-dimensional model is unsuitable to account for such complex words. Moreover, in order to be able to retrieve all possible morphological combinations, we need to incorporate into the corpus annotation information about the lexical classes both of the bases and of the complex words derived by affixation, and to make it available to the users.
We will present the proposed solution to these problems by discussing an example. Let us consider the complex word s>componi<bile 'decomposable'. This form can be described as the result of two possible derivational paths and, consequently, it can be represented by two different trees (we represent the trees using a parenthesised notation indicating the class of the derived form as a subscript):

[s> [componi <bile]_A ]_A
[ [s> componi]_V <bile ]_A

Choosing one of these options, and consequently discarding the other, is a strong theoretical choice, since it is impossible to determine, on empirical grounds, whether the adjective scomponibile is derived from the adjective componibile by adding the prefix s- or from the verb scomporre by adding the suffix -bile. See also (Mahlow and Piotrowski, 2009) and (Celata and Bertinetto, 2010) for similar discussions of this problem.
The formal structure that naturally extends a tree is the "graph". If we consider each element intervening in a derivational process (the base and the affix(es)) as the nodes of a graph (keeping the information on the nature of the affix, as in the segmentation annotation) and the "derivation relation" as the formal device defining the edges of the graph, we can build the "Derivation Graph" (DG) for the form scomponibile as in Figure 1. The edges have arrows which mark the direction in which a derivation can take place and the class of the derived word. In order to navigate a graph, two rules must be obeyed: • the starting point is always the base, that is, the upper element (highlighted in grey in Figure 1); • each edge must always be travelled in the opposite direction of the arrow.
Therefore it is possible to reconstruct all the possible interpretations of a derivational process by navigating the DG following a simple rule: every path in the graph that starts from the base, is built by reversing the derivation relation (i.e. travelling the edges in the opposite direction of the arrows), and includes all the nodes leads to a possible interpretation of the derivational history of a complex word, and produces a tree describing this process. From a theoretical/computational point of view there are various ways of representing a graph structure, depending on the intended final use of such information. One of these methods consists in listing all the graph edges. Using this representation we can describe an entire graph as a single string formed by the concatenation of its edges. For example, the DG in Figure 1 can be expressed through the following list of its edges:

s_V>componi_V#componi_V<bile_A#s_A>bile_A#s_A<bile_A

where the subscripts mark the class of the derived word and the character '#' acts as a separator between the edges. Once each wordform in the corpus has been annotated with the string expressing its DG, the construction of simple but extremely powerful queries is possible with any corpus management program permitting the use of regular expressions in corpus queries, such as the IMS/Corpus Workbench. Some examples of these queries are given below:

/.+_V<bile_A/ : all the instances/concordances in which the suffix '-bile' forms an adjective from a verb;
/s_A>.+_A/ : all the instances/concordances in which the prefix 's-' forms an adjective from another adjective;
/.+_V<.+_A/ : all the instances/concordances in which a suffix forms an adjective from a verb.

In Italian, it is common for a single affix to derive words belonging to different lexical classes. It is the case, for example, of the word [oper_N<aio]_{N/A}. In order to take these cases into account, we propose to encode all the possible combinations of the four major lexical classes (N, A, V, D(=adverb)) by using the simple encoding schema depicted in Table 1. So, a problematic word like operaio can be associated with the structure [oper_N<aio]_C. In this way, operaio will be included in the results of queries aimed at extracting derived nouns as well as derived adjectives from the corpus. The class encoding schema we propose covers all the combinations that are logically possible, although most of them are not attested in Italian. Again, a system based on regular expression searches will help find all the relevant combinations while querying the corpus. Table 2 shows some examples of complete analyses performed by AnIta. The second column (Morphological analysis) shows the inflectional analysis of the wordform, the third column (Segmentation) depicts the internal wordform segmentation, and the fourth column (Derivational Process) contains the DG of each wordform, when a derivation process is present. Please refer to (Grandi et al., 2011) for a complete description of the DG, the annotation schema and the theoretical grounds and consequences involved in this model. Table 2: Some examples of AnIta analyses. Various special characters univocally mark the different components in the wordform segmentation ('-' for inflectional suffixes, '<' for derivational suffixes, '>' for prefixes, '~' for clitic clusters and '+' for compounds).
Table 1: Encoding schema for the possible combinations of the four major lexical classes (column headers: Code, Combination).
The wordform segmentation and the DG are not implemented directly in the main morphological analyser; they are implemented as secondary two-level transducers that take the output of the first analyser as input and produce the proper wordform segmentation and DG. The information about the internal segmentation has been inserted directly into the lemma string, and multiple affixation is handled by inserting multiple symbols (e.g. ri>ab>bassa<ment-o). With regard to the DG, we inserted directly into the annotation the string representing the edge list that results from the analysis process. A subset of the lexicon, corresponding to the De Mauro (2000) definition of "base dictionary" (divided into three further classes, 'Fundamental', 'High Use' and 'High Availability'), will be made freely available to the research community.
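To make the edge-list representation and the navigation rule concrete, the following minimal Python sketch enumerates the possible derivation trees for s>componi<bile. The encoding is simplified and hypothetical: every attachment order is accepted here for brevity, whereas a full implementation would use the DG edges (with their class labels) to license or block each attachment step.

from itertools import permutations

# Simplified, hypothetical encoding of the affixation events recorded in
# the DG of "s>componi<bile": the base is the verb "componi", and each
# affix may attach before or after the other one.
base = "componi"
affixes = [("s", ">"), ("bile", "<")]   # (affix, prefix/suffix marker)

def interpretations(base, affixes):
    # Each ordering of the attachments corresponds to one path through
    # the DG starting from the base, i.e. one derivation tree.
    trees = []
    for order in permutations(affixes):
        form = base
        for affix, op in order:
            form = f"{affix}>[{form}]" if op == ">" else f"[{form}]<{affix}"
        trees.append(form)
    return trees

for tree in interpretations(base, affixes):
    print(tree)
# [s>[componi]]<bile   -- suffixation of the verb "scomporre"
# s>[[componi]<bile]   -- prefixation of the adjective "componibile"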
Evaluation
In the literature, various possibilities have been proposed for evaluating morphological analysers: (a) compare the results produced by the morphological analyser with a manually checked set of data (Gold Standard), as in (Faaß, 2011; Mahlow and Piotrowski, 2009; Sawalha and Atwell, 2008). This approach requires, on the one hand, the availability, or production, of an expensive gold standard which, for this reason, is usually quite small. On the other hand, it allows the results to be evaluated on a fine-grained basis, checking coverage and also classification accuracy among the different morphological possibilities; (b) compare the analyser coverage against well-attested lexicons/dictionaries, as in (Zanchetta and Baroni, 2005). This approach requires the availability of a large electronic lexical resource and tests the analyser on a standard, well-attested lexicon, leaving aside most of the terminology found in real texts. Moreover, the analyser can only be tested against the base form of each lemma, and the correct recognition behaviour for each wordform cannot be verified; (c) the third possibility involves the computation of the analyser coverage over a large corpus, as in (Cöltekin, 2010; Keselj and Sipka, 2008; Schmid et al., 2004; Yablonsky, 1999). Testing the morphological analyser on authentic texts gives a good measure of its coverage performance when working on real data.
First evaluation step
As a first step, we chose to evaluate AnIta using the third method. We followed the same procedure suggested in (Schmid et al., 2004) for evaluating SMOR, a morphological analyser for German. We extracted the wordform frequency list from CORIS and computed the total number of tokens identified by the analyser, weighting each wordform type by its frequency in the corpus (multiplying each analyser output by the wordform frequency is simply a trick to speed up the process; it does not change the final results). For testing, we considered only wordforms satisfying the regular expression /[a-zA-Z]+'?/, as the purpose of this evaluation is to test the analyser on real words, excluding all non-words (numbers, codes, acronyms, ...), which are quite frequent in real texts. The metric used for the evaluation of the AnIta coverage is the Word Error Rate (WER), as suggested in (De Pauw and de Schryver, 2008), consisting in the ratio between the number of tokens not recognised by the analyser and the total number of tokens analysed (those satisfying the regular expression described before). It is worth noting that in this experiment WER, as defined before, is equivalent to the complement of the Recall obtained by the system, defined as the number of true positives (wordforms that were to be analysed and that were analysed by the system) divided by the sum of true positives and false negatives (wordforms that were to be analysed but were not analysed by the system) (Faaß, 2011); in other words, WER = 1 - Recall.
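A minimal Python sketch of this computation, assuming a hypothetical frequency list and a stand-in recogniser predicate in place of the real analyser lookup:

import re

TOKEN_RE = re.compile(r"[a-zA-Z]+'?")

def word_error_rate(freq_list, is_recognised):
    # freq_list: (wordform, corpus frequency) pairs; is_recognised is a
    # stand-in predicate for the analyser lookup. Weighting each type by
    # its frequency reproduces the token-level counts without rescanning
    # the corpus.
    total = recognised = 0
    for form, freq in freq_list:
        if not TOKEN_RE.fullmatch(form):   # exclude non-words
            continue
        total += freq
        if is_recognised(form):
            recognised += freq
    recall = recognised / total
    return 1.0 - recall                    # WER = 1 - Recall

# invented toy data: "R2D2" is filtered out as a non-word
print(word_error_rate([("libri", 120), ("xyzzy", 3), ("R2D2", 7)],
                      lambda w: w == "libri"))   # 3 / 123, about 0.024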
Table 3: Statistics for the evaluation data extracted from CORIS.
Number of CORIS tokens: 110,303,560
Number of analysed tokens: 86,297,311
Number of analysed types: 592,500
Thanks to the availability of Morph-it and TextPro, we were able to compare AnIta's performance against these two other commonly used tools for the Italian language. See Table 3 for a complete overview of the experiment figures. Table 4 shows the results for all three tested morphological analysers. The AnIta results are presented considering the two options with (AnIta-PN) and without (AnIta) the insertion of a proper-noun list (3,461 entries comprising person names, cities, countries, etc.) into the analyser lexicon. In both cases AnIta outperforms the other systems, obtaining a WER that is significantly lower than the others'. The insertion of a proper-noun list into the lexicon proved to increase the analyser performance quite significantly.
A second evaluation step
The second evaluation step is aimed at measuring the precision of the AnIta Morphological Analyser. For this second experiment we chose to implement an evaluation scheme of type (a): the wordforms contained in the Gold Standard corpus used in the EVALITA 2011 Lemmatisation Task (Tamburini, 2012a), manually annotated both with PoS-tags and correct disambiguated lemmas, were provided to the systems and analysed, verifying whether the correct lemma extracted from the Gold Standard is one of the options provided by the tested system. The actual Precision score is then computed as the ratio between the number of wordforms for which the correct lemma is among the analyser's proposed solutions and the total number of recognised wordforms (those for which there is at least one possible solution). To provide a complete picture, we also introduced a measure of ambiguity, computed as the number of wordforms having more than one lemma as a possible solution divided by the number of wordforms recognised by the system. Table 5 depicts the results of this second evaluation step. Unfortunately, for this kind of evaluation, we can compare AnIta's precision only with Morph-it, because TextPro also implements the disambiguation step, so the results are not comparable in this case. The Precision exhibited by AnIta is quite high, both in absolute terms and compared with the other system's performance.
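For concreteness, the two measures used here can be sketched in Python as follows, with an invented toy analyser standing in for the real systems:

def precision_and_ambiguity(gold, analyse):
    # gold: (wordform, correct lemma) pairs from the Gold Standard;
    # analyse returns the set of candidate lemmas (empty if unrecognised).
    recognised = correct = ambiguous = 0
    for form, lemma in gold:
        lemmas = analyse(form)
        if not lemmas:
            continue                  # unrecognised forms are excluded
        recognised += 1
        correct += lemma in lemmas
        ambiguous += len(lemmas) > 1
    return correct / recognised, ambiguous / recognised

# invented toy analyser and gold data
toy = {"amo": {"amare", "amo"}, "libri": {"libro"}}
gold = [("amo", "amare"), ("libri", "libro")]
print(precision_and_ambiguity(gold, lambda w: toy.get(w, set())))  # (1.0, 0.5)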
Using the AnIta morphological analyser as a fundamental resource, we built a new Part-of-Speech tagger, derived from the one presented in (Tamburini, 2007), and a new lemmatiser program able to resolve the ambiguity described before and choose the correct lemma among the possibilities provided by the AnIta Morphological Analyser for an ambiguous wordform (Tamburini, 2012b). This Lemmatiser system participated in the Lemmatisation Task of the EVALITA 2011 evaluation campaign (http://www.evalita.it/), obtaining very good accuracy scores. These results are mainly due to AnIta's large lexicon, which allows for a high coverage of Italian texts. | 2015-06-10T20:56:49.000Z | 2012-05-01T00:00:00.000 | {
"year": 2012,
"sha1": "8a54d8d44344e4db6a17bcbbb329e229c707f75b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "8a54d8d44344e4db6a17bcbbb329e229c707f75b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
234708568 | pes2o/s2orc | v3-fos-license | Niaoduqing Granules Inhibits TGF-β1-induced Epithelial-Mesenchymal Transition in Human Renal Tubular Epithelial HK-2 Cells
Background: Chronic renal failure (CRF) is a worldwide public health burden. Niaoduqing granules (NDQ) are widely used for CRF treatment in China. However, the underlying mechanism of NDQ is not fully studied. This study aimed to investigate whether NDQ ameliorates CRF by inhibiting TGF-β1-induced EMT in human renal tubular epithelial HK-2 cells. Methods: MTT and colony formation assays were used to investigate the cytotoxicity of NDQ in HK-2 cells. Morphological changes of HK-2 cells after TGF-β1 and/or NDQ treatment were observed under a microscope. Wound-healing, migration and invasion assays were performed to determine cell movement, migratory and invasive abilities, respectively. Western blot analysis was carried out to examine the protein levels of TGF-β type I receptor (TβRI) and EMT-associated factors. Fluorescence confocal microscopy was applied to observe the organization of F-actin. Results: NDQ suppressed TβRI expression dose-dependently. NDQ inhibited TGF-β1-stimulated EMT in HK-2 cells, supported by the evidence that NDQ prevented morphology changes, attenuated cell migration and invasion, downregulated EMT factors and reorganized F-actin distribution in TGF-β1-stimulated HK-2 cells. Conclusions: NDQ attenuates chronic renal failure, which may be associated with inhibition of TβRI expression and of the EMT process.
reduce tubulointerstitial fibrosis and relieve CKD in rats, effects that were related to the modulation of TGF-β and erythropoietin signaling pathways [6,8,9]. However, it is still unclear whether NDQ attenuates CRF by inhibiting TGF-β1-induced EMT.
In this study, we first demonstrated that NDQ downregulated the expression level of TGF-β receptor I (TβRI) in human renal tubular epithelial HK-2 cells, inhibited TGF-β1-induced EMT and reorganized the distribution of filamentous actin (F-actin), which may account for the treatment of CRF by NDQ. HK-2 cells were cultured in RPMI-1640 with 10% fetal bovine serum (FBS, Sijiqing, Hangzhou, China) and 1% penicillin-streptomycin (Gibco, USA) in a humidified incubator with 5% CO2 at 37 °C.
Cell viability assay
The effect of NDQ on HK-2 cell viability was determined by MTT assay as previously described [10]. Cells (10^4/well) were seeded in 96-well plates for 24 h and then exposed to different concentrations of NDQ for 24 h, 48 h or 72 h. After treatment, the supernatant was discarded, 30 μl of MTT solution (5 mg/ml) was added, and the plates were incubated for another 4 h at 37 °C. The supernatant was then discarded, the purple formazan crystals were dissolved in 100 μl of DMSO, and the absorbance was measured at 570 nm with a microplate reader (Multiskan FC, Thermo Scientific, USA).
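As a rough illustration of the readout arithmetic, viability is conventionally expressed relative to the untreated control; the absorbance values in the Python sketch below are invented, and the exact normalisation used in this study is not specified, so this is only an assumption.

# Invented 570-nm absorbance readings; viability expressed as a
# percentage of the untreated control (assumed normalisation).
control = [0.81, 0.79, 0.83]
treated = [0.42, 0.45, 0.40]

mean = lambda xs: sum(xs) / len(xs)
viability = 100 * mean(treated) / mean(control)
print(f"viability = {viability:.1f}% of control")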
Colony formation assay
HK-2 cells (300/well) were seeded in 6-well plates for 24 h and then treated with NDQ at different concentrations. After incubation for 24 h, cells were washed with phosphate-buffered saline (PBS) and cultured in fresh medium for 10 days. Cells were then fixed in 75% alcohol at 4 °C and stained with Giemsa dye [11].
Cell morphological observation
HK-2 cells (10^5/well) were seeded in 6-well plates and incubated for 24 h. Cells were then serum-starved for 24 h, followed by NDQ treatment for 24 h with or without TGF-β1. After that, cells were washed with PBS and morphological changes were observed under a microscope (IX 53, Olympus, Tokyo, Japan).
Cell migration and invasion experiment
For the wound-healing experiment, HK-2 cells (3×10^5/well) were seeded in 6-well plates for 24 h and serum-starved for another 24 h. Cells were then scratched in a straight line with 200-μl pipette tips and treated with different concentrations of NDQ in the presence or absence of TGF-β1. Images of HK-2 cells were acquired with a microscope (IX 53, Olympus, Tokyo, Japan) at 0 h, 12 h and 24 h.
In the Transwell migration assay, HK-2 cells (10^5/well) were seeded in the Transwell chamber with RPMI-1640 medium for 24 h, and 400 μl of RPMI-1640 medium containing 10% FBS was added to the bottom chamber. Cells were then incubated with NDQ for 24 h under TGF-β1-stimulated or unstimulated conditions, and the chambers were fixed with 75% alcohol before being stained with Giemsa dye. After that, cells remaining on the upper surface of the membrane were removed by wiping, and images of migrated cells were obtained under a microscope (IX 53, Olympus, Tokyo, Japan). The invasion assay followed the same method as the Transwell migration assay, the only difference being that Matrigel was applied in the Transwell chamber before the cells were seeded.
Western blot analysis
HK-2 cells (3×10^5/dish) were seeded in 60 mm dishes for 24 h and serum-starved for another 24 h. After treatment with different concentrations of NDQ for 48 h with or without TGF-β1 stimulation, cells were harvested, washed with cold PBS, and then lysed for 15 min at 4 °C with RIPA buffer (0.1 mM phenylmethanesulfonyl fluoride, 0.1 mM sodium orthovanadate, 0.1 mM dithiothreitol and phosphatase inhibitor). After centrifugation at 13,500 rpm for 15 min, supernatants were collected as total protein. Protein concentrations were determined with a BCA protein assay kit. Western blotting was performed as previously described [12]. Protein levels were quantified with ImageJ 1.4.3 (National Institutes of Health, USA).
F-actin fluorescence confocal microscopy
HK-2 cells (3×10^5/dish) were seeded in confocal dishes for 24 h. After incubation in serum-free medium for 24 h, cells were exposed to NDQ for another 24 h in the presence or absence of TGF-β1. Cells were then fixed with 4% paraformaldehyde and permeabilized with 0.5% Triton X-100 in PBS. After blocking with 5% bovine serum albumin (BSA) for 15 min at room temperature (RT), cells were incubated with 100 μl of rhodamine phalloidin (70 nM) for 1 h at RT. Subsequently, cells were washed and counterstained with 100 μl of DAPI. Fluorescence was observed by confocal microscopy (LSM800, Carl Zeiss, Oberkochen, Germany).
Statistical analysis
All data were expressed as means ± SEM and analyzed with GraphPad Prism 5.0 software (GraphPad Software Inc., USA). Tukey's test was used for multiple comparisons. Values were considered statistically significant when P < 0.05.
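The analysis in this study was run in GraphPad Prism; as an illustration only, an equivalent Tukey multiple-comparison step can be sketched in Python with statsmodels (the values and group labels below are invented):

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented readings for three hypothetical groups.
values = np.array([0.92, 0.95, 0.90,    # control
                   0.91, 0.89, 0.93,    # NDQ, low dose
                   0.60, 0.55, 0.58])   # NDQ, high dose
groups = ["control"] * 3 + ["NDQ low"] * 3 + ["NDQ high"] * 3

# alpha=0.05 matches the P < 0.05 significance threshold used here
result = pairwise_tukeyhsd(values, groups, alpha=0.05)
print(result.summary())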
Cytotoxicity of NDQ in HK-2 cells
To determine the cytotoxicity of NDQ in HK-2 cells, MTT and colony formation assays were used.
The results showed that NDQ inhibited the proliferation of HK-2 cells in a dose- and time-dependent manner (Fig. 1A). However, NDQ exerted no significant cytotoxicity in HK-2 cells at dosages below 5 mg/ml (Fig. 1B), and data from the colony formation assay further confirmed this result (Fig. 1C). We therefore used NDQ at a dose of 5 mg/ml for further study.
NDQ downregulates the expression level of TβRI in HK-2 cells
TGF-β1 signaling is considered to have a pivotal role in renal fibrosis and the EMT process [5]. It is reported that the activation of TGF-β1 signaling is initiated by the binding of the TGF-β1 ligand to TβRII and TβRI [13]. Therefore, we wondered whether NDQ affects the expression of TβRI. HK-2 cells were exposed to the indicated concentrations of NDQ for 48 h, followed by Western blot analysis. As shown in Fig. 2, NDQ downregulated the expression level of TβRI in a dose-dependent manner.
NDQ attenuates the morphological changes in TGF-β1-stimulated HK-2 cells
TβRI is critical in TGF-β1 signaling, which mediates downstream signals and leads to EMT [14]. Based on the evidence that NDQ downregulated the expression level of TβRI, we suspected that NDQ exerts inhibitory effects on TGF-β1 signaling and subsequently suppresses EMT in renal epithelial cells. TGF-β1, a well-known inducer of EMT, was added and incubated with HK-2 cells for 24 h after 24 h of serum starvation to establish a model of EMT in HK-2 cells. The morphology of HK-2 cells changed, becoming slender and fibroblast-like after stimulation with TGF-β1 (Figure 3, red arrow), while NDQ prevented cells from undergoing these morphological changes.
NDQ inhibits TGF-β1-induced EMT in HK-2 cells
To verify the effect of NDQ on TGF-β1-induced EMT, we determined the expression levels of EMT-associated proteins, including N-cadherin, vimentin, slug and snail. Our data showed that TGF-β1 upregulated the expression levels of N-cadherin, vimentin, slug and snail, while NDQ prevented the induction of EMT by TGF-β1, downregulating the N-cadherin, vimentin, slug and snail expression levels (Figure 4).
NDQ inhibits cell movement, migration and invasion in TGF-β1-stimulated HK-2 cells
When cells undergo EMT, cell adherence is attenuated and cell mobility is enhanced [15]. To investigate the effects of NDQ on cell movement in HK-2 cells, a wound-healing assay was performed. We found that TGF-β1 promoted cell movement; nevertheless, NDQ inhibited the TGF-β1-induced cell movement in HK-2 cells (Figure 5A). Further evidence was provided by the Transwell migration and invasion assays. As shown in Figure 5B, TGF-β1 remarkably enhanced the migratory ability of HK-2 cells, whereas NDQ treatment reduced cell migration. Similarly, the invasion assay demonstrated that NDQ reversed the cell invasion induced by TGF-β1 in HK-2 cells (Figure 5C).
NDQ reorganizes the arrangement of F-actin in TGF-β1-stimulated HK-2 cells
F-actin, an important component of the cytoskeleton, is involved in cell migration and invasion [16]. We subsequently examined the arrangement of F-actin following NDQ treatment. Rhodamine phalloidin was applied to visualize the distribution of F-actin in HK-2 cells. As shown in Figure 6, TGF-β1 increased the intensity and longitudinal distribution of F-actin. Compared to TGF-β1-stimulated cells, NDQ treatment decreased the intensity and reorganized the distribution of F-actin in HK-2 cells. These data further validated that NDQ attenuated TGF-β1-induced EMT in HK-2 cells.
Discussion
As a burden on global health, CRF is mainly characterized by tubulointerstitial fibrosis, regardless of the initial etiology of the renal disease. Hence, renal fibrosis is a universal feature of CRF and contributes dramatically to end-stage renal failure [17]. There are two main therapeutic strategies for CRF to improve renal function: dialysis and renal transplantation. Nevertheless, these therapies lead to an increased risk of cardiovascular problems [18]. Clinical observation showed that NDQ dramatically alleviates renal dysfunction and tubulointerstitial fibrosis in CKD patients at stage III to stage IV [19]. Previous studies on CRF treatment tended to choose NDQ as a positive control medicine [20][21][22][23], indicating its established efficacy. In this study, we demonstrate that NDQ ameliorates CRF through downregulation of the TβRI expression level, inhibition of the EMT process, and rearrangement of the F-actin organization.
Renal fibrosis is the hallmark by which CKD develops into CRF. Previous studies have reported that EMT is a phenotypic conversion that plays a critical role in renal fibrosis and contributes to the progression of CRF [24]. TGF-β1 has a predominant role in the progression of CRF. Accumulating evidence has pointed out that TGF-β1 activates its downstream effectors to contribute to EMT and renal fibrosis. TGF-β receptors, including TβRI and TβRII, participate in TGF-β signaling. At the cell membrane, TGF-β1, serving as a ligand, first binds to TβRII, and subsequently recruits TβRI and induces its activation. After receptor binding, TGF-β1 exerts its cellular functions and promotes the EMT process [25]. Hence, the activation of TβRI is pivotal in TGF-β1 signaling. It was reported that TβRI downregulation led to EMT suppression in TGF-β-stimulated nasal epithelial cells [14]. Previous research also suggested that suppressing TGF-β1/TGF-β type I receptor/Smads signaling activation in vivo and in vitro inhibits EMT and collagen deposition, thereby alleviating renal fibrosis; the inhibition of renal fibrosis is regarded as an effective therapeutic approach for CRF [26]. In this study, we first demonstrated that NDQ inhibits the expression level of TβRI in human renal tubular epithelial HK-2 cells, while a previous study revealed that NDQ downregulated the expression level of TβRI in CRF rats [9]. We will further investigate whether NDQ inhibits membrane TβRI levels, and the effects of NDQ on TβRI transcription, translation, or post-translational modification.
The occurrence of EMT leads to the loss of epithelial markers in renal tubular cells, the development of invasive and metastatic properties in cells, and an increase of mesenchymal features in morphology, such as a spindle-cell-like morphology. Moreover, during the EMT process, the expression of epithelial markers, including E-cadherin, is downregulated, whereas mesenchymal markers, such as N-cadherin, Snail, Slug and Vimentin, are increased. N-cadherin is one of the main contributors to reduced cell-cell adhesion in epithelial tissues, which eventually leads to cell de-differentiation, tissue disorganization and increased cell invasion capacity and, ultimately, to metastasis. Vimentin is an intermediate filament protein and acts as a cytoskeletal element; it is also known as a mesenchymal marker. Besides, EMT-inducing transcription factors include Snail and Slug. In particular, Snail is associated with the initial cell-migratory phenotype and is regarded as an early marker of EMT [27,28]. In our research, TGF-β1 altered the morphology of HK-2 cells, while NDQ prevented cells from these changes. It was also found that NDQ attenuated cell migratory and invasive abilities and inhibited the TGF-β1-induced upregulation of EMT markers including N-cadherin, Snail, Slug and Vimentin in HK-2 cells, indicating that NDQ inhibited TGF-β1-induced EMT in HK-2 cells. These results are in agreement with a previous study by Lu et al. [29]. However, Lu et al. used rat serum containing NDQ for their mechanistic study, whereas our research is the first to use NDQ directly in human renal tubular epithelial cells and to demonstrate its inhibitory effects on EMT.
For years, EMT was considered a binary process in which cells individually detach from the tissue. However, recent studies demonstrate that epithelial and mesenchymal features can co-exist within cell clusters in many cases, which is called the hybrid E/M phenotype, and this phenotype presents more migratory and invasive properties [30]. This concept indicates an alternative EMT process in NDQ intervention. In this regard, more experiments will be needed to explore the extent of NDQ-induced EMT suppression. Recent research also suggested that transcription factors like GRHL2 or OVOL2 may be crucial EMT drivers [31,32]; whether NDQ inhibits the expression levels of these other regulators is unknown and deserves further exploration.
F-actin is an important protein of muscle thin filaments, which also constitute part of the eukaryotic cytoskeleton.
Remodeling of F-actin facilitates cell migration and invasion [16,33]. In the final stage of EMT, cells reorganize their cortical actin cytoskeleton to enable dynamic cell elongation and directional motility [34]. Thus, F-actin reorganization directly controls cell migration/invasion. Since EMT is characterized by increased cell migration/invasion and actin stress fiber formation, F-actin plays an important role in renal fibrosis. Our results showed that NDQ significantly reduced F-actin intensity and remodeled its organization in HK-2 cells. These findings strongly suggest that NDQ suppresses CRF by inhibiting cell migration and invasion via reorganization of the F-actin arrangement.
It should be pointed out that once TβRI is activated, the canonical Smad signaling pathway and non-canonical (non-Smad) pathways are generally activated to regulate gene transcription and perform physiological functions. Although the inhibition of TβRI protein expression by NDQ was demonstrated, the specific pathways that it regulates need further exploration [35][36][37].
Conclusion
In conclusion, we first demonstrated that the renoprotective effect of NDQ is partially attributable to the downregulation of TβRI, which accounts for the inhibition of TGF-β1-induced EMT in HK-2 cells (Fig. 7). This study provides a pharmacological basis for the clinical use of NDQ in the treatment of CRF.
Availability of data and materials
The data used to support the findings of this study are available from the corresponding author upon request.
Competing interests
No conflicts of interest were declared by the authors.
Figure 3: Morphology changes in HK-2 cells. Cells were serum-starved for 24 h and then exposed to vehicle or NDQ for 24 h with or without TGF-β1 stimulation. Fibroblast-like morphology changes were observed in TGF-β1-stimulated HK-2 cells (red arrow), and these changes were attenuated after NDQ treatment.
Figure 6: NDQ reorganized the distribution of F-actin. HK-2 cells were serum-starved for 24 h and then exposed to vehicle or NDQ (5 mg/ml) in the presence or absence of TGF-β1 for another 24 h; the distribution of F-actin was visualized by confocal microscopy after rhodamine phalloidin staining. Left: F-actin; middle: DAPI; right: merge.
Figure 7
The underlying mechanism of NDQ on preventing epithelial cells from TGF-β1-induced EMT. → refers to a promoting effect and ⊥ refers to a blockage effect.
Supplementary Files
This is a list of supplementary files associated with this preprint. | 2020-06-18T09:06:10.545Z | 2020-06-12T00:00:00.000 | {
"year": 2022,
"sha1": "2663e7e852c4b614cda43a48e1b53d81d32ab939",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-34640/v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "fbd26115f46ccae1c25ceae2027cfdb73becf5c3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
234275343 | pes2o/s2orc | v3-fos-license | Study on Strategy in University Laboratory Class Teaching
Laboratory teaching is a critical way to ensure the effective acquisition of techniques in engineering learning. Laboratory teaching not only contributes to improving course quality but also helps enrich comprehensive engineering application ability. However, there are some typical problems in current university laboratory teaching, such as rigid and isolated course design, outdated contents and materials, and a failure to encourage innovation and real-world problem solving. To overcome these challenges, a three-step teaching reconstruction strategy is proposed to enhance university educators' teaching effectiveness, comprising introducing new laboratory teaching methods, updating contents and materials, and organizing innovative and multi-disciplinary learning. Through efforts made by university teachers, students, and industry partners, the goal can be achieved by following the proposed strategy.
INTRODUCTION
For decades, science and engineering educators have stated that laboratory teaching and practice enhance students' understanding of the natural world [1][2]. Over the years, many have argued that engineering courses are meaningful to students in the university laboratory only when they include worthwhile practical experiments [3].
Laboratory teaching is a critical way to ensure the effective acquisition of techniques in engineering learning. Laboratory teaching not only contributes to improving course quality but also helps enrich comprehensive engineering application ability. Moreover, the skills and techniques learned in the laboratory are key qualifications for students seeking jobs in industry once they graduate from their university studies.
Based on the study and analysis of some typical problems in current university laboratory teaching, this paper proposes a practical strategy to improve laboratory teaching and learning for contemporary university students, in order to help them master real-world problem solving and better adapt to the rapidly developing technology and industrial environment in China.
Rigid and Isolated Course Design
Laboratory teaching is one of the most popular approaches for getting students actively involved in the learning process of a subject, especially in science and engineering disciplines. It is also an effective educational method for universities to train applied engineering students. However, in many courses, the teaching content is still mainly aimed at theoretical knowledge rather than practical skills training, and it is usually organized around the basic knowledge of one particular subject. As a result, the laboratory experiments are rigidly designed according to the course content and its basic theoretical teaching aims. Focusing on theoretical knowledge can lead to a lack of the implementation skills that are crucial for future career success, and students can hardly improve their practical skills through such laboratory classes.
Moreover, focusing on basic textbook knowledge ignores industrial problems, which makes university courses not only isolated from the real world but also isolated from each other, because, in today's industry, a problem often involves knowledge of several disciplines and topics [4]. For example, in the course Software Engineering, students are asked to master the software life-cycle and learn a range of diagrams, like the use-case diagram, the data-flow diagram, etc. However, without real industry projects and experienced engineers, the instructor can hardly make the class engaging. Students usually feel bored during the learning and perform poorly in the laboratory classes when they write Software Requirements Specifications. In order to make this course interesting, the teacher should design the lectures around real-world projects, which involve knowledge from different disciplines. Otherwise, the course will become rigid and dry.
Rigid and isolated course design makes it very hard for students to solve real-world problems using the knowledge they have learned. This is very harmful to engineering education and to the country's industry.
Outdated Contents and Materials
Continuous change and rapid progress are the main challenges for today's engineering companies. Accordingly, engineering courses should be updated as soon as possible [5]. Unfortunately, university educators are not tightly connected with industry; therefore, laboratory teaching contents and materials are often outdated. For instance, the course Data Science requires students to know how to acquire useful information from raw data by using a number of techniques. However, universities sometimes use outdated or impractical tools for teaching, like Matlab or Weka; they are good for demonstrating the basic ideas, but they are rarely used by high-tech companies. If students have not learned the popular tools in their field, it will be difficult for them to develop in-demand skills and find good jobs. Moreover, neural networks and deep learning are hot topics these days, but they are not yet included in many textbooks. This also leaves students outdated.
Additionally, it is hard to force professors to catch up with the latest industrial concepts and frontier engineering techniques, because professors usually focus on very specific research problems and do not need to work in industry as engineers. Thus, it is hard for them to pay attention to what is going on in industry. If the professors are not familiar with popular techniques and tools, then the courses will be outdated. As a result, students are often passive in laboratory experiments, which makes the problem even worse.
Not Encouraging Innovation and Real-World Problem Solving
Existing laboratory teaching is usually specific-topic-oriented. In other words, one experiment or one laboratory class is commonly designed for one particular course. For example, in the course Data Science, there is a chapter on Data Visualization, which introduces a range of different plots and tools for visualizing data. Consequently, in its laboratory class, students are asked to master the process of generating these plots. This means the laboratory class is designed to guide students to draw diagrams from fixed datasets rather than to develop their analytical ability and thinking skills.
It is obvious that if university laboratory teaching keeps its current state, it will be very hard to meet the expectations placed on it, namely training future engineers who are able to analyze newly obtained data, study real-world problems, and address constantly changing conditions.
STRATEGIES FOR THE RECONSTRUCTION OF UNIVERSITY LABORATORY TEACHING
Today, both industry and technology are progressing rapidly. Together with the abstract and complex nature of engineering courses, not to mention the mathematics involved, it is becoming increasingly hard for engineering education to effectively achieve its goal. As a result, it is difficult for educators and students to obtain good outcomes.
To overcome these challenges and problems, in this section we propose a three-step reconstruction strategy for university laboratory teaching to enhance university educators' teaching effectiveness. The proposed strategy focuses on addressing the three previously mentioned problems and on allowing students to effectively learn and master the skills and techniques required by the course. Also, by following the three-step reconstruction of laboratory teaching, it becomes possible for students to reach a higher level of understanding of the course as well as the discipline during their learning.
Introducing New Laboratory Teaching Methods
Traditional laboratory teaching is more like a one-way-input approach. During the teaching, the teacher demonstrates in front of the class, and students are expected to learn by watching what the teacher is doing or by reading an instruction guide. Such a form of teaching usually focuses on single theoretical ideas. In other words, the instructor is meant to explain only one idea at a time, so that the students can hardly put the learned ideas together and reach a higher-level
understanding of the subject. Also, it is hard for students to gain practical skills while taking traditional laboratory classes like this.
Therefore, there is a need to redesign traditional laboratory classes to allow students to learn by various means. Project-based [6] or problem-based learning [7] is one of the main successful approaches broadly used in computing science courses. However, it can be insufficient when tackling practical problems that implicitly require many functions and timely feedback. For instance, in the course Software Engineering, if the teacher uses the project-based approach, it will be much easier for students to follow the class compared with the traditional teaching method, because the project-based approach gives students a clear goal at the very beginning. Having a clear goal at the beginning allows students to understand the big picture and each step of the laboratory teaching. But if the project is based on a real-world problem, then a domain expert is needed; otherwise, the students can hardly investigate the problem deeply and insightfully. Moreover, in software engineering, if there is a lack of domain experts, the software engineers will most likely have problems with the confirmation of requirements, because the engineers know neither what the real requirements are nor what the task's procedures are. Hence, even if teachers adopt project-based or problem-based methods, the teaching can still be insufficient or unsuccessful if no experienced engineers and domain experts are involved in the teaching team. Therefore, in order to succeed with new laboratory teaching methods, experienced experts are needed to co-design the materials and experiments in order to ensure that students can learn practical skills during the course.
Updating Contents and Materials
University laboratory teaching contents and materials are usually outdated, which makes it hard not only to attract students' interest but also to achieve good teaching outcomes. Furthermore, outdated contents and materials can cause the students to be outdated too, which will have a big impact on them when they start looking for jobs. Also, if most universities are teaching outdated contents, the industry will be affected too.
Thus, it is urgent to find a way to motivate university teachers to improve and update their course contents and materials every year. Industry-oriented cooperation is a good way, and it can be beneficial for both universities and companies. By having cooperative connections with companies, the university can gain a better understanding of the real demands of industry. University teachers can visit the companies, hold meetings with real-world problem solvers in the company, and learn from them. In the meantime, companies can send their engineers to universities to co-design the courses and prepare the materials according to their real-world problems, so that the industry can in return get better-prepared students who are able to solve its problems.
Organizing Innovative and Multi-Discipline Learning
Real-world problem solving often involves knowledge of multiple disciplines and requires the ability to analyze novel data under different conditions.
In order to equip students with thinking skills and help them connect multi-disciplinary knowledge to solve real-world problems, traditional laboratory teaching needs to change from being single-topic-focused towards being multi-discipline-integrated. Also, laboratory classes should be organized as problem-oriented rather than as traditionally content-oriented, where students learn from reading lists [8][9]. If the university can build good cooperative connections with companies, the teachers will learn about real-world problems, and multi-discipline-integrated courses will become possible.
Moreover, to meet the expectations for future engineers, the teaching should be dedicated to strengthening not only the students' skills in the analysis and evaluation of complex problems but also the teamwork skills needed to solve multi-disciplinary problems [10].
CONCLUSION
This paper proposes a strategy that addresses existing problems in a targeted way. On the one hand, university educators should improve their course contents, introduce new laboratory teaching methods, and design up-to-date materials. On the other hand, establishing cooperative connections with industry is beneficial for both students and companies, as it allows effective communication between educators and employers. Furthermore, laboratory teaching should also change toward improving students' real-world problem-solving skills.
In conclusion, improving current university engineering laboratory teaching will be a long-term effort. However, the change is necessary and worthwhile. Most importantly, only through joint efforts by university teachers, students, and industry partners can the goal be achieved by following the proposed strategy. | 2021-05-11T00:03:04.911Z | 2021-01-23T00:00:00.000 | {
"year": 2021,
"sha1": "925f37dbd442e57f6d6a4860f68d05eb86ba4b0d",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125951601.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4b0347d21765106efaf8811a92e1d5a8bd4ae616",
"s2fieldsofstudy": [
"Engineering",
"Education"
],
"extfieldsofstudy": [
"Engineering"
]
} |
54491091 | pes2o/s2orc | v3-fos-license | Initial Interest, Goals, and Changes in CLASS Scores in Introductory Physics for Life Sciences
To examine the effect of extensive life science applications on student attitudes to learning physics, we analyzed CLASS data from life science students in introductory physics. We compare the same students' responses from the first semester, taught with a standard syllabus, to the second semester, taught with extensive life science applications (IPLS). Although first-semester responses become less favorable (pre to post), IPLS responses show an increase in favorable and a decrease in unfavorable responses. This is noteworthy because improvement is rarely observed without direct attention to attitudes/beliefs, and it suggests IPLS courses are one possible approach to improving attitudes. Finally, we analyzed CLASS responses by gender, major, students' stated goals in taking physics, and initial interest in physics; initial interest was determined from CLASS items chosen based on the Four-Phase Model of Interest Development. Most notably, we find that in the IPLS course, students identified as having low initial interest had the greatest gains.
INTRODUCTION
As the understanding of the physical mechanisms of biology increases, and as physics-based technological tools permeate both biological research and clinical medicine, national reports from the life science (e.g. BIO 2010) [1] and medical (e.g. Scientific Foundations for Future Physicians) [2] communities stress the value of a deep understanding of the physical sciences and a high level of problem solving and mathematical skills. Simultaneously, there has been a widespread effort to reform the introductory physics course for life science students (hereafter IPLS) to better match these goals. [3] Organizing IPLS courses around rich biological examples is a centerpiece of many course reforms, both to motivate students to learn physics and to give students the opportunity to apply physics to the complex biological situations they need to learn to analyze. As one of us and Heller described, [4] the cognitive apprenticeship model of pedagogy stresses the importance of embedding learning in a context meaningful to the student. [5] For students pursuing biology or medicine, this implies that required introductory physics course work should anchor physics principles in meaningful biological contexts.
Research suggests that supporting students to make meaningful connections to the content to be learned enables interest to develop, [6] and, in turn, interest enhances attention, goal setting, and learning strategies. [7] For example, research by Häussler and Hoffman demonstrated that teaching physical science using life science contexts led to improved learning for students who were interested in those contexts. [8] Finally, a preliminary intervention study by Engle and coworkers suggests that combining a context meaningful to the student with "expansive framing," in which the instructor emphasizes that the material being learned will be valuable to the student outside the classroom, makes a difference in learning. [9] Anecdotally, we (and others) have also observed that IPLS students are enthusiastic about the integral life science examples. We set out to determine whether including these examples also leads to improvements in student interest in, attitudes to, and beliefs about learning physics, as measured by both the Colorado Learning Attitudes about Science Survey (CLASS) [10] and a survey we designed to probe the development of students' interest. [7] Here we report the results of our CLASS study, together with those course evaluation responses that help interpret those results; we present the results of the interest survey separately. [11]
STUDY DESIGN
Swarthmore College formerly offered a year-long calculus-based introductory physics course that was taken by engineering, chemistry, and biochemistry majors, and pre-medical students. When the IPLS reform was initiated, all students continued to take the standard first semester course (Physics 3), and a new IPLS second semester course (Physics 4L) was offered as an alternative to the standard second semester.
Engineering students continued to take the standard second semester while biochemistry majors, some chemistry majors, and pre-medical students took the IPLS course.
We therefore have the opportunity to do a within-student comparison, comparing students' CLASS responses from the standard first semester to the IPLS second semester. We examine matched data from two academic years (Year 1: N = 75 [28 male, 47 female]; Year 2: N = 38 [13 male, 25 female]). Enrollment was twice as large in Year 1 because at the time of course enrollment it was not certain that the IPLS course would be offered in Year 2; consequently, we draw our primary conclusions from the Year 1 data, and discuss the consistency of Year 1 with the Year 2 data.
In addition to the CLASS survey, we separately obtained demographic information about the students: class year, major, and their reasons for enrolling in the course. This survey was administered separately to minimize stereotype threat effects. [12] Finally, we tested students with the Brief Electricity and Magnetism Assessment (BEMA) both pre and post. The BEMA and CLASS were administered online through a secure course website and homework credit was given for completion; demographic information was obtained separately.
The IPLS course and the standard first semester course were taught by different instructors (CHC taught the IPLS course and a colleague taught the standard course). Both courses were taught with three hours of Peer Instruction (PI) lecture, [13] although the instructor for the IPLS course was more experienced with using PI. Both had a weekly three-hour laboratory in which the labs were loosely informed by PER-based curricula. Informal (and optional) evening meetings to work on the weekly problem sets were facilitated by peer tutors; there was neither a scheduled recitation nor a required additional time for formal group problem solving.
METHODS OF ANALYSIS
We analyzed the CLASS responses in two complementary ways. We used the established method of collapsing the responses to a three-point scale (favorable, unfavorable, neutral), and then determining percent favorable and unfavorable changes from pre to post, both overall and in eight categories. [10] This approach gives two sub-scores for each category. While this gives a great deal of rich information, it can be difficult to interpret clearly, such as when both favorable and unfavorable responses increase, or both decrease.
We therefore also recoded negative statements in the reverse direction and calculated mean changes in scores on the five-point scale. This approach has the advantage of providing a single score and also facilitates analysis.
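To make the two scoring schemes concrete, here is a minimal sketch in Python/NumPy, assuming responses coded 1-5 and a boolean mask marking the items whose expert-like answer is agreement; all array and function names are illustrative, not the study's actual code.

```python
import numpy as np

def class_scores(pre, post, expert_positive):
    """Sketch of both CLASS analyses. pre/post: (students, items) arrays of
    1-5 responses; expert_positive: True where agreeing is expert-like."""
    def pct_favorable(resp):
        # Collapse to three points: 4-5 = agree, 1-2 = disagree, 3 = neutral.
        agree, disagree = resp >= 4, resp <= 2
        favorable = np.where(expert_positive, agree, disagree)
        return 100.0 * favorable.mean()

    # Method 1: pre-to-post change in percent favorable responses.
    delta_favorable = pct_favorable(post) - pct_favorable(pre)

    # Method 2: reverse-code negatively worded items (r -> 6 - r) and take
    # the mean on the full five-point scale, yielding a single score.
    recode = lambda r: np.where(expert_positive, r, 6 - r)
    delta_mean = recode(post).mean() - recode(pre).mean()
    return delta_favorable, delta_mean
```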
Both methods of analysis were examined for correlations with student characteristics (demographic data, goals for taking the course, and BEMA scores). To investigate the role of interest, we developed a metric for initial interest in physics [14] using twelve items from the CLASS pre-survey: the six items from the CLASS Personal Interest category used to assess feelings and value, and six other items that assess the knowledge components of interest, providing an assessment of interest as a developmental motivational variable, used here to identify initial interest in physics. [7] We then divided the class into high (top quartile), medium (two middle quartiles), and low (bottom quartile) levels of initial interest.
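The interest grouping could then be computed as in the sketch below (synthetic data; the twelve item indices are placeholders for the items described above).

```python
import numpy as np

rng = np.random.default_rng(0)
pre = rng.integers(1, 6, size=(113, 42))                 # synthetic 1-5 responses
interest_items = rng.choice(42, size=12, replace=False)  # placeholder indices

# Average the twelve interest items per student and split at the quartiles:
# low = bottom quartile, medium = middle half, high = top quartile.
interest = pre[:, interest_items].mean(axis=1)
q1, q3 = np.quantile(interest, [0.25, 0.75])
group = np.select([interest <= q1, interest >= q3], ["low", "high"], "medium")
```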
RESULTS
Consistent with the literature [10], we find that, on average, students' attitudes hold steady or improve during the IPLS second semester, while students' attitudes become less expert-like during the standard first semester. Table 1 displays the results of the favorable-unfavorable analysis, along with the means on the five-point scale, for all students in each semester, and for the matched student population who took both semesters. The two approaches to analysis give consistent results, although they differ in the level of statistical significance. We observe the same trends in both years.
Demographics and Background
We examined several demographic factors for influence on changes in CLASS scores: gender, math background (as measured by college math courses taken prior to/concurrently with physics), major (life science or not), and self-reported goals for the IPLS course. Using two-way repeated-measures ANOVAs, no effects of major or math background or incoming knowledge as measured by the BEMA pre score were found with p < 0.05.
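For illustration, a mixed (within x between) ANOVA of this kind might look like the sketch below; the data frame, column names, and values are hypothetical, and pingouin is only one of several packages offering such a routine.

```python
import pandas as pd
import pingouin as pg  # assumed available; any mixed-ANOVA routine would do

# Hypothetical long-format data: one row per student per time point.
df = pd.DataFrame({
    "student": [1, 1, 2, 2, 3, 3, 4, 4],
    "time": ["pre", "post"] * 4,
    "gender": ["F", "F", "M", "M", "F", "F", "M", "M"],
    "class_mean": [3.4, 3.5, 3.1, 3.0, 3.8, 3.9, 2.9, 3.2],
})

# time is the within-subject factor; gender is the between-subject factor.
aov = pg.mixed_anova(data=df, dv="class_mean", within="time",
                     subject="student", between="gender")
print(aov.round(3))
```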
Although female students displayed more negative initial attitudes than males on two of the problem-solving categories (Confidence and General), there was no effect of gender on changes in either semester. Although this is encouraging, as others have reported an increasing gender gap from pre to post in CLASS scores, [15] it is not clear what to attribute this to, as it occurs in both semesters with different instructors.
Initial Interest and Goals
The most striking effects were associated with students' initial levels of interest and self-reported goals. In both courses, as shown in Fig. 1, CLASS pre scores overall and in all categories tracked students' initial level of interest. Remarkably, in the IPLS course, we observed that the low initial interest students' CLASS scores increased significantly from pre to post, both overall and in the Real World Connection, Personal Interest, and Problem Solving General categories, as shown in Fig. 2, while the medium interest students' scores remained steady and those of the high interest students declined slightly. (In other categories the three interest groups were less distinguishable.) However, in both years of the study all groups in the standard course declined by similar amounts from pre to post. This suggests that the IPLS course promotes the improvement of students' perceptions of physics for initially low interest students.
Students' goals for the course showed similar effects. We categorized students' statements of their goals in taking the IPLS course as (a) learning the material, (b) meeting a requirement, or (c) both. Students expressing learning goals had the highest initial CLASS scores, followed by those with both goals, and then those whose goals were to meet a requirement. Those students with a requirement goal showed the greatest CLASS gains from pre to post, while scores of those with a learning goal did not change significantly. In most cases those with both goals did not change, but in Sense-Making/Effort their scores declined (p < 0.05). Moreover, students with high initial interest were likely to report learning goals, whereas students with low interest were likely to report requirement goals (p < 0.05).
Course Evaluation
We asked questions on the IPLS end-of-semester course evaluation that probed students' perception of the utility of the IPLS course and their level of interest. One pair of questions used in both Years 1 and 2 asked students to compare their perception of the utility of the IPLS course at its beginning and end.
DISCUSSION AND CONCLUSIONS
Our findings suggest that the IPLS course, unlike the standard course, supports students with low initial interest and/or requirement goals to develop interest, along with more positive attitudes and beliefs as assessed by the CLASS. These results are consistent with those observed in some other IPLS courses. [4,17] Based on course evaluation responses, we propose that the positive shifts on the CLASS during the IPLS semester can be attributed at least in part to the focus on topics and examples most relevant to life science students. This is consistent with findings from Häussler and Hoffman's [8] intervention study, in which students improved in their performance, sense of competence, and self-concept in physics when physics was taught through contexts that were of interest.
Although previous studies have found that CLASS scores sometimes improve in the second semester even without explicit attention to attitudes/beliefs, [10] the difference in gains reported here suggests that the improvements are related to engaging student interest, with interest defined so as to include its developmental nature. Clearly, research is also needed comparing standard to IPLS first semester courses.
Given studies indicating the role of utility as a support for developing meaningful connections to content [16], and others pointing to the relationship between interest and goals, especially in early phases of interest development, [18] our findings further suggest that the life science content contributes utility and meaning to the IPLS course for these students.
Given that high initial interest students showed a modest decline in CLASS scores, it also appears that future work should focus on strategies to maintain or further develop these students' interest. Other studies of the development of interest [19] suggest that high initial interest students could also be engaged by IPLS courses employing more mathematical/technical life science applications that challenge them to extend their present understanding of the life science contexts.
FIGURE 1. Mean CLASS pre-scores (5-point scale, error bar = standard error) by category and initial interest level. Full category names are listed in Table 1.
FIGURE 2. Mean changes in CLASS scores (pre to post, 5-point scale) by initial interest level. *Significant differences between interest groups (p < 0.05). Error bars = standard error. | 2018-12-04T00:36:33.530Z | 2014-02-01T00:00:00.000 | {
"year": 2014,
"sha1": "b6fb13a9a46d02d63bb3e34d1698a9496c751610",
"oa_license": "CCBY",
"oa_url": "https://www.compadre.org/per/items/3669.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b6fb13a9a46d02d63bb3e34d1698a9496c751610",
"s2fieldsofstudy": [
"Education",
"Physics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
1836767 | pes2o/s2orc | v3-fos-license | Multi-soliton energy transport in anharmonic lattices
We demonstrate the existence of dynamically stable multihump solitary waves in polaron-type models describing interaction of envelope and lattice excitations. In comparison with the earlier theory of multihump optical solitons [see Phys. Rev. Lett. 83, 296 (1999)], our analysis reveals a novel physical mechanism for the formation of stable multihump solitary waves in nonintegrable multi-component nonlinear models.
To date, stable multihump solitary waves have been positively identified only in the nonlinear optical model of Refs. [1,2] that is known to possess additional symmetries, which might be the reason for their unique stability.
In this Letter, we demonstrate the existence of dynamically stable multihump solitary states in a completely different (in both the physics and properties) but even more general model that describes the interaction of envelope and lattice excitations, a generalisation of the well-known polaron model. We reveal a novel physical mechanism for the formation of stable multihump solitary waves in nonintegrable multi-component nonlinear models.
Model. Let us consider the continuous model of the energy (or excess electron) transport in an anharmonic molecular chain [8], described by a system of coupled nonlinear Schrödinger (NLS) and Boussinesq equations (1), where t and x are the normalised time and spatial coordinate, correspondingly, ψ(x, t) is the excitation wave function, and w(x, t) is the chain strain. The system is characterized by three dimensionless parameters: the particle mass m, the anharmonicity of the chain α, and the dispersion coefficient µ. Equation (1) appears in a number of other physical contexts including, for example, the interaction of nonlinear electron-plasma and ion-acoustic waves [9], coupled Langmuir and ion-acoustic plasma waves [10], interaction of optical and acoustic modes in diatomic lattices [11], particle theory models [12], etc.
System (1) is known to be integrable for αµ = 6 [13]. In this case, it possesses two types of single-soliton solutions: scalar (ψ = 0) supersonic Boussinesq (Bq) solitons and vector Davydov-Scott (DS) solitons [8], which can be both subsonic and supersonic. Because of the complete integrability for αµ = 6, these solitons do not interact with each other. For αµ ≠ 6, the situation changes dramatically, and it has been recently shown [14,15] for the nearly-integrable case (αµ ≈ 6) that Bq and DS solitons can form a bound state for αµ > 6. On the other hand, it is also known that in a weakly anharmonic lattice two subsonic DS solitons can form a bisoliton [16]. However, it remains a mystery what happens when the system (1) is far from its integrable limit and, especially, when the solitons are supersonic. In this Letter we examine the model (1) numerically for arbitrary values of αµ and, employing the concept of soliton bifurcations, demonstrate the origin and exceptional robustness of multihump supersonic stationary solitary waves. Localized solutions of Eqs. (1) can be found in the form of the traveling waves (2), where z = √(c/4µ)(x − vt), and the constant c = (mv² − 1) is positive for a supersonic velocity, v > 1/√m. Substituting Eq. (2) into Eq. (1), we derive a system of coupled ordinary differential equations (3), where g = αµ is an effective anharmonicity parameter, and λ = µ(v² − 4Ω)/c is a characteristic eigenvalue of the stationary localized solutions. Equation (3) has two types of one-soliton solutions: a one-component Bq soliton which exists for arbitrary values of g, and a two-component DS soliton which exists only in the integrable case g = 6.
To understand what happens for g ≠ 6, we consider the limit φ/u ∼ ε ≪ 1 and apply a multi-scale asymptotic analysis. In the zeroth order in ε, φ = 0 and Eq. (3) reduces to a nonlinear equation for the component u(z) only, with the supersonic Bq soliton solution (4). In the first order in ε, we obtain a linear eigenvalue problem for φ(z) characterized by the effective potential u = u_0(z). For a given value of g, the spectrum of the eigenvalue problem consists of N + 1 discrete eigenvalues λ_n = (N − n)², where n = 0, 1, ..., N, and N is the integer part of (1/2)[√(1 + 48/g) − 1]. Each cut-off value λ_n corresponds to a bifurcation point of the nodeless scalar soliton u_0 where a two-component solution with a nonzero component φ emerges. The latter has n nodes and, near the bifurcation point, can be treated as a fundamental (or higher-order) bound mode of an effective potential created by the soliton u_0(z). The emerging vector soliton can therefore be characterised by a "state vector" |0, n⟩, according to the number of nodes in the corresponding components.
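As a quick numerical check of this counting, the sketch below evaluates the quoted formula for N and the cut-off eigenvalues; it is bookkeeping only, not a solver for the eigenvalue problem.

```python
import math

def bound_state_cutoffs(g):
    """Cut-off eigenvalues lambda_n = (N - n)**2 of the linearized problem,
    with N taken as the integer part of (1/2)(sqrt(1 + 48/g) - 1)."""
    N = int(0.5 * (math.sqrt(1.0 + 48.0 / g) - 1.0))
    return [(N - n) ** 2 for n in range(N + 1)]

# Integrable case g = 6: two states, lambda_0 = 1 (|0,0>) and lambda_1 = 0 (|0,1>).
print(bound_state_cutoffs(6.0))  # -> [1, 0]
# Supercritical case g = 7: only the fundamental |0,0> state bifurcates.
print(bound_state_cutoffs(7.0))  # -> [0]
```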
It is easy to see that for g > 6 only bifurcations of the |0, 0⟩ state, which corresponds to the DS soliton (5), are possible. The first bifurcation of the |0, 1⟩ state occurs for the completely integrable case g = 6 at λ_1 = 0. In this case, the bifurcation pattern is identical to that of the completely integrable Manakov limit of two coupled NLS equations [17], namely, the |0, 0⟩ state appears at λ_0 = 1, and the |0, 1⟩ state appears at λ_1 = 0.
Weaker anharmonicity (smaller g) means a larger number of possible bound states supported by the effective potential u(z), and thus an increasing number of bifurcations. Indeed, the depth of the effective trapping potential is inversely proportional to g. The |0, 0⟩ state always exists, even for a shallow potential u(z).
We now consider in detail the formation of multihump solitons in the cases of weak (subcritical, g < 6) and strong (supercritical, g > 6) anharmonicity, respectively.
Supercritical regime. In the absence of bifurcating higher-order solutions, the multihump solitons are formed only via binding of the |0, 0⟩ vector solitons. The physics of this mechanism is simple. The interaction forces between closely separated fundamental solitons are different for the φ and the u components. Namely, while two in-phase u-solitons attract, two in-phase φ-solitons repel. This allows for the existence of multihump nodeless modes of the field φ trapped in the multi-well potential u. Each of such multihump solitons can be considered as a bound state of several |0, 0⟩ DS solitons, with in-phase humps in both components.
It is convenient to represent the solution families as branches on the bifurcation diagram Q vs. λ, where Q ≡ Q_u + Q_φ ≡ ∫u² dz + ∫φ² dz is the total soliton power. A typical bifurcation diagram for a supercritical case (g = 7) is shown in Fig. 1. The solid line represents the bifurcating solution |0, 0⟩, and it can be seen on the close-up of the bifurcation region that the branches representing two- and three-hump solutions start off at the same point λ = λ_0 but with energies approximately equal to that of two or three u-solitons. Examples of such multihump solitons are shown in Fig. 2, and it is clear that this novel type of soliton bifurcation occurs from a countable set of infinitely separated single solitons. With increasing λ, the separation between the humps decreases until all solitons of this type become single-humped (Fig. 2, right column). Subcritical regime. In this case, bifurcations of the u-state do not lead to multihump solitons. That is, in sharp contrast to the coupled NLS equations describing vector solitons in nonlinear optics, none of the higher-order states |0, n⟩ become multihumped in the system under consideration. Although the function φ does have multiple maxima in its intensity profile, because of the non-self-consistent source for the u-component, it does not cause significant distortions in the shape of the effective potential u(z), and the total intensity I(z) = u² + φ² remains single-humped. A typical bifurcation diagram for the case g = 1 is presented in Fig. 3. In this case N = 2, and only bifurcations of the |0, 1⟩ (dot-dashed line) and |0, 2⟩ (solid line) solitons are shown. Corresponding modal profiles of the bifurcating solitons are presented in Fig. 3 (top row).
Similar to the supercritical regime, the multihump solitons can exist only as bound states of the bifurcating |0, 1⟩ or |0, 2⟩ solitons. They appear at the bifurcation points λ_n, and they have energies equal to a number of lower-order solitons "glued" together by the low-amplitude components. The number of nodes that the φ-component has in the composite soliton depends on the number of |0, n⟩ solitons forming that bound state. Typical examples of such solutions are presented in Fig. 3 (bottom row).
From this analysis, we can conclude that, in this model, the multihump solitons appear as bound states of |0, 0⟩ solitons for any value of the anharmonicity parameter g. In addition, multihump solitons of more sophisticated modal structure are also possible.
FIG. 4. (a,b) Stable dynamics of a three-hump soliton for g = 7 and λ = 0.824. The family of these solitons is shown in Fig. 1 (dotted line), and the corresponding profiles are presented in Fig. 2 (top row). (c,d) Stable dynamics of a four-hump soliton for g = 3.005 and λ = 2.9.
Dynamical stability. The second equation of the system (1) is the so-called "ill-posed" (or "bad") Bq equation [18]. It possesses an intrinsic linear instability and therefore its reliable numerical solution for non-zero µ is unfeasible. This linear instability is not inherent in the original physical model, and it can be traced to neglecting higher-order spatial derivatives in Eq. (1). In the case of the energy transport in anharmonic molecular chains, Eq. (1) with µ = 1/12 originates from a system of discrete equations (6) [14,15], in which ∆²(X_n) ≡ X_{n+1} + X_{n−1} − 2X_n denotes the discrete second difference. The discrete functions Ψ_n and W_n define, in the continuous limit, the excitation wave function, ψ(x, t), and the strain function of the lattice, w(x, t). Therefore, it would be justified to study the dynamics of the stationary solutions of Eqs. (3) numerically by employing the original discrete dynamical system (6). Besides, the argument can be reversed, and such a discrete system can be treated as a regularised numerical discretisation scheme for a general system of coupled NLS and ill-posed Bq equations.
We investigate the dynamical stability of the multihump solitons for the two distinct cases of subcritical and supercritical anharmonicity discussed above. The condition of a unit norm for the envelope function Ψ_n (Σ_n |Ψ_n|² = 1) is satisfied in all cases, and v is chosen close to the sound velocity (c = 0.0244) to allow for a smooth discretisation.
In the supercritical regime (g > 6), where multihump solitary waves can be formed through binding of several |0, 0⟩ states together, our numerical simulations indicate that such solitons are stable as long as the separation between the humps is sufficiently large. This property is just opposite to that observed for multihump optical solitary waves [2]. An example of the stable dynamics of a three-hump soliton for g = 7 is shown in Figs. 4(a,b). All solitons of the DS type, i.e. |0, 0⟩ soliton states, presented in Fig. 1 (by the solid line) and Fig. 2 (bottom row), exhibit similar stable dynamics.
It is important that the same mechanism of the creation of multihump solitons applies to the subcritical regime. This means that the dynamically stable multisoliton bound states described above exist also for g < 6.
As an example, propagation of a four-hump soliton at g = 3.005 is demonstrated in Figs. 4(c,d). After initial adjusting of the soliton amplitudes (due to the discretisation), only small amplitude breathing occurs [see Fig. 4(d)], otherwise the soliton dynamics is stable. In contrast, all bifurcating higher-order solitons are dynamically unstable.
Our results on the robustness and stability of multisoliton states call for a systematic revision of our understanding of the role of nonlinear localized modes in a number of physical phenomena related to nonlinear transport in macromolecules [8] and even artificial nanoscale structures [19], where the coupling of two (or more) degrees of freedom occurs. How do the soliton binding and the existence of multi-soliton states modify the nonlinear kinetics, nonequilibrium thermodynamics [20], and other properties of the system? These questions remain to be answered.
In conclusion, we have found robust two-component solitary waves in a polaron-type model of the energy transport in anharmonic lattices. We have revealed a novel physical mechanism for the formation of multihump solitons in a discrete anharmonic lattice and demonstrated their dynamical stability. Along with the recent studies on multihump optical solitons [1,2], these results call for re-examination of the role of multi-component solitary waves in other fields of nonlinear physics. | 2014-10-01T00:00:00.000Z | 2000-09-19T00:00:00.000 | {
"year": 2000,
"sha1": "dc0303491a816f95bf4202f0d97b2ad6ca8bd841",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nlin/0009038",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "34970f6acdb9caf5267fefb45374fd2f339a703f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221110695 | pes2o/s2orc | v3-fos-license | Prognostic performance of endothelial biomarkers to early predict clinical deterioration of patients with suspected bacterial infection and sepsis admitted to the emergency department
Background The objective of this study was to evaluate the ability of endothelial biomarkers to early predict clinical deterioration of patients admitted to the emergency department (ED) with a suspected sepsis. This was a prospective, multicentre, international study conducted in EDs. Adult patients with suspected acute bacterial infection and sepsis were enrolled but only those with confirmed infection were analysed. The kinetics of biomarkers and organ dysfunction were collected at T0, T6 and T24 hours after ED admission to assess prognostic performances of sVEGFR2, suPAR and procalcitonin (PCT). The primary outcome was the deterioration within 72 h and was defined as a composite of relevant outcomes such as death, intensive care unit admission and/or SOFA score increase validated by an independent adjudication committee. Results After adjudication of 602 patients, 462 were analysed including 124 who deteriorated (27%). On admission, those who deteriorated were significantly older (73 [60–82] vs 63 [45–78] y-o, p < 0.001) and presented significantly higher SOFA scores (2.15 ± 1.61 vs 1.56 ± 1.40, p = 0.003). At T0, sVEGFR2 (5794 [5026–6788] vs 6681 [5516–8059], p < 0.0001), suPAR (6.04 [4.42–8.85] vs 4.68 [3.50–6.43], p < 0.0001) and PCT (7.8 ± 25.0 vs 5.4 ± 17.9 ng/mL, p = 0.001) were associated with clinical deterioration. In multivariate analysis, low sVEGFR2 expression and high suPAR and PCT levels were significantly associated with early deterioration, independently of confounding parameters (sVEGFR2, OR = 1.53 [1.07–2.23], p < 0.001; suPAR, OR = 1.57 [1.21–2.07], p = 0.003; PCT, OR = 1.10 [1.04–1.17], p = 0.0019). Combination of sVEGFR2 and suPAR had the best prognostic performance (AUC = 0.7 [0.65–0.75]) compared to clinical or biological variables. Conclusions sVEGFR2, either alone or combined with suPAR, seems of interest to predict deterioration of patients with suspected bacterial acute infection upon ED admission and could help front-line physicians in the triage process.
Sepsis is present in 6% of adult hospitalizations [2]. Over the last decade, a decrease in the mortality rate has been observed [3], in particular thanks to improved management and more appropriate intervention approaches in the emergency department (ED) [4]. Although the recently proposed qSOFA score [5] aims to help frontline clinicians detect severe patients with a higher risk of mortality [6], it fails to provide decisive support for discharge decisions, especially in patients without initial organ dysfunction [7,8], support that could help to reduce ED crowding and cost.
Even if widely used as an infection biomarker and indicator of severity, procalcitonin (PCT) has not been fully validated for deterioration assessment, and no other biological marker has yet been validated to accurately and early predict clinical deterioration in unselected patients admitted to the ED with infection or sepsis [9][10][11]. Asymptomatic endothelial injury participates in the development of organ failure with poor outcome [12,13]. Endothelial biomarkers have been presented as predictors of death and/or organ dysfunction during sepsis [14][15][16][17][18][19][20][21][22]. Of those, soluble vascular endothelial growth factor receptor 2 (sVEGFR2, the receptor for a growth factor of vascular endothelial cells) and soluble urokinase plasminogen activator receptor (suPAR, a marker of pro-inflammatory activation of the immune system) were proposed. VEGFR2, which is selectively expressed in the endothelium, mediates endothelial growth, proliferation, permeability and pathological angiogenesis; when bound to VEGF, it increases microvascular permeability resulting in oedema and hypotension [23]. The uPAR receptor is expressed on different cell types including vascular endothelial cells [24]. After cleavage from the cell surface, the soluble receptor, suPAR, can be found in the blood and other organic fluids. Increased activation of the immune system caused by different types of infections results in increased suPAR concentrations. These biomarkers have been shown to be associated with initial severity and subsequent clinical worsening [25][26][27][28][29][30][31][32], but their ability to early predict deterioration on ED admission remains to be determined.
This study aimed to evaluate the ability of sVEGFR2 and suPAR biomarkers to early predict the clinical deterioration of patients with infection upon ED admission and compare them to conventional clinical and biological parameters (qSOFA and SOFA score, lactates, PCT, CRP). Second, we assessed the prognostic performance of biomarkers according to the presence of sepsis or not in accordance with the new definitions of Sepsis-3.
Population
We conducted a prospective, multicentre, international study in 14 EDs from 2015 to 2018. Inclusion criteria were adult patients (age ≥ 18 years) with a suspected acute bacterial community-acquired infection (≤ 3 days, the evolution time window being checked with the patient and/or relatives) associated with at least two systemic inflammatory response syndrome (SIRS) criteria [33], which are currently the most sensitive criteria for sepsis [34,35]. All patients admitted to the ED with a suspected infection, based on fever and/or any other infectious symptom reported by the referring practitioner, were screened 24/7 by emergency physicians for eligibility and treated following the Surviving Sepsis Campaign guidelines [36]. Exclusion criteria were patients with septic shock (based on ACCP/SCCM criteria), patients with a healthcare-associated infection, immunosuppression (e.g. human immunodeficiency virus (HIV), transplant, ongoing chemotherapy, steroid treatment > 20 mg/day of prednisone or equivalent for more than a week), non-infectious diseases potentially associated with SIRS (cancer), a prior episode of infection within the 30 days before ED admission, onset of symptoms greater than 72 h, and absence of consent. The protocol was recorded on ClinicalTrials.gov (N°: NCT02739152) and approved by the Ethics Committee for Clinical Research (CPP SOOM IV: CPP15-004).
Endpoints
The primary endpoint was the occurrence of early clinical deterioration within 72 h following ED admission. Deterioration was determined by an independent adjudication committee (including one experienced emergency physician and two intensive care physicians) who were blinded to biomarker results and followed a pre-defined adjudication charter. Patients were then classified according to their initial course during the first 72 h of hospitalization as exhibiting early deterioration, defined by a composite endpoint (an increase in SOFA score of at least 1 point, ICU admission directly related to the initial infectious disease because of documented sustained hypotension requiring vasopressors or a ventilation support requirement, or death), or not. The same adjudication committee also confirmed the bacterial origin of infection according to available clinical, biological and microbiological data and based on pre-defined criteria for every different type of infection [37]. Patients without confirmed infection were excluded from the analysis.
Study design and measured variables
All the patients were included and received their first care and blood collection in the ED. Clinical criteria, biological data, lactates [38], the qSOFA score, the SOFA score [39] and the studied biomarkers were measured at three time points: the first within the emergency room (T0) and the others at 6 ± 2 h (T6) and 24 ± 2 h (T24) after ED admission. The following data were prospectively collected during the ED stage by the study team, blinded to biomarker results: demographics, Charlson score, site of infection, antimicrobial therapy and initiation time, traditional biological parameters (leukocytes, CRP, platelets) and orientation after ED discharge. Pathogens, length of stay and mortality at day 28 were collected during hospitalization or at the end of follow-up.
Biomarker measurements
Serum sVEGFR2 (soluble vascular endothelial growth factor receptor 2) concentrations were measured using the enzyme-linked fluorescent assay (ELFA) technique. The results were automatically analysed by VIDAS® and expressed in relative fluorescence intensity or RFV (relative fluorescent value). Plasma suPAR (soluble urokinase plasminogen activator receptor) levels were analysed using the commercially available CE/IVD-labelled suPARnostic® AUTO Flex ELISA kit, according to the manufacturer's instructions (Virogates, Birkeroed, Denmark). For the suPAR ELISA test, the inter-assay coefficient of variation (CV) given by the manufacturer is below 6%. For sVEGFR2, the inter-assay coefficient of variation was calculated at 3.09%. Serum PCT (procalcitonin) levels were measured using the VIDAS BRAHMS PCT assay (Biomerieux, Marcy l'Etoile, France) according to the manufacturer's instructions.
Analysis
The prognostic performance of the studied biomarkers was evaluated in the entire cohort at T0. Following the current definitions, and to analyse the prognostic performance of biomarkers according to severity, the Sepsis-3 criteria were applied to define two groups: infected patients (SOFA score < 2) and septic patients (SOFA score ≥ 2) [5]. A model of the risk of early deterioration based on the value of biomarkers on ED admission was proposed for non-septic patients. Data were censored after deterioration. No patient was lost to follow-up until T24.
Statistics
Data are presented either as means ± SD, median with interquartile range or as box and whisker plots with representation of the median value, 25th, 75th and 90th percentiles, and outliers. Parameters and biomarkers were compared between the two groups of patients according to their initial course (i.e., early deterioration or not), using the nonparametric Mann-Whitney U test for continuous variables, while categorical variables were compared with the Pearson χ 2 test or the Fisher's exact test when appropriate. The level of significance was set at 5% and results of regression analyses were presented with their 95% CI. All analyses were computed using the R version 3.4.0.
Logistic regressions were fitted using a single biomarker or both biomarkers. Association with clinical variables was independently evaluated, and clinical parameters with a p value below 0.1 in the univariate analyses were selected as adjustment covariates for multivariate analyses. Among significant clinical parameters, a selection was made to avoid collinearity and limit the number of variables introduced in multivariate models. Strength of association was reported using inter-quartile range (IQR) adjusted odds ratios (OR). Areas under the ROC curve and their 95% confidence intervals (CI) were computed and compared using DeLong's method. Predictive performances were evaluated under the constraint of a sensitivity higher than 90% (rule-out test). In a complementary approach, a decision tree was built. The thresholds used to partition the data were chosen to optimize sensitivity (> 90%).
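As an illustration of this sensitivity-constrained (rule-out) evaluation, the sketch below fits a logistic model on synthetic data and picks the highest probability cutoff that keeps sensitivity at or above 90%; all data and variable names are invented, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic stand-in for biomarker levels at T0 and 72-h deterioration labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(462, 2))      # columns: sVEGFR2, suPAR (standardized)
y = rng.integers(0, 2, size=462)   # 1 = deteriorated within 72 h

model = LogisticRegression().fit(X, y)
score = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, score))

# Rule-out threshold: the largest cutoff whose sensitivity is >= 90%.
fpr, tpr, thr = roc_curve(y, score)
i = np.argmax(tpr >= 0.90)         # first (highest-threshold) qualifying point
print("cutoff:", thr[i], "sensitivity:", tpr[i], "specificity:", 1 - fpr[i])
```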
Biomarkers: predictive performances in the global cohort
At T0 in univariate analysis, age, the Charlson score, the qSOFA score and the SOFA score were associated with early clinical deterioration, but traditional biological markers were not. Biomarker association with clinical deterioration was also observed in secondarily excluded patients (Additional file 1: Figure 1). Levels of sVEGFR2, PCT and suPAR at inclusion were significantly associated with the degree of organ dysfunction, as reflected by the SOFA score on ED admission (Fig. 2).
Biomarkers: predictive performances in infected patients without sepsis (SOFA < 2)
Among the 233 patients considered as having infection but no sepsis at enrolment (SOFA < 2 at admission), 48 (21%) deteriorated within 72 h (Fig. 1b). The clinical characteristics of this derivative cohort at admission were not different from those of the overall cohort (Table 4). At T6, only sVEGFR2 and suPAR were found significantly associated with worsening (p < 0.01) (Additional file 1: Figure 2b). As data were censored after deterioration, the low number of patients deteriorating after T6 and T24 did not allow further analysis. The prognostic value of biomarkers in septic patients (SOFA ≥ 2) is presented in Additional file 1: Table 1.
Proposal of a stratification model in non-severe patients on ED admission
The best prognostic model, including the sVEGFR2 and suPAR combination and using cut-off values optimized to yield a high sensitivity, allowed identifying distinct levels of risk (i.e. low and high) for deterioration. When comparing risk groups, we found that the low-risk group had a 15-fold lower risk of worsening than the high-risk group (OR = 14.50, p < 0.05) (Fig. 3).
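A minimal sketch of such a two-marker stratification rule is given below; the cutoff values are placeholders chosen for illustration, not the validated thresholds from the study.

```python
# Placeholder cutoffs, not the study's validated thresholds.
SVEGFR2_CUTOFF = 6000.0  # low sVEGFR2 suggests endothelial stress (assumed units)
SUPAR_CUTOFF = 5.0       # high suPAR suggests immune activation (assumed ng/mL)

def risk_group(svegfr2: float, supar: float) -> str:
    """Classify an infected, non-septic ED patient as low or high risk."""
    if svegfr2 >= SVEGFR2_CUTOFF and supar <= SUPAR_CUTOFF:
        return "low"   # reassuring on both markers: candidate for rule-out
    return "high"      # either alarming marker keeps the patient high risk

print(risk_group(6500, 4.2))  # -> low
print(risk_group(5200, 7.9))  # -> high
```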
Discussion
In this ED-based multicentre study, the endothelial biomarker sVEGFR2, used either alone or combined with suPAR, proved the best early predictor of patient deterioration, independently of potential confounders. High medical value biomarkers are those that are able to predict outcomes even before any clinical evidence of deterioration to help front-line physicians to better anticipate the complicated course. Since early prediction of patient deterioration is crucial to allow safe rule-out, over-triage reduction and better allocation of hospital resources, the high negative value of these potential endothelial biomarkers appears particularly useful in the ED settings with inherent peaks of activity and overcrowded units.
In the present study, 27% of the entire cohort deteriorated within 72 h of ED admission. Importantly, 21% of non-sepsis patients, without any severity criteria on ED admission, deteriorated within the first 72 h of hospitalization. Intrinsically, these patients presented to the ED with a low SOFA score (i.e., SOFA < 2) and a non-qualifying qSOFA score. This proportion is similar to that reported in previous studies [18][19][20]. About 20 to 25% of patients progressed to severe sepsis even though they had no sign of severity at first medical contact [7]. Saeed et al. reported that early clinical deterioration occurred in more than 16% of patients presenting to the ED with sepsis, even when patients were non-severe with a low lactate level (< 2 mmol/L) or a low clinical score (qSOFA < 2) [40]. Recently, Cleek et al. confirmed that in predicting 28-day in-hospital mortality among infected ED patients, qSOFA did not outperform or improve physician judgment [41]. Overall, clinical deterioration occurred very early after ED admission, since two-thirds of the patients deteriorated within the first 6 h of inclusion. Although information on the delay between ED admission and deterioration is scarce, some authors have reported that it may occur within 48 h [42], even within the first 12 h following ED arrival [43].
Due to the various presentations of infected patients on ED admission, determining severity early in the disease course remains challenging, since clinical scoring systems have limited prognostic accuracy [44,45]. Many conventional biomarkers reflecting end-organ compromise are not informative until significant clinical deterioration has occurred [46,47]. The Sepsis-3 definition underlines organ dysfunction as the mainstay of sepsis and the value of the SOFA score to identify patients with a higher risk of subsequent death [48]. However, 21% of our patients with a SOFA score < 2, i.e. non-sepsis according to Sepsis-3, deteriorated within 72 h after ED arrival. In these circumstances, assessment of endothelial injury could be a good predictor of deterioration [49]. Likewise, Fang et al. [20] described a relationship between endothelial biomarkers and variations of the SOFA score during the first week after admission. Liu et al. [50] also showed an association between the presence of endothelial injury on admission and the severity of sepsis. More recently, Henning et al. also confirmed that biomarkers of endothelial activation and inflammation, in combination with emergency department physician judgment, improved prediction of in-hospital mortality [51,52]. These observations are concordant with our findings showing that the levels of sVEGFR2 and suPAR are associated with that of the SOFA score. Importantly, we have shown that sVEGFR2, alone or combined with suPAR, is the best predictor of patient deterioration, independently of potential confounding factors. If confirmed, this result could allow safe rule-out of patients who have a low risk of deterioration, hence leading to a decrease in hospital admissions.
This prospective, multicentre, international, observational study presents several strengths, such as (i) a biological collection of biomarkers combined with evolving clinical criteria, in line with the requirements of the new Sepsis-3 definition of sepsis, and (ii) the appointment of, and careful evaluation by, an independent adjudication committee. We also demonstrated that circulating markers of endothelial activation, at the earliest time in the ED, have a potential for risk stratification and could help emergency physicians better manage patients with sepsis.
Our study, however, has several limitations, the first one being the limited possibility to fully investigate the heterogeneity of the different subtypes of infections. Indeed, patients with pneumonia may differ from patients with abdo-pelvic infections. The population may be biased against deterioration, as deterioration requires decompensation from a less ill state. Half of the cohort having a SOFA ≥ 2 at ED baseline may have been already quite ill. Therefore, the study may have been stronger if focused on a light-/middle-severity sepsis cohort, using the new definition of sepsis had it been available when the study was designed. Also, the design and purpose of the study did not allow analysis of patients with septic shock, although they could have been used as a control group of severity. In addition, as the number of patients with non-confirmed or viral infection was low, no prediction analysis was done for them. The entire analysis has a somewhat modest sample size (n = 462 patients, of which 127 experienced deterioration) and does not support strong conclusions, but serves as a robust early basis for future validation. Finally, we did not perform health economics and outcome research that could have brought useful information on the potential cost savings for hospitals.
Conclusion
The current findings highlight the potential interest of the sVEGFR2 protein, alone or in combination with suPAR, to diagnose initial endothelium stress and to predict/anticipate subsequent organ dysfunction. Such a tool, suitable for routine test measurement, with time-to-results within 1 h and only a one-time measurement required, could be used together with other laboratory findings and clinical assessments to help in the early prediction of the risk of deterioration and in safely ruling out infected patients after ED admission. | 2020-08-13T10:05:27.998Z | 2020-08-12T00:00:00.000 | {
"year": 2020,
"sha1": "e5e2a8863f3a78d38c65100a9bcb80897b2a082b",
"oa_license": "CCBY",
"oa_url": "https://annalsofintensivecare.springeropen.com/track/pdf/10.1186/s13613-020-00729-w",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c9f7b1156a07f3ff16c2f68af9ee5d5bad2ecd6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
860808 | pes2o/s2orc | v3-fos-license | Somatic mutation and gain of copy number of PIK3CA in human breast cancer
Introduction Phosphatidylinositol 3-kinases (PI3Ks) are a group of lipid kinases that regulate signaling pathways involved in cell proliferation, adhesion, survival, and motility. Even though PIK3CA amplification and somatic mutation have been reported previously in various kinds of human cancers, the genetic change in PIK3CA in human breast cancer has not been clearly identified. Methods Fifteen breast cancer cell lines and 92 primary breast tumors (33 with matched normal tissue) were used to check somatic mutation and gene copy number of PIK3CA. For the somatic mutation study, we specifically checked exons 1, 9, and 20, which have been reported to be hot spots in colon cancer. For the analysis of the gene copy number, we used quantitative real-time PCR and fluorescence in situ hybridization. We also treated several breast cancer cells with the PIK3CA inhibitor LY294002 and compared the apoptosis status in cells with and without PIK3CA mutation. Results We identified a 20.6% (19 of 92) and 33.3% (5 of 15) PIK3CA somatic mutation frequency in primary breast tumors and cell lines, respectively. We also found that 8.7% (8 of 92) of the tumors harbored a gain of PIK3CA gene copy number. Only four cases in this study contained both an increase in the gene copy number and a somatic mutation. In addition, mutation of PIK3CA correlated with the status of Akt phosphorylation in some breast cancer cells and inhibition of PIK3CA-induced increased apoptosis in breast cancer cells with PIK3CA mutation. Conclusion Somatic mutation rather than a gain of gene copy number of PIK3CA is the frequent genetic alteration that contributes to human breast cancer progression. The frequent and clustered mutations within PIK3CA make it an attractive molecular marker for early detection and a promising therapeutic target in breast cancer.
Introduction
Phosphatidylinositol 3-kinases (PI3Ks) are a group of lipid kinases composed of 85-kDa and 110-kDa subunits. The 85-kDa subunit lacks PI3K activity and acts as adaptor, coupling the 110-kDa subunit (P110) to activated protein tyrosine kinases and generating second messengers by phosphorylating membrane inositol lipids at the D3 position. The resulting phosphatidylinositol derivatives then permit activation of downstream effectors that are involved in cell proliferation, survival, metabolism, cytoskeletal reorganization, and membrane trafficking [1,2].
PIK3CA, the gene encoding the 110-kDa subunit of PI3K, was mapped to 3q26, an area amplified in various human cancers including ovarian, head and neck, breast, urinary tract, and cervical cancers [3][4][5]. PIK3CA was specifically found to be amplified and overexpressed in ovarian and cervical cancer [6][7][8][9]. The increased copy number of the PIK3CA gene is associated with increased PIK3CA transcription, P110-alpha protein expression, and PI3K activity in ovarian cancer [9]. Treatment with a PI3K inhibitor decreased proliferation and increased apoptosis, suggesting that PIK3CA has an important role in ovarian cancer. More recently, PIK3CA mutations were identified in different human cancers. In that report, PIK3CA was mutated in 32%, 27%, 25%, and 4% of colon, brain, gastric, and lung cancers, respectively. Only 12 cases of breast cancer were examined, of which one was found to harbor a mutation in PIK3CA [10].
In an effort to identify the genetic alterations of the PIK3CA gene in breast cancer, we determined the mutation frequency and the change in the gene copy number of PIK3CA in a set of primary breast tumors and breast cancer cell lines. We found a high frequency of these somatic alterations of PIK3CA gene in a large number of primary breast cancers. In addition, mutation of the PIK3CA gene correlated with the activation of Akt. Inhibition of PIK3CA induced significant apoptosis in cells with PIK3CA mutation.
Breast cancer cell lines and tumors
Of the breast cancer cell lines examined, MCF12A, Hs.578t, and MDA436 were kindly provided by Dr Nancy Davidson at Johns Hopkins University, and MDA-MB157, MDA-MB468, BT474, T47D, and UACC893 were kindly provided by Dr Fergus J Couch at Mayo Clinic. The other cell lines were obtained from the American Type Culture Collection. A total of 92 cases of breast tumor, including 33 paired primary invasive breast carcinomas and adjacent normal tissues (frozen tissue), were obtained from the Surgical Pathology archives of the Johns Hopkins Hospital, Baltimore, MD, USA, in accordance with the Institutional Review Board protocol, and DNA was isolated using a standard phenol-chloroform protocol. Prof Saraswati Sukumar at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University provided the isolated DNA. Each tumor used in this study was determined to contain greater than 70% tumor cells by H&E staining. Among these specimens, 3 were stage 1, 52 were stage 2, 22 were stage 3, and 4 were stage 4. Eleven were of undetermined stage. All of the tumors were high grade.
PCR, sequencing, and mutational analysis
Cell line and tumor DNA were isolated following standard protocols. The primers we used for PCR and sequencing were as follows. For exon 1: forward, CTCCACGACCATCATCAGG; reverse, GATTACGAAGGTATTGGTTTAGACAG; sequencing primer, ACTTGATGCCCCCAAGAATC. For exon 9: forward, GATTGGTTCTTTCCTGTCTCTG; reverse, CCACAAATATCAATTTACAACCATTG; sequencing primer, TTGCTTTTTCTGTAAATCATCTGTG. For exon 20: forward, TGGGGTAAAGGGAATCAAAAG; reverse, CCTATGCAATCGGTCTTTGC; sequencing primer, TGACATTTGAGCAAAGACCTG. We used the same PCR conditions for all three exons. After incubation at 95°C for 5 min, two cycles of amplification were performed at the initial annealing temperature of 62°C, with a subsequent annealing temperature decrease of 2°C for every two cycles until 54°C. Twenty-five amplification cycles were then performed. After the PCR reaction, samples were subjected to automated DNA sequencing using the ABI 377 Sequencer. Positive samples were confirmed by repeat PCR and sequencing using the same primers and conditions.
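One reading of this touchdown protocol is sketched below; the helper name is ours, and the assumption that the final 25 cycles all anneal at 54°C is an interpretation of the wording above.

```python
# Touchdown PCR annealing schedule: start at 62 C, drop 2 C every two cycles
# down to 54 C, then run 25 cycles at 54 C (interpretation of the protocol).
def touchdown_schedule(start=62, end=54, step=2, per_step=2, final_cycles=25):
    temps = []
    for t in range(start, end, -step):  # 62, 60, 58, 56
        temps += [t] * per_step
    return temps + [end] * final_cycles

schedule = touchdown_schedule()
print(len(schedule), "cycles:", schedule)  # 33 cycles: 8 touchdown + 25 at 54 C
```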
Western blotting
To evaluate Akt phosphorylation status, MDA231, MDA361, MCF7, BT20, BT474, and T47D cells were grown in appropriate medium and cell lysates were collected in SDS lysis buffer (Cell Signaling). Lysates were cleared of insoluble material by microcentrifugation at 15,800 g for 15 min at 4°C, and protein concentrations were determined (protein assay kit; Bio-Rad, Hercules, CA, USA). Approximately 50 µg of total protein from each sample was denatured in loading buffer for 10 min, electrophoresed through 10% polyacrylamide gels, and electroblotted to a nylon transfer membrane (Schleicher & Schuell Bioscience, Keene, NH, USA). The membrane was incubated overnight at 4°C with primary antibody against phospho-Akt Ser473 (anti-rabbit; Cell Signaling), Akt (anti-rabbit; Cell Signaling) or β-actin (anti-mouse antibody; Sigma, St Louis, MO, USA). The membrane was then washed three times in Tris-buffered saline with 0.1% Tween 20 at room temperature and incubated for 1 hour at room temperature with horseradish-peroxidase-labeled secondary antibody (goat anti-rabbit IgG or goat anti-mouse IgG; Sigma). Signal detection was by horseradish peroxidase chemiluminescent reaction (ECL; Amersham).
Quantitative real-time PCR
For real-time PCR, specific primers and probes were designed using software from Applied Biosystems (Foster City, CA, USA) to amplify PIK3CA and the β-actin control (sequences are available on request). Using this combination and the protocol described by Mambo and colleagues [11], the samples were run in triplicate. Primers and probes for β-actin were run in parallel to standardize the input DNA (4 ng). Standard curves were developed using serial dilutions of DNA extracted from MCF12A. PCR amplifications were performed on an ABI 7900 TaqMan (Applied Biosystems) according to the manufacturer's protocol.
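A hedged sketch of the standard-curve quantification is shown below, using Ct = slope·log10(quantity) + intercept with slopes close to those reported later for PIK3CA and β-actin; the intercepts and Ct values are invented for illustration, and the scaling to 2 copies per diploid genome is our assumption.

```python
def quantity(ct, slope, intercept):
    # Invert the standard curve Ct = slope * log10(quantity) + intercept.
    return 10 ** ((ct - intercept) / slope)

def relative_copy_number(ct_pik3ca, ct_actin):
    """Target quantity normalized to the beta-actin input-DNA control,
    scaled to 2 copies per diploid genome (illustrative intercepts)."""
    q_target = quantity(ct_pik3ca, slope=-4.044, intercept=38.0)
    q_control = quantity(ct_actin, slope=-3.919, intercept=36.0)
    return 2.0 * q_target / q_control

copies = relative_copy_number(ct_pik3ca=29.5, ct_actin=31.0)
print(f"estimated copies: {copies:.1f}", "(gain)" if copies > 4 else "")
```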
Fluorescence in situ hybridization (FISH)
Bacterial artificial chromosome (BAC) clone RP11-466H15 for PIK3CA was obtained from Research Genetics (Invitrogen Corporation, Carlsbad, CA, USA). BAC DNA isolation was carried out using the standard laboratory protocol for phenol-chloroform extraction. The chromosome 3 α-satellite plasmid and BAC DNA were labeled directly with SpectrumOrange-dUTP® and SpectrumGreen-dUTP® (Vysis, Downers Grove, IL, USA), respectively, using the Vysis nick translation kit (Vysis) in accordance with the manufacturer's instructions. Slides were fixed using methanol:acetic acid (3:1), followed by pretreatment with RNase, and dual-color FISH was performed as described previously [12]. Slides were counterstained with 4',6-diamidino-2-phenylindole (DAPI; Sigma), mounted with antifade (Vysis), and stored at -20°C. At least 100 nuclei were evaluated for each sample. Analysis was carried out using an Olympus (New Hyde Park, NY, USA) BHS fluorescence microscope, and images were captured using a CytoVision Ultra (Applied Imaging, Santa Clara, CA, USA).
Apoptosis detection
We assessed cellular apoptosis using an Annexin V-FITC (fluorescein isothiocyanate) apoptosis detection kit (BD Biosciences, San Jose, CA, USA). Cells were cultured in 100-mm dishes until 50% confluent, serum-starved overnight, and then treated with LY294002, 3 µM and 10 µM, for 72 hours. Both detached and adherent cells were then collected and labeled with Annexin V-FITC and propidium iodide. Apoptosis was evaluated using a FACScan (Becton Dickinson ImmunoSystems, Mountain View, CA, USA) flow cytometer.
PIK3CA is frequently mutated in breast cancer cell lines and primary tumors
A previous report suggested that more than 80% of the mutations of the PIK3CA gene occur in three small clusters, namely in the p85 (exon 1), helical (exon 9) and kinase (exon 20) domains [10]. Based on this information, we sequenced exons 1, 9, and 20 in 15 breast cancer cell lines, 92 primary tumors, and 33 normal tissues. A total of five mutations were identified in the 15 breast cancer cell lines (33.3%). No mutations were detected in the normal epithelial cell line MCF12A. Three of the mutations were identified in exon 9 and two were found in exon 20. No mutation was identified in exon 1. The BT20 cell line contained two different mutations, C1616G in exon 9 and A3140G in exon 20 (Fig. 1), corresponding to the P539R and H1047R amino acid changes, respectively.
A total of 19 mutated cases (20.6%) were identified among the 92 primary tumors. Six of the 19 mutations were identified among the 33 tumor samples with matched normal tissue and were absent from the paired normal samples, indicating that the identified mutations are somatic. Thirteen of these 19 mutations were in exon 9, and 6 were in exon 20. No mutation was identified in exon 1 in any of the 92 tumors. As shown in Table 1, the E545K mutation in exon 9 and the H1047R mutation in exon 20 were the two most frequent mutations in both breast cancer cell lines and primary tumors.
Gain of copy number of the PIK3CA gene in breast cancer cell lines and primary tumors
To determine the PIK3CA gene copy number, we performed real-time quantitative PCR on 12 breast cancer cell lines, 92 primary tumors, and 33 normal controls. Standard curves for PIK3CA and β-actin amplification were generated using serially diluted MCF12A DNA and showed linearity over the range used. Fig. 2a shows the standard curve for PIK3CA amplification with a slope of -4.044, while Fig. 2b shows the standard curve for β-actin amplification with a slope of -3.919. We did not observe any deletion of β-actin in the tumor samples. Most samples showed no difference in β-actin amplification between the paired tumor and normal samples. A representative figure of β-actin amplification in a paired tumor and normal sample is shown in Fig. 2c. To evaluate the gene copy number in all samples, we set the cutoff at 4 copies. Among the 33 cases with paired tissue, 8 (24.2%) showed a much higher gene copy number than the normal controls (Fig. 3a). Only one case showed more than 4 copies. In the total of 92 cases of primary tumors, 8 (8.7%) had more than 4 copies, with the highest number being 7.8 copies (Fig. 3b). In addition, the PIK3CA gene copy number was also determined in 12 breast cancer cell lines, and the MCF7, T47D, and BT474 cell lines had more than 4 copies (Fig. 3c). We also confirmed the gene copy number results of these 12 cell lines with FISH analysis. Representative FISH images are shown in Fig. 3d. Thus, our gene copy analysis indicates that gene amplification/gain of copy number of the PIK3CA gene is not a frequent genetic alteration in breast cancer.
Biological effect of PIK3CA mutations in breast cancer
To determine whether mutation of PIK3CA correlates with activation of Akt (a downstream effector of PI3K that mediates carcinogenic events such as proliferation), we performed western blot analysis of Akt phosphorylation in several breast cancer cell lines. As shown in Fig. 4a, Akt phosphorylation was strongest in BT20 cells (which harbor two PIK3CA mutations) and MCF7 cells (which harbor a PIK3CA mutation and high PIK3CA gene copy number). We also observed weak phosphorylation of Akt in MDA361 (which has one mutation) and in BT474 and T47D (no observable mutation but high PIK3CA gene copy number). We did not observe phosphorylation of Akt in MDA231 (Fig. 4a) or in MCF12A and MDA157 cells (data not shown), which have no observable mutations and no copy number gain of PIK3CA. These data indicate that PIK3CA mutations might increase kinase activity and in turn activate the PI3K/AKT pathway.
We further investigated the biological effects of PIK3CA mutation by treating breast cancer cells with or without PIK3CA mutations with the PI3K inhibitor LY294002. As shown in Table 1, MCF7 harbors one mutation, E545K, and BT20 harbors two mutations, P539R and H1047R. These somatic mutations were recently shown to have oncogenic transforming activity [13]. As shown in Fig. 4b and 4c, the fractions of apoptotic cells at 72 hours after treatment with 3 µM and 10 µM LY294002 were increased in MCF7 and BT20 cells. In contrast, 3 µM and 10 µM LY294002 did not induce further apoptosis in MDA157 cells (Fig. 4c) or MDA231 cells (data not shown), even though serum starvation alone induced more than 50% apoptosis in MDA157 cells (Fig. 4c).
Discussion
This study describes two main findings. First, we show a 20.6% mutation rate of the PIK3CA gene in breast cancer, indicating that PIK3CA mutation is a frequent genetic alteration in this disease. The 8% mutation rate of PIK3CA in breast cancer reported in a previous study was probably an underestimate [10], likely because of the smaller number of cases examined. Another possible explanation is the grade status of the tumors used, as all of the tumors in our study were of high grade. It will be useful and interesting to explore in future work whether PIK3CA mutation correlates with tumor grade.
Second, CGH (comparative genomic hybridization) studies have shown that 3q26 is an amplified chromosome region in various cancers, including breast cancer [4,5]. Unfortunately, it was not previously possible to identify the PIK3CA gene amplification pattern because of the low resolution of the methods used. In our study, we used quantitative real-time PCR, a very sensitive and far more accurate technique [14,15], to specifically quantitate the genomic copy number of PIK3CA not only in primary breast tumors but also in paired tissues. Our data showed that gene amplification or gain of PIK3CA copy number is not a frequent genetic alteration event. This suggests that gene amplification is not the main molecular mechanism activating the PI3K/AKT-driven tumorigenesis pathway in breast cancer. Even though a complex and heterogeneous set of genetic alterations, including gene amplification/gain of copy number, deletion, and mutation, has been reported to be involved in the etiology of breast cancer [16,17], our paper confirmed that gain of gene copy number and somatic mutation of one oncogene exist in parallel in breast cancer. Both amplification/gain of gene copy number and somatic mutation of PIK3CA have been shown to be associated with increased PI3K activity and might contribute to cancer through inhibition of apoptosis [6,9]. Gene amplification/gain of gene copy number is well accepted as a later event in tumor progression [18,19], as is somatic mutation [10]. To determine the relation between somatic mutation and gain of gene copy number of the PIK3CA gene in breast cancer, we integrated our mutation and gene copy number data. As shown in Table 2 (the relation of somatic mutation and gain of copy number of PIK3CA in breast cancer), 19 (20.6%) of 92 cases had a PIK3CA gene mutation and 4 cases did not harbor a mutation but showed a gain of gene copy number. Overall, a quarter (23 of 92) of all breast tumors examined had either a mutation or a gain of copy number of PIK3CA. In addition, 15 of the 19 mutations were identified in tumors without gain of copy number of PIK3CA, suggesting that somatic mutations are a major contributory factor in the PIK3CA signaling pathway. Only four cases in the whole study had both a mutation and a gain of copy number of PIK3CA. We did not observe a significant association between somatic mutation and gain of PIK3CA gene copy number in the 92 breast tumors (Table 2). We suggest that further studies using a larger number of cases be undertaken to determine whether somatic mutation and gene amplification are independent genetic alterations in breast cancer.
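As an illustration, the association test described above can be reproduced from the counts quoted in the text (4 cases with both alterations, 15 with mutation only, 4 with gain only, and 92 - 23 = 69 with neither). The sketch below uses Fisher's exact test; the original analysis may have used a different statistic.

```python
from scipy.stats import fisher_exact

# 2x2 contingency table derived from the counts quoted in the text:
#                 gain of copy number   no gain
# mutation                 4               15
# no mutation              4               69
table = [[4, 15],
         [4, 69]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```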
Conclusion
The results from this study indicate that somatic mutation, rather than gene amplification, of PIK3CA is the main genetic alteration in breast cancer. The frequent and clustered mutations within PIK3CA make it an attractive molecular marker for early detection of breast cancer. In addition, the somatic mutations lead to activation of PIK3CA and correlate with activation of the PI3K/AKT pathway. Pharmacological inhibition of PI3K significantly induced apoptosis in cells with PIK3CA mutation, suggesting that PIK3CA might be a promising therapeutic target in breast cancer.
(While this manuscript was being written, Bachman KE and colleagues published their results in Cancer Biology and Therapy [20]. They also reported a somatic mutation rate of more than 20% in breast cancer, consistent with this study.) | 2014-10-01T00:00:00.000Z | 2005-05-31T00:00:00.000 | {
"year": 2005,
"sha1": "3017fe7fe1cc2998b7490e0b24cad13edec9933e",
"oa_license": "CCBY",
"oa_url": "https://breast-cancer-research.biomedcentral.com/track/pdf/10.1186/bcr1262",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "88851795495d64fde2dd8888f0638f8866da3949",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
259376581 | pes2o/s2orc | v3-fos-license | “Geen makkie”: Interpretable Classification and Simplification of Dutch Text Complexity
An inclusive society needs to facilitate access to information for all of its members, including citizens with low literacy and with non-native language skills. We present an approach to assess Dutch text complexity on the sentence level and conduct an interpretability analysis to explore the link between neural models and linguistic complexity features. Building on these findings, we develop the first contextual lexical simplification model for Dutch and publish a pilot dataset for evaluation. We go beyond previous work which primarily targeted lexical substitution and propose strategies for adjusting the model's linguistic register to generate simpler candidates. Our results indicate that continual pre-training and multi-task learning with conceptually related tasks are promising directions for ensuring the simplicity of the generated substitutions.
Introduction
Reading is a foundational skill for acquiring new information. Many sources of information are only available in written form, including educational material, newspaper articles, and letters from municipalities. Although many people learn how to read as a child, not everyone becomes equally skilled at it. In the Netherlands alone, more than 2.5 out of 14 million people over 16 years old are low-literate, meaning that they experience challenges with reading or writing. As a result, they face obstacles in achieving academic success and seeking employment.
[Footnotes: * Equal contribution. + The experiments were conducted when all authors were affiliated with Vrije Universiteit Amsterdam. 1 The colloquial Dutch expression "Geen makkie" in the title can be translated as "not easy" or "not a walk in the park".]
One way to address this problem is to reduce text complexity. Texts that contain many infrequent words and complex sentence structures are difficult to read, especially for readers with low literacy and for language learners. Automated natural language processing tools for text complexity assessment can help both by assisting editors in selecting suitable texts and by signaling potential comprehension problems to copywriters. By estimating text complexity, we can select texts that are sufficiently easy for a particular target audience or simplify texts that are too difficult.
Recent neural models for text complexity assessment have obtained good results in classifying texts into discrete categories of complexity (Deutsch et al., 2020;Martinc et al., 2021). The global classification label can be a first indicator but it does not point to specific parts of the input that are complex, leaving it to the human editor to identify the necessary simplifications. In this work, we first explore Dutch complexity prediction on the sentence level (as opposed to full-text classification in previous work) and then zoom in even further.
The complexity of a text is affected by an interplay of various factors, including its structural characteristics, domain, and layout. A crucial component is the choice of the lexical units and their complexity. A system for lexical simplification can support humans in detecting lexical complexity and suggest simpler alternatives. In the sentence children bear the future, and our resolution to support them determines the world they inherit, a lexical simplification model could propose to substitute bear with simpler words such as carry, hold, or shape. These suggestions can assist human writers in revising and simplifying their text.
Previous approaches to Dutch lexical simplification generated substitution candidates by naively substituting words according to a static alignment of synonyms without considering the context of the sentence. This approach does not account for ambiguous words and synonyms that only maintain semantic coherence in a subset of contexts. In the example above, resolution can be interpreted as intention, but in the context of TV screens, it refers to sharpness. In order to ensure meaning preservation, lexical simplification needs to be context-sensitive.
Contributions We fine-tune BERTje (de Vries et al., 2019), a Dutch pre-trained transformer model, to predict sentence-level complexity and use interpretability methods to show that it captures relevant linguistic cues. We visualize the local attribution values of the model's predictions in a demo to point end users to complex parts of the sentence. In order to facilitate the simplification process, we introduce LSBertje, the first contextual model for lexical simplification in Dutch. We explore three approaches to adapt the linguistic register of the model, to re-enforce a preference for simplicity in the generated substitutions.
Related Work
We discuss complexity assessment and lexical simplification as separate consecutive stages in line with related work.
Complexity Assessment
Text complexity is affected by the words we choose and the way we combine them into meaning. The complexity of individual words is determined by features such as length, frequency, morphological complexity, abstractness, and age of acquisition. At the sentence level, syntactic features such as parse tree depth, syntactic ambiguity, and the number of subordinate clauses affect complexity.
Features that indicate lexical variety, such as the type-token ratio, can also serve as a proxy for complexity (Schwarm and Ostendorf, 2005;Feng et al., 2009;Vajjala and Meurers, 2012).
Traditional surface-based metrics such as the Flesch-Kincaid score are widely used to automatically assess text complexity, but they only consider length characteristics and ignore the many intricate factors that influence text complexity. In contrast, feature-based machine learning models leverage numerous features to predict complexity labels, surpassing the capabilities of surface-based metrics (Collins-Thompson and Callan, 2005). Nevertheless, hand-engineering effective features is an expensive and time-consuming process (Filighera et al., 2019).
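To make the limitation concrete, here is a minimal sketch of the classic English Flesch-Kincaid grade-level formula. Note that it uses nothing but word, sentence, and (heuristically counted) syllable counts; Dutch adaptations of such readability formulas use different coefficients.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Classic English Flesch-Kincaid grade level: purely surface statistics
    (words per sentence, syllables per word). The syllable count is a naive
    vowel-group heuristic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat."))
print(flesch_kincaid_grade("Notwithstanding considerable ambiguity, "
                           "the committee deliberated interminably."))
```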
Neural models for classifying complexity do not rely on hand-engineered features and show marginal improvements over feature-based models (Deutsch et al., 2020;Martinc et al., 2021), but they lack interpretability. In this study, we analyze if neural models leverage relevant linguistic cues when predicting binary complexity labels for Dutch sentences and can therefore reliably detect sentences that qualify for a simplification procedure.
Lexical Simplification
Lexical simplification characterizes a substitution operation on the lexical level with the goal of reducing the complexity of a sentence and making the text accessible to a wider audience. Lexical simplification of a sentence is typically performed as a pipeline of four consecutive stages: complex word identification, substitution generation, substitution selection and substitution ranking (Sikka and Mago, 2020;Thomas and Anderson, 2012;Paetzold and Specia, 2017b). In this work, we focus on the first two stages.
Complex Word Identification In the initial stage, words with simplification potential need to be identified. Traditional approaches for this subtask use curated lists of complex words (Lee and Yeung, 2018) or word frequency resources to flag words below a certain frequency threshold as complex (Sikka and Mago, 2020). In the most recent shared task for complex word identification (Yimam et al., 2018), feature-based machine learning techniques using length and frequency features obtained the best results. More recent approaches express lexical complexity on a continuous scale (Shardlow et al., 2021) as a binary classification is too simplistic for most educational scenarios. We explore the applicability of gradient-based interpretability techniques for complex word identification (Danilevsky et al., 2020;Sundararajan et al., 2017).
Substitution Generation
The generation of substitution candidates has traditionally been performed with lexical resources such as WordNet (Miller, 1995; Carroll et al., 1998). In a more data-driven approach, simple-complex word pairs have been extracted from a parallel corpus that aligns sentences in Wikipedia with their counterparts in Simple Wikipedia (Kauchak, 2013; Paetzold and Specia, 2017a). These static approaches are unable to generate substitution candidates for words that do not occur in the resources or that are spelled differently. In addition, they are prone to generating semantically incoherent candidates, since the substitutions are not context-sensitive.
Context-Aware Substitution Generation For meaning-preserving simplification, it is important to consider the context of the complex word. Paetzold and Specia (2016b) propose to use the part of speech of a word to narrow down its meaning. Their approach relies on proximity in a static embedding space to find simplifications, which are then disambiguated with respect to their part of speech. As a result, the relatively simple noun bear is represented by a different vector than the rather complex verb bear. This syntactically informed approach leads to improvements over non-contextualized models, but it still falls short of capturing more fine-grained differences in meaning; even the verb bear can be used in a semantic spectrum ranging from bearing/delivering a child to bearing/having a resemblance.
To capture such subtle distinctions, recent approaches use contextualized language models such as BERT (Devlin et al., 2019) to generate substitutions tailored to the specific context. Alarcón et al. (2021) search the contextual embedding space of a complex word to find context-aware simplification candidates. They find antonyms of the complex word among the generated candidates, which is detrimental to the goal of preserving the meaning of the complex sentence. Qiang et al. (2020) introduce LSBert, which uses a prompting strategy based on BERT's masked language modeling objective to generate context-aware lexical simplification candidates for English sentences. They generate simplifications by masking the complex word. In order to enforce semantic coherence of the masked word, Qiang et al. (2020) feed the input sentences as a duplicated pair and apply the masking operation only on the second sentence. In the recent shared task on multi-lingual lexical simplification (Saggion et al., 2022), approaches that use pre-trained language models produced very competitive results. In all three languages covered in the shared task, English, Spanish, and Portuguese, state-of-the-art results were obtained. In this work, we evaluate the LSBert lexical simplification approach and adapt it to Dutch.
Complexity Assessment and Simplification for Dutch
Work on complexity and simplification for Dutch is sparse. Vandeghinste and Bulte (2019) analyze complexity classification at the document level using feature-based classifiers, but there is currently no known work on neural sentence-level complexity classification for Dutch. Regarding lexical simplification, Bulté et al. (2018) develop a pipeline using various resources. However, systematically evaluating the pipeline is challenging as there is no existing benchmark dataset for lexical simplification in Dutch.
Complexity Classification
We train a neural classifier for determining binary labels of Dutch sentence complexity and compare its performance to several feature-based classifiers. We then analyze if the neural model captures relevant complexity cues.
Experimental Setup
Data We contrast articles from the Dutch newspapers De Standaard and Wablieft, in line with Vandeghinste and Bulte (2019). The two newspapers cover similar topics and events. As Wablieft targets an audience that prefers simpler language, its articles are significantly shorter (on average, 164 words in Wablieft articles vs 383 words in De Standaard articles). The source of an article (Wablieft vs De Standaard) can therefore be easily determined by its length. However, identifying the source is just a proxy for identifying the linguistic characteristics that determine complexity. To go beyond this superficial approach, we instead train our models to predict the complexity of individual sentences. The corpus contains 12,683 articles from Wablieft and 31,140 articles from De Standaard. We create a balanced dataset by randomly selecting 12,000 articles from each newspaper and preprocessing them using the same steps as Vandeghinste and Bulte (2019). We split the articles into individual sentences and only keep the first sentence of each article to keep the dataset balanced. We label all sentences from Wablieft articles as easy and all sentences from De Standaard as complex. We use 80% of the data for training, 10% for validation, and 10% for testing. The validation set was used for checking model accuracy at each epoch. Statistics on the length and frequency of the words in both types of sentences are shown in Table 1.
[Table 1 fragment: word-frequency statistics 4.95 (1.95-6.38) and 4.78 (1.39-6.44) for the two sentence classes.]
Models We fine-tune BERTje (de Vries et al., 2019), available from Huggingface, and add a linear output layer with ReLU activation and dropout (0.5); a minimal sketch of this setup is given below. The model is optimized using ADAM with a learning rate of 1e-6 and cross-entropy loss. We use Support Vector Machines (SVM) as our feature-based classification models, employing the scikit-learn implementation with all default parameters (Pedregosa et al., 2011).
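Below is a rough sketch of the neural classifier described above; the pooling strategy, the ordering of the head layers, and the Huggingface model identifier are assumptions of this sketch, as the text does not specify them.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ComplexityClassifier(nn.Module):
    """BERTje encoder with a linear output layer, ReLU, and dropout (0.5),
    mirroring the setup described above. Pooling via the [CLS] token is an
    assumption; the paper does not specify it."""
    def __init__(self, model_name: str = "GroNLP/bert-base-dutch-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.5)
        self.relu = nn.ReLU()
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        cls = hidden[:, 0]  # [CLS] representation
        return self.classifier(self.dropout(self.relu(cls)))

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = ComplexityClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)  # lr from the text
loss_fn = nn.CrossEntropyLoss()

batch = tokenizer(["Dit is een zin."], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
loss = loss_fn(logits, torch.tensor([0]))  # 0 = easy, 1 = complex
loss.backward()
optimizer.step()
```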
Complexity Features Our complexity features can be grouped into three categories: length characteristics, frequency effects, and morpho-syntactic properties. Word frequencies are obtained as standardized Zipf frequencies using the Python package wordfreq (Speer et al., 2018), which combines several frequency resources, including SUBTLEX lists, e.g., Brysbaert and New (2009), and OpenSubtitles (Lison and Tiedemann, 2016). The morpho-syntactic features are computed using the Profiling-UD tool (Brunato et al., 2020). We calculate all features on the sentence level and train our feature-based models on different combinations of these features. An overview of the features is given in Table 3, and a sketch of the feature computation is shown below.
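A minimal sketch of the feature computation with the named packages, assuming naive whitespace tokenization and a simplified feature subset (the Profiling-UD morpho-syntactic features are omitted):

```python
import numpy as np
from wordfreq import zipf_frequency
from sklearn.svm import SVC

def sentence_features(sentence: str) -> list:
    """Length and frequency features on the sentence level."""
    words = sentence.split()
    freqs = [zipf_frequency(w.lower().strip(".,!?"), "nl") for w in words]
    return [
        len(words),                               # sentence length in words
        float(np.mean([len(w) for w in words])),  # mean word length
        float(np.mean(freqs)),                    # mean Zipf frequency
        float(min(freqs)),                        # rarest word in the sentence
    ]

X = np.array([sentence_features(s) for s in
              ["De kat slaapt.", "Desalniettemin persisteerde de polemiek."]])
y = np.array([0, 1])   # 0 = easy, 1 = complex
clf = SVC().fit(X, y)  # default scikit-learn parameters, as in the text
print(clf.predict(X))
```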
Results
Table 2 shows the prediction accuracy of the fine-tuned BERTje model and several feature-based SVM classifiers for sentence-level complexity classification. The neural model outperforms all feature-based models by 10 percent or more. For the feature-based classifiers, the best results are obtained by combining all types of features (frequency + length + morpho-syntactic), but the morpho-syntactic features improve the frequency- and length-based classifiers by only 1 percent accuracy. This might be because the morpho-syntactic features are correlated with length (e.g., parse tree depth naturally increases as sentence length increases). We conclude that frequency and length are the most predictive features for Dutch sentence-level complexity classification, in line with previous work for English (Vajjala Balakrishna, 2015).
Prediction Confidence To gain more insight into the linguistic cues that the neural model relies on, we analyze model confidence with respect to the complexity features that our feature-based models were trained on. Table 3 shows the Spearman correlation between complexity features and model confidence for the complex class. The model allocates higher probability to the complex class when word length, sentence length, dependency link length, or the number of low-frequency words increases. As the classification is binary, the inverse relationship holds for the easy class.
Since the correlation values in Table 3 are relatively low, we analyze the corresponding scatter plots. Figure 1 depicts the correlation between model confidence for the complex class and the maximum dependency link of the input sentences. We see that low to medium values for the maximum dependency link length do not clearly affect model confidence, but that high dependency link values always lead to high confidence. We observe the same pattern for the other complexity features. This suggests that the model considers relevant complexity features when making its predictions, but that the evidence needs to be strong enough (i.e., the sentence should be sufficiently complex).
Complex Word Identification
Our results indicate that the fine-tuned BERTje model is a reliable tool for sentence-level complexity classification. It can show an editor which sentences qualify for simplification. Nevertheless, binary complexity classification is an overly simplified operationalization that lacks educational usability. We go one step further and combine the model with feature attribution methods and analyze its utility for the first component of the lexical simplification pipeline: complex word identification.
We implement a demo that explains the predictions of our neural complexity classifier. Users can type Dutch input sentences, which are classified as either easy or complex. Words that contributed positively or negatively to the model's prediction are highlighted, as shown in Figure 2. We use Captum (Kokhlikyan et al., 2020) for extracting token-level attributions. Additionally, the sentence-level complexity features from Table 3 are calculated and shown to the user, giving a more fine-grained perspective on the complexity of the input sentence (see Appendix Figure 4).
Attribution Methods Selecting the right attribution method is not straightforward. Different attribution methods produce varying, sometimes even contrasting explanations for model predictions (Bastings et al., 2022). Atanasova et al. (2020) find that gradient-based techniques produce the best explanations across different model architectures and text classification tasks. We therefore include three gradient-based attribution methods in our demo: Gradient, InputXGradient, and Integrated Gradients. The vanilla Gradient method estimates feature importance by calculating the gradient (i.e., the rate of change) of a model's output with respect to a given input feature (Danilevsky et al., 2020). InputXGradient additionally multiplies the gradients with the input, and Integrated Gradients integrates the gradient of the model's output with respect to the input features along a chosen path between a feature x and a baseline x' (Sundararajan et al., 2017). We use the [PAD] token as our baseline.
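A sketch of how such token-level attributions can be extracted with Captum for the classifier sketched earlier; the wiring of the forward function and the choice of target class are illustrative assumptions.

```python
import torch
from captum.attr import LayerIntegratedGradients

# `model` and `tokenizer` as in the classifier sketch above.
def forward_fn(input_ids, attention_mask):
    logits = model(input_ids, attention_mask)
    return torch.softmax(logits, dim=-1)[:, 1]  # P(complex)

lig = LayerIntegratedGradients(forward_fn, model.encoder.embeddings)

enc = tokenizer("Dit is een ingewikkelde zin.", return_tensors="pt")
# [PAD] baseline, as described in the text.
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

attributions = lig.attribute(enc["input_ids"],
                             baselines=baseline,
                             additional_forward_args=(enc["attention_mask"],))
token_scores = attributions.sum(dim=-1).squeeze(0)  # one score per token
for tok, score in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]),
                      token_scores.tolist()):
    print(f"{tok:>15s}  {score:+.3f}")
```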
Linguistic Plausibility of Attributions Explanations of the complexity predictions are most useful for end-users of the demo (e.g. teachers) if the attribution scores are linguistically plausible. This means that the scores should match our expectations of what makes a sentence complex or easy to understand. Given the intended use of the demo for complex word identification, we analyze the linguistic plausibility of the attributions with respect to lexical complexity. We expect short and frequent words to receive high attributions when the model predicts that a sentence is easy to understand, while longer and less frequent words should receive high attributions when the model predicts that the sentence is complex.
To better understand the differences between our selected attribution methods and to analyze the linguistic plausibility of the observed patterns, we calculate the Spearman correlation between lexical complexity features and attribution scores. Since our model uses subword tokenization, both attribution scores and complexity features are calculated on the subword level. We exclude the special tokens [CLS] and [SEP] from our analyses. Table 4 shows that Integrated Gradients is the only method for which the correlations have the expected directionality, i.e., when the model predicts the easy class, high attributions are assigned to short/frequent words, and when the model predicts the complex class, high attributions are assigned to long/infrequent words. For InputXGradient, we see the opposite pattern, and for Gradient, the directionality of the correlations is the same for both the easy and the complex class. The inconsistency of the three attribution methods is surprising but in line with previous findings (Bastings et al., 2022). More user-centered analyses are required to identify their practical benefits.
To further explore the linguistic plausibility of the attribution scores, we calculate average attribution scores with respect to part-of-speech tags. We again find that the most plausible attributions are generated by the Integrated Gradients approach. In Figure 3, we see that nouns, adverbs, and adjectives are assigned relatively high importance scores when the model predicts the easy class. Prepositions, conjunctions, and complementizers receive higher importance when the model predicts the complex class. This is plausible since function words often signal a complex sentence structure, while easier sentences typically contain more content words. Additionally, we observe that subwords, which indicate the presence of compound words, receive higher scores when the model predicts the complex class. This is helpful for lexical simplification, as compound words are often challenging to read. Finally, we observe that determiners receive high scores when the model predicts the easy class, which aligns with lexical complexity since determiners are short and frequent.
Context-Aware Simplification
In the second step of the simplification pipeline, we generate context-aware simplifications for Dutch.
LSBertje We present LSBertje, the first model for contextualized lexical simplification in Dutch. We base LSBertje on LSBert (Qiang et al., 2019, 2020), altering its language-specific components to Dutch: we replace the language model that generates simplifications with the Dutch BERT model, BERTje, and we replace the stemmer used in filtering with the Dutch snowball stemmer. A minimal sketch of the generation step is shown below.
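A minimal sketch of the LSBert-style generation step with Dutch components; the Huggingface identifier for BERTje and the exact way the duplicated sentences are joined are assumptions of this sketch, and the full LSBert pipeline includes additional ranking steps not shown here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
from nltk.stem.snowball import SnowballStemmer

MODEL = "GroNLP/bert-base-dutch-cased"  # assumed identifier for BERTje
tok = AutoTokenizer.from_pretrained(MODEL)
mlm = AutoModelForMaskedLM.from_pretrained(MODEL)
stemmer = SnowballStemmer("dutch")

def substitution_candidates(sentence: str, complex_word: str, k: int = 10):
    """LSBert-style generation: feed the sentence twice and mask the complex
    word only in the second copy, so the first copy anchors the meaning.
    Candidates sharing a stem with the original word are filtered out."""
    masked = sentence.replace(complex_word, tok.mask_token, 1)
    enc = tok(sentence, masked, return_tensors="pt")  # sentence-pair encoding
    with torch.no_grad():
        logits = mlm(**enc).logits
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    top_ids = logits[0, mask_pos].topk(k).indices
    cands = [tok.convert_ids_to_tokens(i.item()) for i in top_ids]
    return [c for c in cands
            if c.isalpha() and stemmer.stem(c) != stemmer.stem(complex_word)]

print(substitution_candidates("De gemeente verstrekt een vergunning.",
                              "verstrekt"))
```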
Dutch Evaluation Data
Dutch evaluation data for lexical simplification does not yet exist. To evaluate our approach, we develop a pilot benchmark dataset using authentic municipal data. From a collection of 15,334 sentences drawn from 48 municipal documents, we select sentences based on the presence of a complex word from a list curated by domain experts and on their word count (less than 20 words). We exclude incomplete sentences such as headers, sentences without verbs, and sentences with fewer than four words. From the remaining 6,084 sentences, we randomly sample 250 of the complex words from the list and find a suitable sentence for 108 of them. Eight sentences were removed because simplification was not possible: 1) the complex word was part of a named entity, 2) the sentence was incomplete, or 3) a simple sense of the word was used. This resulted in 100 sentences.
The sentences were simplified by 23 native speakers of Dutch who were pursuing or had obtained an academic degree. They were shown a sentence with the complex word highlighted and five simplification options generated by LSBertje. The annotators could select from these options and propose additional simplifications. For five sentences, no annotator could come up with a lexical simplification candidate. The remaining 95 sentences contained an average of 2.9 simplification candidates, with a maximum of 7. Table 5 shows that the LSBertje model yields good simplification performance on our dataset. The potential metric shows that the model predicted at least one correct simplification candidate for 85% of the sentences. It should be noted that the English benchmark datasets come with greater variety: in our dataset, a sentence is annotated with 2.9 simplifications on average, whereas BenchLS lists 7.4 substitutions. These size differences can explain the slightly lower potential score and the higher recall for Dutch.
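For reference, the evaluation metrics reported in Table 5 can be computed roughly as follows; this reflects our reading of the standard substitution-generation metrics, and the exact scorer used for the benchmarks may differ in details.

```python
def evaluate(generated, gold):
    """Potential, precision, recall, and F1 for substitution generation:
    potential = share of instances with at least one correct candidate;
    precision = share of generated candidates that appear in the gold set;
    recall = share of gold candidates that were generated."""
    hits = sum(1 for g, ref in zip(generated, gold) if set(g) & set(ref))
    matched = sum(len(set(g) & set(ref)) for g, ref in zip(generated, gold))
    n_gen = sum(len(g) for g in generated)
    n_gold = sum(len(ref) for ref in gold)
    potential = hits / len(gold)
    precision = matched / n_gen if n_gen else 0.0
    recall = matched / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return potential, precision, recall, f1

gen = [["geeft", "levert"], ["makkelijk"]]
ref = [["geeft", "biedt", "levert"], ["simpel", "eenvoudig"]]
print(evaluate(gen, ref))  # (0.5, 0.67, 0.4, 0.5)
```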
Results and Analysis
To evaluate the simplicity of the generated substitutions, we assess their frequency using the SUBTLEX-NL corpus (Keuleers et al., 2010) and find that 517 out of 650 generated words occur with higher frequency than the original word. This indicates that the generated simplifications are indeed simpler.
Register Adaptation Techniques
LSBertje relies on a base model that was pretrained for masked language modeling and captures aspects of text complexity only as an incidental byproduct. It uses a masked language modeling mechanism that induces semantic preservation by repeating the input sentence. The goal of generating simpler substitutions is only implicitly targeted by restricting the generation to tokens consisting of a single subtoken. This effectively prevents the model from generating infrequent or morphologically more complex words, but the model is not explicitly optimized for capturing different levels of text complexity. We explore three strategies to adapt the linguistic register of the model so that it generates simpler substitutions: conceptual finetuning, continual pre-training, and multi-task learning.
Conceptual Fine-tuning We aim to adapt the linguistic register of the model by fine-tuning LSBert to predict the linguistic complexity of sentences before applying it for generating substitution candidates. The model is fed a pair of sentences and is trained to predict whether the first sentence is simpler or more complex than the second. We use sentence pairs from the sentence-aligned simple-complex Wikipedia corpus (Kauchak, 2013). The sentences are balanced with respect to the simplification order condition, and we experiment with the number of sentences.
Continual Pre-Training For the second strategy, we adapt the linguistic register by exposing the model to simpler texts using continual pre-training. We continue pre-training with the combination of masked language modeling and next-sentence prediction, using only sentences from Simple Wikipedia. We pair each sentence either with the directly following sentence or with a randomly selected sentence from another Wikipedia article.
Multi-Task Learning
We then combine the two ideas and train a model on two tasks simultaneously. We use the same training method but replace next-sentence prediction with complexity prediction.
Experimental Setup
As the Dutch dataset is too small for representative evaluation, we first explore the register adaptation strategies using English evaluation data and the English LSBert model.
Evaluation Data
We evaluate the models on three commonly used benchmark datasets. They consist of sentences from Wikipedia with the complex word highlighted and a list of human-generated simplifications. LexMTurk (Horn et al., 2014), BenchLS (Paetzold and Specia, 2016a), and NNSEval (Paetzold and Specia, 2016b) contain 500, 929, and 239 sentences, respectively.
Implementation Details
We base our implementation on the Huggingface BertForPreTraining documentation (https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#bertforpretraining) and use the same underlying model as LSBert (bert-large-uncased-whole-word-masking); a minimal sketch is given below. For the masked language modeling component, we mask 15% of the tokens in the input sentences. Optimization is performed using an ADAM optimizer and a batch size of two. The continual pre-training is run for two epochs, the multi-task learning for four epochs. We varied the learning rate (5e-5, 5e-6, 5e-7) and the number of sentences (1,000, 10,000, 50,000).
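A compressed sketch of the continual pre-training objective (masked language modeling plus next-sentence prediction), assuming the model named above; data loading and sentence pairing are reduced to a single toy pair.

```python
import torch
from transformers import BertForPreTraining, BertTokenizer

name = "bert-large-uncased-whole-word-masking"
tok = BertTokenizer.from_pretrained(name)
model = BertForPreTraining.from_pretrained(name)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-6)

sent_a = "The cat sleeps on the mat."
sent_b = "It wakes up at noon."  # true next sentence -> label 0
enc = tok(sent_a, sent_b, return_tensors="pt")

# Mask 15% of the tokens for the masked-language-modeling objective.
# (In practice, special tokens such as [CLS]/[SEP] should be excluded.)
labels = enc["input_ids"].clone()
mask = torch.rand(labels.shape) < 0.15
labels[~mask] = -100                 # ignore unmasked positions in the loss
inputs = enc["input_ids"].clone()
inputs[mask] = tok.mask_token_id

out = model(input_ids=inputs,
            attention_mask=enc["attention_mask"],
            token_type_ids=enc["token_type_ids"],
            labels=labels,
            next_sentence_label=torch.tensor([0]))
out.loss.backward()
optimizer.step()
print(float(out.loss))
```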
Results
We find that the model adapted with conceptual fine-tuning lost its ability to perform masked language modeling. Its predictions for bear in children bear the future were: swallowed, if, knicks, cats, nichol. These predictions clearly indicate a case of catastrophic forgetting (Liu et al., 2020). In learning a new task, the model forgot its original capabilities.
Both continual pre-training and multi-task learning lead to improved performance on the simplification task in two and three configurations, respectively. We find that the configuration with a learning rate of 5e-6 and 10,000 sentences is the best for both fine-tuning methods, as shown in Table 5. See the Appendix for all scores.
[Table 5: Simplification performance of the register adaptation techniques as potential (Pot.), precision (P), recall (R), and F1 for the configuration with a learning rate of 5e-6 and 10,000 fine-tuning sentences.]
The multi-task learning strategy seems to be the most promising approach. We test the robustness of our findings by training the model using 26 different random seeds. The model outperforms LSBert in 20 cases; see Table 8 of the Appendix for a detailed overview. Overall, we see an increase in precision, recall, and F1-score. While the model's performance is highly sensitive to task-specific components (the learning rate and the number of sentences), it remains robust under variation of the task-independent random seed.
The results indicate that multi-task learning is a promising strategy for adapting the model's linguistic register.
Analysis
We analyze the effect of the register adaptation techniques by comparing the frequency of the generated substitutions, using the same resource as Qiang et al. (2019), which contains word frequency counts for Wikipedia articles and a children's book corpus. The fine-tuned model generates simplifications that occur more frequently than the substitutions generated by LSBert (13,030 vs 20,000 occurrences on average). When we zoom in on the generations, we find that the fine-tuned model correctly generates 356 words that were not captured by LSBert and that these words have a high average frequency of 27,000. These findings indicate that the fine-tuning process indeed leads to the generation of simpler words.
Register Adaptation Results for Dutch
Due to the absence of a sentence-aligned simplification corpus for Dutch, we only test the continual pre-training strategy on the Dutch data. The results show that the improvements obtained for English cannot yet be observed for Dutch. In the future, we plan to extend our experiments to a larger dataset and to the multi-task learning strategy.
Conclusion
In this work, we have introduced two state-of-the-art components for complexity prediction and simplification in Dutch. These can support teachers and text editors in making texts more accessible for people who face reading challenges.
We developed a demo that predicts binary complexity labels for Dutch sentences and highlights words that contributed positively or negatively to the prediction. Additionally, the demo interface provides scales for different aspects of sentence-level complexity to enable a more fine-grained interpretation by the user.
We introduced LSBertje, which is, to the best of our knowledge, the first model for contextualized lexical simplification in Dutch. We show that the model can generate adequate simplifications without additional fine-tuning. This base setup can serve as a reasonable starting scenario for context-aware simplification generation for resource-poor languages. We developed a pilot evaluation dataset for Dutch that allowed us to perform initial comparisons. For a more elaborate analysis, a larger Dutch dataset needs to be curated in future work.
We explored strategies to adapt the linguistic register of the model to ensure the simplicity of the generated substitutions and find that both multi-task learning and continual pre-training show considerable potential. We further analyzed the model's robustness and discovered a strong sensitivity to task-specific hyperparameters but little variation across random seeds.
[Table 8: Multi-task learning results for NNSEval with varying random seeds. The learning rate is fixed at 5e-6 and fine-tuning is conducted on 10,000 sentences.] | 2023-07-10T13:04:36.398Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "192f19a2c288d71889aa9e3f73be28e6ef6667fd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "192f19a2c288d71889aa9e3f73be28e6ef6667fd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235795804 | pes2o/s2orc | v3-fos-license | The Fundamental Theorem of Natural Selection
Suppose we have $n$ different types of self-replicating entity, with the population $P_i$ of the $i$th type changing at a rate equal to $P_i$ times the fitness $f_i$ of that type. Suppose the fitness $f_i$ is any continuous function of all the populations $P_1, \ldots, P_n$. Let $p_i$ be the fraction of replicators that are of the $i$th type. Then $p = (p_1, \ldots, p_n)$ is a time-dependent probability distribution, and we prove that its speed as measured by the Fisher information metric equals the variance in fitness. In rough terms, this says that the speed at which information is updated through natural selection equals the variance in fitness. This result can be seen as a modified version of Fisher's fundamental theorem of natural selection. We compare it to Fisher's original result as interpreted by Price, Ewens and Edwards.
Introduction
In 1930, Fisher [10] stated his "fundamental theorem of natural selection" as follows: "The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time." Some tried to make this statement precise as follows: the time derivative of the mean fitness of a population equals the variance of its fitness. But this is only true under very restrictive conditions, so a controversy was ignited.
An interesting resolution was proposed by Price [14], and later amplified by Ewens [8] and Edwards [7]. We can formalize their idea as follows. Suppose we have $n$ types of self-replicating entity, and idealize the population of the $i$th type as a positive real-valued function $P_i(t)$. Suppose
$$\frac{d}{dt} P_i(t) = f_i(P_1(t), \ldots, P_n(t)) \, P_i(t)$$
where the fitness $f_i$ is a differentiable function of the populations of every type of replicator. The mean fitness at time $t$ is
$$\overline{f}(t) = \sum_{i=1}^n f_i(P_1(t), \ldots, P_n(t)) \, p_i(t)$$
where $p_i(t)$ is the fraction of replicators of the $i$th type:
$$p_i(t) = \frac{P_i(t)}{\sum_{j=1}^n P_j(t)}.$$
By the product rule, the rate of change of the mean fitness is the sum of two terms:
$$\frac{d}{dt} \overline{f}(t) = \sum_{i=1}^n f_i \, \dot{p}_i(t) + \sum_{i=1}^n \dot{f}_i \, p_i(t).$$
The first of these two terms equals the variance of the fitness at time $t$. We give the easy proof in Theorem 1. Unfortunately, the conceptual significance of this first term is much less clear than that of the total rate of change of mean fitness. Ewens concluded that "the theorem does not provide the substantial biological statement that Fisher claimed". But there is another way out, based on an idea Fisher himself introduced in 1922: Fisher information [9]. Fisher information gives rise to a Riemannian metric on the space of probability distributions on a finite set, called the "Fisher information metric" (or, in the context of evolutionary game theory, the "Shahshahani metric") [1,2,15]. Using this metric we can define the speed at which a time-dependent probability distribution changes with time. We call this its "Fisher speed". Under just the assumptions already stated, we prove in Theorem 2 that the square of the Fisher speed of the probability distribution equals the variance of the fitness at time $t$.
As explained by Harper [11,12], natural selection can be thought of as a learning process and studied using ideas from information geometry [3], that is, the geometry of the space of probability distributions. As $p(t)$ changes with time, the rate at which information is updated is closely connected to its Fisher speed. Thus, our revised version of the fundamental theorem of natural selection can be loosely stated as follows: as a population changes with time, the rate at which information is updated equals the variance of fitness. The precise statement, with all the hypotheses, is in Theorem 2. But one lesson is this: variance in fitness may not cause "progress" in the sense of increased mean fitness, but it does cause change.
The time derivative of mean fitness
Suppose we have $n$ different types of entity, which we call replicators. Let $P_i(t)$, or $P_i$ for short, be the population of the $i$th type of replicator at time $t$, which we idealize as taking positive real values. Then a very general form of the Lotka-Volterra equations says that
$$\frac{d}{dt} P_i(t) = f_i(P_1(t), \ldots, P_n(t)) \, P_i(t) \qquad (1)$$
where $f_i \colon [0,\infty)^n \to \mathbb{R}$ is the fitness function of the $i$th type of replicator. One might also consider fitness functions with explicit time dependence, but we do not do so here. Let $p_i(t)$, or $p_i$ for short, be the probability at time $t$ that a randomly chosen replicator will be of the $i$th type. More precisely, this is the fraction of replicators of the $i$th type:
$$p_i = \frac{P_i}{\sum_{j=1}^n P_j}. \qquad (2)$$
Using these probabilities we can define the mean fitness $\overline{f}$ by
$$\overline{f} = \sum_{i=1}^n f_i(P_1, \ldots, P_n) \, p_i \qquad (3)$$
and the variance in fitness by
$$\mathrm{Var}(f(P)) = \sum_{i=1}^n \left( f_i(P_1, \ldots, P_n) - \overline{f} \right)^2 p_i. \qquad (4)$$
These quantities are also functions of $t$, but we suppress the $t$ dependence in our notation.
Fisher said that the variance in fitness equals the rate of change of mean fitness. Price [14], Ewens [8] and Edwards [7] argued that Fisher only meant to equate part of the rate of change in mean fitness to the variance in fitness. We can see this in the present context as follows. The time derivative of the mean fitness is the sum of two terms:
$$\frac{d}{dt} \overline{f} = \sum_{i=1}^n f_i \, \dot{p}_i + \sum_{i=1}^n \dot{f}_i \, p_i \qquad (5)$$
and as we now show, the first term equals the variance in fitness.
Theorem 1. Suppose positive real-valued functions $P_i(t)$ obey the Lotka-Volterra equations (1) for some continuous functions $f_i \colon [0,\infty)^n \to \mathbb{R}$. Then
$$\sum_{i=1}^n f_i(P) \, \dot{p}_i = \mathrm{Var}(f(P)).$$
Proof. First we recall a standard formula for the time derivative $\dot{p}_i$. Using the definition of $p_i$ in equation (2), the quotient rule gives
$$\dot{p}_i = \frac{\dot{P}_i}{\sum_j P_j} - \frac{P_i \sum_j \dot{P}_j}{\left( \sum_j P_j \right)^2}$$
where all sums are from 1 to $n$. Using the Lotka-Volterra equations this becomes
$$\dot{p}_i = \frac{f_i P_i}{\sum_j P_j} - \frac{P_i \sum_j f_j P_j}{\left( \sum_j P_j \right)^2}$$
where we write $f_i$ to mean $f_i(P_1, \ldots, P_n)$, and similarly for $f_j$. Using the definition of $p_i$ again, this simplifies to
$$\dot{p}_i = f_i p_i - \Big( \sum_j f_j p_j \Big) p_i$$
and thanks to the definition of mean fitness in equation (3), this reduces to the well-known replicator equation:
$$\dot{p}_i = \left( f_i - \overline{f} \right) p_i. \qquad (6)$$
Now, the replicator equation implies
$$\sum_i f_i \, \dot{p}_i = \sum_i f_i \left( f_i - \overline{f} \right) p_i. \qquad (7)$$
On the other hand, since $\sum_i f_i p_i = \overline{f}$ but also $\sum_i \overline{f} p_i = \overline{f}$, we have
$$\sum_i \overline{f} \left( f_i - \overline{f} \right) p_i = 0. \qquad (8)$$
Subtracting equation (8) from equation (7) we obtain
$$\sum_i f_i \, \dot{p}_i = \sum_i \left( f_i - \overline{f} \right)^2 p_i = \mathrm{Var}(f(P)).$$
The second term of equation (5) only vanishes in special cases, e.g. when the fitness functions $f_i$ are constant. When the second term vanishes we have
$$\frac{d}{dt} \overline{f} = \mathrm{Var}(f(P)) \geq 0.$$
This is a satisfying result. It says the mean fitness does not decrease, and it increases whenever some replicators are more fit than others, at a rate equal to the variance in fitness. But we would like a more general result, and we can state one using a concept from information theory: the Fisher speed.
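As a quick numerical sanity check (not part of the original paper), one can integrate the Lotka-Volterra equations with an arbitrary smooth fitness function, estimate $\dot{p}$ by finite differences, and confirm that $\sum_i f_i \dot{p}_i$ matches $\mathrm{Var}(f(P))$, as does the squared Fisher speed $\sum_i \dot{p}_i^2 / p_i$ that appears in Theorem 2 below:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
P = rng.uniform(0.5, 2.0, n)  # initial positive populations
dt = 1e-6                     # small step so the finite difference is accurate

def fitness(P):
    """Arbitrary smooth fitness functions f_i(P_1, ..., P_n)."""
    return np.array([1.0 - 0.1 * P.sum() + 0.3 * np.sin(i + P[i])
                     for i in range(n)])

for _ in range(5):
    f = fitness(P)
    p = P / P.sum()
    fbar = (f * p).sum()
    var = ((f - fbar) ** 2 * p).sum()         # Var(f(P)), equation (4)
    P_next = P + dt * f * P                   # Euler step of dP_i/dt = f_i P_i
    p_next = P_next / P_next.sum()
    p_dot = (p_next - p) / dt                 # finite-difference estimate
    theorem1 = (f * p_dot).sum()              # should equal Var(f(P))
    fisher_speed_sq = (p_dot ** 2 / p).sum()  # squared Fisher speed (Theorem 2)
    print(f"Var = {var:.8f}  sum f_i p_i' = {theorem1:.8f}  "
          f"speed^2 = {fisher_speed_sq:.8f}")
    P = P_next
```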
The Fisher speed
While Theorem 1 allows us to express the variance in fitness in terms of the time derivatives of the probabilities $p_i$, it does so in a way that also explicitly involves the fitness functions $f_i$. We now prove a simpler formula for the variance in fitness, which equates it with the square of the "Fisher speed" of the probability distribution $p = (p_1, \ldots, p_n)$.
The space of probability distributions on the set $\{1, \ldots, n\}$ is the $(n-1)$-simplex
$$\Delta^{n-1} = \Big\{ (p_1, \ldots, p_n) \in \mathbb{R}^n : p_i \geq 0, \ \sum_{i=1}^n p_i = 1 \Big\}.$$
The Fisher metric is the Riemannian metric $g$ on the interior of the $(n-1)$-simplex such that, given a point $p$ in the interior of $\Delta^{n-1}$ and two tangent vectors $v, w$, we have
$$g(v, w) = \sum_{i=1}^n \frac{v_i w_i}{p_i}.$$
Here we are describing the tangent vectors $v, w$ as vectors in $\mathbb{R}^n$ with the property that the sum of their components is zero: this makes them tangent to the $(n-1)$-simplex. We demand that $p$ be in the interior of the simplex to avoid dividing by zero, since on the boundary of the simplex we have $p_i = 0$ for at least one choice of $i$.
If we have a time-dependent probability distribution $p(t)$ moving in the interior of the $(n-1)$-simplex as a function of time, its Fisher speed is defined by
$$\sqrt{g(\dot{p}(t), \dot{p}(t))} = \left( \sum_{i=1}^n \frac{\dot{p}_i(t)^2}{p_i(t)} \right)^{1/2}$$
if the derivative $\dot{p}(t)$ exists. This is the usual formula for the speed of a curve moving in a Riemannian manifold, specialized to the case at hand. These are all the formulas needed to prove our result. But for readers unfamiliar with the Fisher metric, a few words may provide some intuition. The factor of $1/p_i$ in the Fisher metric changes the geometry of the simplex so that it becomes round, with the geometry of a portion of a sphere in $\mathbb{R}^n$. But more relevant here is the Fisher metric's connection to relative information, a generalization of Shannon information that depends on two probability distributions rather than just one [6]. Given probability distributions $p, q \in \Delta^{n-1}$, the information of $q$ relative to $p$ is
$$I(q, p) = \sum_{i=1}^n q_i \ln\!\left( \frac{q_i}{p_i} \right).$$
This is the amount of information that has been updated if one replaces the prior distribution $p$ with the posterior $q$. So, sometimes relative information is called the "information gain". It is also called "relative entropy" or "Kullback-Leibler divergence". It has many applications to biology [5,11,12,13]. Suppose $p(t)$ is a smooth curve in the interior of the $(n-1)$-simplex. We can ask the rate at which information is being updated as time passes. Perhaps surprisingly, an easy calculation gives
$$\left. \frac{d}{dt} I(p(t), p(t_0)) \right|_{t = t_0} = 0.$$
Thus, to first order, information is not being updated at all at any time $t_0 \in \mathbb{R}$. However, another well-known calculation (see e.g. [4]) shows that
$$\left. \frac{d^2}{dt^2} I(p(t), p(t_0)) \right|_{t = t_0} = g(\dot{p}(t_0), \dot{p}(t_0)).$$
So, to second order in $t - t_0$, the square of the Fisher speed determines how much information is updated when we pass from $p(t_0)$ to $p(t)$. The generality of this result is remarkable. Formally, any autonomous system of first-order differential equations
$$\frac{d}{dt} P_i(t) = F_i(P_1(t), \ldots, P_n(t))$$
can be rewritten as Lotka-Volterra equations
$$\frac{d}{dt} P_i(t) = f_i(P_1(t), \ldots, P_n(t)) \, P_i(t)$$
simply by setting $f_i(P_1, \ldots, P_n) = F_i(P_1, \ldots, P_n)/P_i$. In general $f_i$ is undefined when $P_i = 0$, but this is not a problem if we restrict ourselves to situations where all the populations $P_i$ are positive; in these situations Theorems 1 and 2 apply.
Theorem 2. Suppose positive real-valued functions $P_i(t)$ obey the Lotka-Volterra equations for some continuous functions $f_i \colon [0,\infty)^n \to \mathbb{R}$. Then the square of the Fisher speed of the probability distribution $p(t)$ is the variance of the fitness:
$$g(\dot{p}, \dot{p}) = \mathrm{Var}(f(P)).$$
Proof. Consider the square of the Fisher speed and apply the replicator equation (6):
$$g(\dot{p}, \dot{p}) = \sum_{i=1}^n \frac{\dot{p}_i^2}{p_i} = \sum_{i=1}^n \frac{\left( f_i(P) - \overline{f}(P) \right)^2 p_i^2}{p_i} = \sum_{i=1}^n \left( f_i(P) - \overline{f}(P) \right)^2 p_i = \mathrm{Var}(f(P))$$
as desired. | 2021-07-13T01:15:58.611Z | 2021-07-12T00:00:00.000 | {
"year": 2021,
"sha1": "00a13c61378bf73ae904bfa72621164c8b736a6e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1099-4300/23/11/1436/pdf?version=1635921302",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "2a86d5c2a01b6b54343807393e22ae3b7a48c2f5",
"s2fieldsofstudy": [
"Mathematics",
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology",
"Mathematics"
]
} |
54558723 | pes2o/s2orc | v3-fos-license | Biological Sex, Estradiol and Striatal Medium Spiny Neuron Physiology: A Mini-Review
The caudate-putamen, nucleus accumbens core and shell are important striatal brain regions for premotor, limbic, habit formation, reward, and other critical cognitive functions. Striatal-relevant behaviors such as anxiety, motor coordination, locomotion, and sensitivity to reward, all change with fluctuations of the menstrual cycle in humans and the estrous cycle in rodents. These fluctuations implicate sex steroid hormones, such as 17β-estradiol, as potent neuromodulatory signals for striatal neuron activity. The medium spiny neuron (MSN), the primary neuron subtype of the striatal regions, expresses membrane estrogen receptors and exhibits sex differences both in intrinsic and synaptic electrophysiological properties. In this mini-review, we first describe sex differences in the electrophysiological properties of the MSNs in prepubertal rats. We then discuss specific examples of how the human menstrual and rat estrous cycles induce differences in striatal-relevant behaviors and neural substrate, including how female rat MSN electrophysiology is influenced by the estrous cycle. We then conclude the mini-review by discussing avenues for future investigation, including possible roles of striatal-localized membrane estrogen receptors and estradiol.
INTRODUCTION
Sex differences in brain structure and function have been described at all levels of biological analysis, from differences in neuronal gene expression to the output of the nervous system, behavior (McCarthy, 2010;Forger, 2016;Arnold, 2017;Grabowska, 2017). Sex is a compelling biological variable that must be considered from single neuron analysis all the way to clinical trials. The striatal regions, including the caudate-putamen and nucleus accumbens core and shell (Figure 1A), are sensitive to biological sex and sex steroid hormone fluctuations and signaling in both animals and humans. Although striatal sex and hormone-specific differences have long been documented, the mechanisms by which hormones and sex influence caudate-putamen and accumbens physiology remain active research areas. In this mini-review, we first describe the known sex differences in the physiology of the output neuron of the striatal brain regions, the medium spiny neuron (MSN), in prepubertal rats. We then broaden the discussion to address aspects of how the menstrual cycle in adult female humans and estrous cycle in adult female rats influences striatal-relevant behaviors, and feature select studies providing mechanistic insight. This includes recent data demonstrating that the estrous cycle modulates MSN physiology. We then end the mini-review by presenting two challenge hypotheses for future investigation, namely, the possible roles of striatallocalized membrane estrogen receptors and neuroestrogen production.
CAUDATE-PUTAMEN AND NUCLEUS ACCUMBENS CORE MSNs EXHIBIT SEX DIFFERENCES BEFORE PUBERTY
MSNs (or alternatively, spiny projection neurons) make up ~95% of striatal neurons (Kemp and Powell, 1971; Graveland and DiFiglia, 1985; Gerfen and Surmeier, 2011) and are the major efferent projection neurons. MSNs do not exhibit gross sex differences in soma size or neuron density (Meitzen et al., 2011), and the overall volume of the striatal brain regions does not robustly differ between males and females (Wong et al., 2016). MSNs do, however, exhibit functional electrophysiological properties that differ by striatal subregion and developmental period (Table 1). Before puberty, sex differences are present in both intrinsic and synaptic properties of MSNs in a striatal region-specific manner in rats. Here we define intrinsic properties as those related to single action potential properties such as threshold, multiple action potential properties such as the action potential firing rate evoked by excitatory current injection, and passive membrane properties such as input resistance. All of these properties are unified in that they help determine how a neuron responds to synaptic input, in other words, the input-output function of the individual neuron. Regarding synaptic properties, here we focus on properties that have been directly investigated in MSNs with regard to sex, such as miniature excitatory postsynaptic currents (mEPSCs), which provide insight into the strength, number, and sensitivity of glutamatergic synapses. In rat caudate-putamen, MSN excitability is increased in females compared to males, as indicated by an increased slope of evoked action potential firing to excitatory current injection, a hyperpolarized action potential threshold, and a decreased afterhyperpolarization magnitude in females compared to males. There are no differences in mEPSC properties, including frequency, amplitude, and decay (Dorris et al., 2015). Conversely, in the nucleus accumbens core, mEPSC frequency is increased in prepubertal females compared to males, and this sex difference exists both before puberty and in adults. This sex difference is organized during the postnatal critical window (P0-P1) and in females can be eliminated by postnatal 17β-estradiol (estradiol) or testosterone exposure. Estradiol is a type of estrogen, which binds to estrogen receptors. Testosterone can either bind to androgen receptors or be metabolized via the enzyme aromatase into estradiol to in turn act on estrogen receptors. Prepubertal recordings from the nucleus accumbens shell did not show any sex differences in MSN electrical properties (Willett et al., 2016); however, environmental influences such as stress engender sex differences in synapse markers in adult rodents (Brancato et al., 2017). Together, these studies illustrate the heterogeneity of sex-specific mechanisms across the subregions of the striatum (Cao et al., 2018b). Interestingly, the sex differences in MSN properties detected in prepubertal rats differ from those detected in prepubertal mouse nucleus accumbens core (Cao et al., 2018a), indicating that sex differences in the development of MSN electrophysiological properties can be species-specific or perhaps mouse strain-dependent. It is also unknown how sex differences and sex steroid sensitivity present across MSN subtypes. This question is an important avenue for future investigations, as differential sensitivity to biological sex across MSN subtypes may have important functional consequences.
THE MENSTRUAL AND ESTROUS CYCLES INFLUENCE STRIATAL-RELATED BEHAVIORS AND DISORDERS IN ADULT FEMALES
In adult female humans, the cyclical fluctuation of estradiol, progesterone, and other hormones is called the menstrual cycle and is ~28 days long. Plasma estradiol levels peak during the follicular phase, while progesterone levels peak during the luteal phase (Sherman and Korenman, 1975). In adult female rats and mice, this cycle is called the estrous cycle and likewise features repeated hormone changes, but across a ~4-5 day period (Cora et al., 2015). In rats, plasma estradiol levels rapidly peak during proestrus, after which progesterone levels peak, leading to ovulation and a resulting estrus phase. The diestrus phase, during which hormone levels are generally low, follows the estrus phase (Figure 1B).
Regarding behaviors associated with the striatal regions, changes in motor coordination and in the severity of Parkinson's symptoms, which are controlled by the caudate-putamen, vary with the menstrual cycle. The luteal phase, when estradiol and progesterone are high, is associated with greater coordination and manual skill and with less L-DOPA-induced dyskinesia (Quinn and Marsden, 1986; Hampson and Kimura, 1988; Hampson, 1990). These menstrual cycle-related behavioral changes generalize to other movement disorders, with worsening of symptoms occurring just before and during menses, when estradiol and progesterone are lowest (Castrioto et al., 2010). Additionally, changes in anxiety-related behaviors and symptoms, which are controlled in part by the nucleus accumbens, also occur across the menstrual cycle (Nillni et al., 2011). In general, the extent of documented changes in motor skills and cognitive functions across the human menstrual cycle differs with population characteristics and sampled task type (Souza et al., 2012).
DOPAMINE AND ESTRADIOL ARE PART OF THE MECHANISM UNDERLYING FEMALE CYCLE-DEPENDENT DIFFERENCES
Animal studies have provided more controlled designs and techniques to understand the mechanisms underlying these sex differences. It has long been documented that the dopamine and estrogen systems interact to influence striatal function (Yoest et al., 2018b). Here we highlight select pieces of evidence. In female monkeys, during the luteal phase, D2 receptor availability is increased in the caudate-putamen and nucleus accumbens (Czoty et al., 2009), suggesting that gonadal hormones may influence dopamine (DA) transmission and sensitivity, which can promote movement coordination. In rats, females during proestrus and estrus (comparable to the luteal phase in humans and monkeys) have higher extracellular DA concentrations than females in diestrus and ovariectomized females (Xiao and Becker, 1994). Estrous cycle-dependent changes in dopamine signaling have also been observed in mice (Calipari et al., 2017). This may be a mechanism that contributes to changes in locomotion (Becker et al., 1987) and anxiety (Marcondes et al., 2001; Sayin et al., 2014) across the estrous cycle in rodents. Gonad-intact and castrated males do not differ, indicating that gonadal hormone influences on striatal dopamine release are sex-specific (Xiao and Becker, 1994). Estradiol has been proposed as a major hormone facilitating sex differences. Specific to the caudate-putamen, estradiol promotes motor coordination (Becker et al., 1987; Schultz et al., 2009), and its enhancement of dopamine action is specific to females (Becker, 1990; Xiao and Becker, 1994; Yoest et al., 2014, 2018a). The role of dopamine in regulating MSN electrical properties suggests that MSN properties would likewise differ between males and females and across the adult female hormone cycle (Nicola et al., 2000).

FIGURE 1 | Map of the striatal subregions and female hormone cycling. (A) Schematic of a coronal section of one hemisphere of the rat brain depicting the striatal subregions, including the caudate-putamen, nucleus accumbens core, and shell (Interaural ∼10.92-10.80 mm, Bregma ∼1.92-1.80 mm). Acronyms: AC, anterior commissure; Acb, nucleus accumbens; LV, lateral ventricle. The extensive afferent and efferent circuitry of the striatal subregions is not depicted in this schematic, and we refer the reader to the following articles for a review of this topic (Russo and Nestler, 2013; Scofield et al., 2016). (B) Graphical depictions of the adult female rat estrous and human menstrual cycles. The purple line indicates progesterone levels and the green line estradiol levels. Over a span of about 4-5 days, rats exhibit a diestrus, proestrus, and estrus phase. There is also a metestrus phase between estrus and diestrus (not pictured). In rats, estradiol levels peak the morning of proestrus, as progesterone levels are rising, and behavioral estrus begins roughly when progesterone levels peak. The human cycle lasts about 28 days and exhibits a follicular and a luteal phase. In humans, estradiol peaks during the follicular phase, and progesterone peaks during the luteal phase.
CYCLICAL FEMALE HORMONE FLUCTUATIONS INDUCE SEX DIFFERENCES IN ADULT MSN ELECTRICAL PROPERTIES
Intrinsic and synaptic electrophysiological properties of MSNs of the caudate-putamen and nucleus accumbens core change with the estrous cycle (Arnauld et al., 1981; Tansey et al., 1983; Proaño et al., 2018). In the caudate-putamen, classic experiments first demonstrated that spontaneous action potential firing rates recorded in vivo increased in ovariectomized female rats exogenously exposed to estradiol compared to vehicle-exposed females and males (Arnauld et al., 1981). Later, using in vivo extracellular recording, it was found that nigrostriatal MSNs increased spontaneous action potential generation in female rats during the phases of the estrous cycle associated with high levels of estradiol, and in ovariectomized females exposed to exogenous estradiol compared to animals with low levels of estradiol (Tansey et al., 1983). Other MSN subtypes and striatal interneurons were not tested in this study. The exact electrophysiological, endocrine, and molecular mechanisms driving these changes in electrical activity in the caudate-putamen remain to be elucidated, although this is an area of active research. More detailed data are available for MSNs in the adult female rat nucleus accumbens. In the nucleus accumbens core, during diestrus, when both progesterone and estradiol are low, MSN excitatory synaptic input properties decrease in magnitude while intrinsic excitability increases (Proaño et al., 2018). Specifically, mEPSC frequency and amplitude are decreased compared to other estrous cycle phases, while properties such as action potential rheobase, action potential threshold, input resistance, and resting membrane potential change to increase cellular excitability. Conversely, during proestrus and estrus, when estradiol and progesterone increase and females are sexually receptive, excitatory synaptic input increases and intrinsic excitability decreases. mEPSC frequency and amplitude are increased compared to other estrous cycle phases, aligning with previous work examining excitatory synapse anatomy in females in these estrous cycle phases compared to males (Forlano and Woolley, 2010; Wissman et al., 2012). In contrast, cellular properties such as action potential rheobase, action potential threshold, input resistance, and resting membrane potential change to decrease cellular excitability. When these properties are analyzed in gonadectomized males and females, all sex differences disappear (Proaño et al., 2018). This study indicates that adult female hormone cycles are necessary to induce sex differences in adult MSN properties, including excitatory synapse function. Changes in excitatory synaptic properties are consistent with previous anatomical studies in adult rats (Forlano and Woolley, 2010; Staffend et al., 2011; Wissman et al., 2011, 2012; Martinez et al., 2016; Peterson et al., 2016). Whether these properties differ by MSN subtype is still unknown. Given that accumbens core MSNs exhibit divergent sex differences across development, sexual differentiation of MSNs likely occurs across multiple developmental periods. Puberty may be one such period (Ernst et al., 2006; Kuhn et al., 2010; Manitt et al., 2011; Matthews et al., 2013; Staffend et al., 2014; Kopec et al., 2018).

TABLE 1 | Notes: Gray fill indicates sex- and/or cycle-dependent differences. Inequality signs indicate relative differences between sexes. "?" indicates complex or no evidence. (a) Estrous cycle stage determined the directionality of the sex difference and the difference between female estrous stages; gonadectomy eliminates sex differences. (b) This sex difference has been shown to be organized by estradiol during the masculinization window. (c) Examination of synapse properties shows divergent evidence of sex differences in non-stressed animals, but an electrophysiological approach in adult animals has not yet been taken to our knowledge (as reviewed by Cao et al., 2018b). The adult nucleus accumbens shell exhibits variable sex differences, likely indicating interactions with other environmental influences such as stress (i.e., Brancato et al., 2017). (d) In the adult caudate-putamen, estrous cycle-induced differences in select rat medium spiny neuron action potential generation rates have been reported in vivo, but the underlying cellular electrophysiological mechanisms are not yet documented.
CHALLENGE HYPOTHESIS #1: HOW DO MEMBRANE ESTROGEN RECEPTORS INFLUENCE STRIATAL NEURON PHYSIOLOGY?
Although there is ample evidence that estradiol is an important and sex-specific hormonal regulator of striatal behavior, dopamine systems, and MSN function, the exact mechanisms by which estradiol exerts its actions require further research. An increasing body of work strongly implicates membrane estrogen receptor action. Adult female rats exclusively express membrane estrogen receptors (GPER1, membrane-associated ERα, and membrane-associated ERβ) in MSNs of the caudate-putamen and accumbens (Almey et al., 2012). However, to our knowledge a thorough analysis of estrogen receptors across development, MSN subtype, and species has not been accomplished, and nuclear estrogen receptors may be expressed at early developmental stages. Sex-specific differences in membrane estrogen receptor facilitation of changes in neuronal activity have been reported in other brain regions (Oberlander and Woolley, 2016; Krentzel et al., 2018). Importantly, sex differences in function can exist even when receptor expression is similar between males and females (Krentzel et al., 2018), indicating that the sex-specific sensitivity and functionality of estrogen receptors are more complicated than indicated by anatomical analyses alone. Membrane estrogen receptors are expressed on axon terminals, MSN somas, and dendritic spines (Almey et al., 2012, 2015, 2016), and there is evidence that estradiol has both pre- and post-synaptic mechanisms for altering dopaminergic signaling that promotes locomotion (Becker and Beer, 1986). Estrogen receptors associated in the membrane with metabotropic glutamate receptors have also been shown to facilitate locomotor sensitization to cocaine (Martinez et al., 2014), to be involved in drug addiction (Tonn Eisinger et al., 2018), and to change dendritic spine morphology in the nucleus accumbens (Peterson et al., 2015). Application of estradiol rapidly increases dopamine in the accumbens and caudate-putamen (Becker, 1990; Pasqualini et al., 1996) and decreases GABA production (Hu et al., 2006). This suggests that estradiol may act indirectly on dopamine signaling by first releasing inhibition from GABAergic signaling, and perhaps also directly upon dopamine-producing regions. In striatal MSNs, estradiol acting through ERα, ERβ, and mGluR rapidly decreases L-type calcium currents and phosphorylates the transcription factor CREB (Mermelstein et al., 1996; Grove-Strawser et al., 2010).
One proposed model for estradiol actions on striatal networks builds upon these and other findings, positing that estradiol binds to membrane estrogen receptors on MSNs to decrease neuronal excitation, thereby leading to less GABA release and a "disinhibition" of dopaminergic signaling, either through a collateral synapse upon dopamine fibers from the substantia nigra pars compacta or the VTA (Yoest et al., 2014, 2018b). Direct evidence that estradiol rapidly acts on MSNs to decrease intrinsic neuronal excitability or excitatory postsynaptic currents remains lacking, although this is an active area of research. This model also predicts that MSNs synapse upon either dopaminergic fibers from the substantia nigra pars compacta or the VTA, or perhaps upon tyrosine hydroxylase-positive striatal interneurons. Alternatively, estradiol may act on striatal interneurons, such as the cholinergic subtype, which synapses upon both dopamine terminals and MSNs (Chuhma et al., 2011). Cholinergic interneurons express membrane estrogen receptors and have been implicated in estradiol-induced shifting between hippocampal- and striatal-based learning behaviors, suggesting interactions between the estrogen, cholinergic, and dopamine systems (Euvrard et al., 1979; Davis et al., 2003; Almey et al., 2012). These models are not necessarily mutually exclusive. They also do not exclude direct actions of estradiol on MSNs independent of dopaminergic signaling, perhaps instead targeting glutamatergic systems. Consistent with this speculation, glutamatergic systems have been implicated in sex differences in psychiatric diseases such as anxiety (Wickens et al., 2018).
WHAT IS THE RELATIONSHIP BETWEEN MEMBRANE ESTROGEN RECEPTORS AND THE ESTROUS CYCLE?
Gonadal hormone fluctuations related to the estrous cycle correlate with changes in both caudate-putamen- and accumbens-dependent behaviors and with the electrical properties of MSNs. This conclusion raises questions regarding the potential relationship between the estrous cycle and rapid estradiol signaling in modulating striatal neuron activity. To date, one study has shown that after 3 days of estradiol priming to artificially mimic the estradiol-high proestrus phase, locomotion and DA release are potentiated after an acute estradiol injection and amphetamine (Becker and Rudick, 1999). This work is one piece of evidence that females may exhibit cycle-dependent rapid estradiol mechanisms. Estradiol-mediated signaling in MSNs may vary depending on estrous cycle phase, though little work has tested this hypothesis, much less uncovered the mechanistic details of how this may occur. It is unknown how cycle stage changes sensitivity to estradiol, estrogen receptor expression, and synapse functionality. However, proestrus (higher estradiol and progesterone) females exhibit more and larger dendritic spines than males (Forlano and Woolley, 2010; Wissman et al., 2011). Other estrous cycle phases were not examined. This anatomical work from Woolley and colleagues is consistent with electrophysiological findings that indicate strong sex differences during the proestrus phase (Proaño et al., 2018).
CHALLENGE HYPOTHESIS #2: DOES LOCAL PRODUCTION OF ESTRADIOL INFLUENCE CAUDATE-PUTAMEN AND NUCLEUS ACCUMBENS FUNCTION?
Another component of rapid estradiol signaling is the dynamic production of localized estradiol. Evidence of aromatase activity and fluctuations in local estradiol content has been shown across vertebrate brains (Callard et al., 1978), especially in songbirds (Saldanha et al., 2000; Remage-Healey et al., 2008; Ikeda et al., 2017). Low levels of aromatase, the enzyme that synthesizes estradiol from testosterone, have been observed in processes and cell bodies of the rat striatum (Jakab et al., 1993; Wagner and Morrell, 1996; Horvath et al., 1997), but a thorough analysis and comparison across subregions has not been performed. It is unknown how aromatase expression differs based on age, sex, cell compartment, or cell subtype; thus overly definitive statements regarding striatal aromatase should be avoided. Exactly what role aromatase plays in striatal neuron physiology is still speculative. For the caudate-putamen, there is evidence that inhibition of aromatase prevents the induction of LTP in male rat MSNs (Tozzi et al., 2015), suggesting that local production of estradiol plays a role in striatal neuronal physiology. Inhibition of aromatase in the caudate-putamen of males preceding a chemical lesion is neuroprotective (McArthur et al., 2007). To our knowledge, central administration of aromatase inhibitors has not been performed in females in studies examining striatal function.
Thus, the evidence for estradiol action in the striatal subregions is robust, but the source of that estradiol has not been directly tested in both sexes. One major question is the relationship between gonadal/peripheral vs. brain production of steroid sex hormones. The precursor to estradiol, testosterone, can increase aromatase expression and activity in the male rodent brain (Roselli et al., 1984; Roselli and Klosterman, 1998), which is compelling evidence for a relationship between the gonads and brain estradiol production in males. In male rats, long-term testosterone exposure can influence MSN dendritic spine density (Wallin-Miller et al., 2016), and the nucleus accumbens is known to regulate the rewarding aspects of testosterone exposure in males (Frye et al., 2002). It is unclear how castration and testosterone directly affect striatal aromatase activity and expression in males. For females, one study measuring estradiol content in both the brain and blood of rodents across estrous stages found that estradiol content in the striatum was highest during late proestrus and far exceeded blood concentrations (Morissette et al., 1992). However, at this point there remains a lack of corroborating evidence, especially when considered in light of the lack of differences in aromatase activity detected in other rat brain regions (Roselli et al., 1984). Continued research into how hormonal state and sex interact with possible aromatase activity is essential to grasp how steroid signaling modulates striatal neuron function.
AUTHOR CONTRIBUTIONS
AK wrote the initial manuscript draft. AK and JM revised and approved the manuscript.
FUNDING
We acknowledge NIH MH109471 to JM and P30ES025128 (Center for Human Health and the Environment).
"year": 2018,
"sha1": "9969e517c9669fb12ade8949060397784469f698",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fncel.2018.00492/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9969e517c9669fb12ade8949060397784469f698",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Diffusion Tensor Imaging Tractography Reveals Disrupted White Matter Structural Connectivity Network in Healthy Adults with Insomnia Symptoms
Neuroimaging studies have revealed that insomnia is characterized by aberrant neuronal connectivity in specific brain regions, but the topological disruptions in the white matter (WM) structural connectivity networks remain largely unknown in insomnia. The current study uses diffusion tensor imaging (DTI) tractography to construct the WM structural networks and graph theory analysis to detect alterations of the brain structural networks. The study participants comprised 30 healthy subjects with insomnia symptoms (IS) and 62 healthy subjects without IS. Both groups showed small-world properties in their WM structural connectivity networks. By contrast, increased local efficiency and decreased global efficiency were identified in the IS group, indicating an insomnia-related shift in topology toward regular networks. In addition, the IS group exhibited disrupted nodal topological characteristics in regions involving the fronto-limbic and default-mode systems. To our knowledge, this is the first study to explore the topological organization of WM structural network connectivity in insomnia. More importantly, dysfunctions of large-scale brain systems, including the fronto-limbic pathways, the salience network, and the default-mode network, were identified in insomnia, which provides new insights into the insomnia connectome. Topology-based brain network analysis could thus be a potential biomarker for IS.
INTRODUCTION
Insomnia is one of the most prevalent sleep disorders and is distinguished by difficulties in falling or maintaining sleep and/or early morning awakening (Morin and Benca, 2012; Cheung et al., 2013; Morin et al., 2015; Riedner et al., 2016). Insomnia is associated with impaired daytime functioning and affects approximately one-third of the general population (Ohayon, 2002; Moore, 2012; Morin and Benca, 2012; Kronholm et al., 2015). In addition, individuals with insomnia show an increased risk for developing other psychiatric disorders. For example, nearly 40% of insomnia patients have a comorbid psychiatric disorder, and almost all depression patients present a high risk for insomnia (Taylor et al., 2005; Kaneita et al., 2006; Ohayon and Hong, 2006; Benca and Peterson, 2008; Wulff et al., 2010; Mayer et al., 2011). More importantly, insomnia can lead to fatigue, poor academic performance, working disability, drug and alcohol abuse, suicidal thoughts, and reduced quality of life (Short et al., 2013; Kronholm et al., 2015). Consequently, insomnia can negatively impact personal and public health, incur direct and indirect healthcare costs, and have a huge socio-economic impact on society (Kucharczyk et al., 2012; Moore, 2012; Lian et al., 2015). Although well-documented neuroimaging studies have investigated insomnia, the neurobiological mechanisms underlying this psychiatric disorder remain poorly understood.
Growing functional and structural neuroimaging evidence shows that widespread brain regions are implicated in the pathobiology of insomnia, including the amygdala, hippocampus, anterior cingulate gyrus, caudate nucleus, insula, and the frontal areas (Drummond et al., 2004; Nofzinger et al., 2004; Riemann et al., 2007, 2015; Altena et al., 2008, 2010; Spiegelhalder et al., 2013, 2015; Winkelman et al., 2013; Baglioni et al., 2014; Joo et al., 2014; Stoffers et al., 2014; Liu C.-H. et al., 2016; Lu et al., 2017). In particular, the functional and structural networks are strongly correlated with each other: the structural connectivity works as a physical substrate of the functional connectivity, and the functional connectivity can in turn affect the structural connectivity through brain plasticity (van den Heuvel et al., 2008; Greicius et al., 2009; Rubinov et al., 2009; Long et al., 2015). In addition, it has been widely recognized that functional interactions among different brain regions are effectively constrained by large-scale structural connections (Hagmann et al., 2008; Honey et al., 2009). However, network-level structural deficits remain largely unknown, especially the topological alterations associated with insomnia. It is therefore essential to examine the structural substrate of interactions among distributed brain regions to understand the functional brain activation patterns in insomnia.
Diffusion tensor imaging (DTI) tractography is a robust, non-invasive method that can be utilized to reconstruct the white matter (WM) tracts of the human brain (Basser et al., 2000; Guo W.-B. et al., 2012). When combined with a graph theoretical approach, this advanced neuroimaging technique allows us to characterize the structural connection patterns of the human brain in vivo. Graph theoretical analysis can delineate the whole brain as a large-scale network consisting of nodes (brain areas) and edges (connectivity between pairs of areas; Bullmore and Sporns, 2009). Both of these methods have been increasingly used for the reconstruction of brain WM structural connectivity networks in psychiatric disorders such as post-traumatic stress disorder (Long et al., 2013), depression (Long et al., 2015), Alzheimer's disease (Lo et al., 2010), and multiple sclerosis (Shu et al., 2011). So far, only two studies have explored the integrity of WM in insomnia. One study, comparing fractional anisotropy (FA) between 24 primary insomnia (PI) patients and 35 healthy controls, demonstrated that insomnia is associated with reduced integrity of WM tracts in the anterior internal capsule. Another study, performing between-group comparisons of WM tracts between 23 PI patients and 30 healthy controls, suggested that insomnia patients had decreased integrity of WM tracts predominantly in the right anterior and posterior limbs of the internal capsule, the right anterior and superior corona radiata, and the right thalamus (Li S. et al., 2016). As such, this pilot work focuses on revealing topological abnormalities in WM structural connectivity networks associated with insomnia.
In this study, we hypothesized that compared to the healthy subjects without insomnia symptoms (NIS), the healthy subjects with IS would exhibit an altered structural topology and disrupted nodal network properties in brain areas mainly involved in the fronto-limbic system, salience network, and default-mode network. DTI data from the IS and NIS groups were collected first. Then, we constructed whole-brain WM structural connectivity networks with 90 nodes represented by cerebral brain areas defined by the automated anatomical labeling (AAL) template, with the corresponding edges defined as the mean FA obtained using DTI tractography. In addition, we applied graph theoretical analysis to derive the small-world characteristics of these WM networks, which can be used to identify altered topological properties of brain networks in insomnia. More importantly, we examined the associations between clinical data and the altered network topologies.
Participants
The study participants comprised 92 right-handed healthy subjects (female/male: 51/41, age: 20-60 years). All participants were first screened with the Non-Patient Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; SCID) by two independent experienced psychiatrists (LRT and CLT), as described in our previous study (Lu et al., 2017). None of the subjects had neurological or psychiatric disorders, such as depression, anxiety disorders, epilepsy, schizophrenia, mental retardation, or chronic pain. In addition, none of the subjects had taken any psychotropic medication for at least 2 months prior to the MRI scans.
All clinical tests were approved by the Medical Ethics Committee of Beijing Anding Hospital, Capital Medical University, the Imaging Center for Brain Research of Beijing Normal University, and the Biomedical Ethics Board of the Faculty of Health Sciences at the University of Macau (Macao SAR, China) in accordance with the approved guidelines. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The study groups included 30 healthy subjects with IS (age: 38.00 ± 11.85 years) and 62 healthy subjects without IS (age: 37.47 ± 11.95 years). We found no significant differences in gender, age, or educational level between the two groups. Table 1 provides the demographic characteristics of the participants.

TABLE 1 | Notes: Data are presented as mean ± SD. Adjusted HAMD score means HAMD score after omission of the sleep questions; adjusted HAMA score means HAMA score after omission of the sleep questions. M, male; F, female; R, right; L, left; IS, healthy subjects with insomnia symptoms; NIS, healthy subjects without insomnia symptoms; SD, standard deviation; HAMD, Hamilton Depression Rating Scale; HAMA, Hamilton Anxiety Rating Scale. (a) The p value was obtained by two-sample t-tests. (b) The p value for the gender distribution in the two groups was obtained by a chi-square test.
Insomnia Symptoms Measurements
The 17-item Hamilton Depression Rating Scale (HAMD-17) was used to measure the severity of participants' IS (Hamilton, 1967), based on the sum of the three items of the sleep subscale of the HAMD-17. A total score greater than or equal to one indicated IS. The severity of depression and anxiety in all subjects was also assessed using the HAMD-17 and the Hamilton Anxiety Rating Scale (HAMA). In the current study, the adjusted HAMD and adjusted HAMA scores were generated by omitting the insomnia-related items to prevent a potential influence of IS from these scales on our findings (Lu et al., 2017). Clinical data for the two groups are given in Table 1.
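A minimal Python sketch of this scoring scheme, assuming the three HAMD-17 sleep items are items 4-6 (early, middle, and late insomnia); the 0-based indices and the toy response vector are illustrative assumptions, not taken from the study data:

```python
# Sketch of the insomnia-symptom scoring described above. The sleep
# subscale is taken here as HAMD-17 items 4-6 (early, middle, late
# insomnia); treat the 0-based indices and toy scores as illustrative.

SLEEP_ITEMS = (3, 4, 5)  # 0-based positions of the three sleep items

def score_participant(hamd17_items):
    """Return (insomnia_score, has_IS, adjusted_hamd) for one subject."""
    assert len(hamd17_items) == 17
    insomnia_score = sum(hamd17_items[i] for i in SLEEP_ITEMS)
    adjusted_hamd = sum(hamd17_items) - insomnia_score  # omit sleep items
    return insomnia_score, insomnia_score >= 1, adjusted_hamd

example = [0] * 17
example[3], example[4] = 1, 1  # scores 1 on two sleep items
print(score_participant(example))  # -> (2, True, 0)
```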
Structural Network Construction
The structural networks were constructed for all participants. The network nodes were defined as the brain areas delineated by the AAL template (Tzourio-Mazoyer et al., 2002), whereas the network edges were defined as the fiber tracts linking these nodes.
Definition of Network Nodes
The procedure to define network nodes followed a previous study (Gong et al., 2009). The regions of interest (ROIs) were defined in diffusion native space. In brief, each subject's T1-weighted image was first co-registered to the non-diffusion-weighted (b = 0 s/mm²) images in the diffusion native space through a linear transformation. Then the co-registered T1 images were non-linearly transformed to the ICBM-152 T1 template in the Montreal Neurological Institute (MNI) space. Twelve degrees of freedom combined with nonlinear warps were applied in this step. The inverse transformation parameters were used to warp the AAL areas from MNI space to DTI native space using a nearest-neighbor interpolation method based on the statistical parametric mapping (SPM8) package. Using this procedure, 90 cortical and subcortical brain areas were generated (45 for each hemisphere; see Supplementary Table S1).
WM Tractography
The following steps were carried out to reconstruct the whole-brain WM tracts. Distortions caused by eddy currents were corrected with an affine alignment of the diffusion-weighted images to the non-diffusion-weighted images using the FMRIB Diffusion Toolbox (FSL)¹. Subsequently, the diffusion tensor matrix was generated on a voxel-by-voxel basis and diagonalized to obtain three eigenvalues and the associated eigenvectors. The diffusion tensor models were calculated using a linear least-squares fitting algorithm at the voxel level using the Diffusion Toolkit (Wang et al., 2007). The DTI fiber tracking procedures were carried out in diffusion native space based on the Fiber Assignment by Continuous Tracking (FACT) method through the Diffusion Toolkit (Wang et al., 2007). All path tracing in the dataset terminated if either the FA of a voxel did not exceed 0.2 or the tracking angle was greater than 45 degrees (Shu et al., 2011).
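The two FACT stopping rules above (FA ≤ 0.2 or turning angle > 45°) can be illustrated with a small Python check; this is a didactic sketch of the stopping criteria only, not the Diffusion Toolkit implementation:

```python
import numpy as np

FA_MIN, ANGLE_MAX_DEG = 0.2, 45.0

def keep_tracking(fa_here, prev_dir, next_dir):
    """True if a streamline may take its next step under the FACT rules."""
    if fa_here <= FA_MIN:          # stop in low-anisotropy voxels
        return False
    cos_a = np.dot(prev_dir, next_dir) / (
        np.linalg.norm(prev_dir) * np.linalg.norm(next_dir))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return angle <= ANGLE_MAX_DEG  # stop on sharp turns

print(keep_tracking(0.35, np.array([1., 0., 0.]), np.array([1., .5, 0.])))  # True (~27 deg)
print(keep_tracking(0.35, np.array([1., 0., 0.]), np.array([0., 1., 0.])))  # False (90 deg)
```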
Definition of Network Edges
To determine the brain network edges in native diffusion space, ROI_i and ROI_j were considered to be linked by an edge if at least one fiber was present between them (Gong et al., 2009). The connections were weighted by the mean FA values of the fibers connecting the two ROIs, to depict the connectivity strength between ROI_i and ROI_j.
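A minimal sketch of this edge definition, assuming a hypothetical `fibers` list of (ROI_a, ROI_b, mean FA along the fiber) tuples produced by the tractography step:

```python
import numpy as np

N_ROI = 90

def build_fa_network(fibers, n_roi=N_ROI):
    """W[i, j] = mean FA of fibers linking ROI_i and ROI_j (0 if none)."""
    fa_sum = np.zeros((n_roi, n_roi))
    count = np.zeros((n_roi, n_roi))
    for a, b, fa in fibers:                  # each fiber contributes once
        fa_sum[a, b] += fa; fa_sum[b, a] += fa
        count[a, b] += 1;   count[b, a] += 1
    return np.divide(fa_sum, count,
                     out=np.zeros_like(fa_sum), where=count > 0)

W = build_fa_network([(0, 5, 0.41), (0, 5, 0.37), (2, 7, 0.52)])
print(W[0, 5])  # 0.39 -- mean FA of the two fibers linking ROI 0 and ROI 5
```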
Threshold Selection
Each connectivity matrix was thresholded into a set of undirected binary networks by applying a sparsity threshold S, computed as the ratio of the number of existing edges to the total number of possible edges. The resulting normalized networks then had the same number of nodes and edges, which made it possible to explore between-group differences in network topological organization (Bullmore and Bassett, 2011). Each connectivity matrix was repeatedly thresholded over the range 0.1 ≤ S ≤ 0.2 in steps of 0.01. The minimum threshold (S = 0.1) was determined according to the criterion that the average degree over all network nodes at each threshold should be larger than 2log(N), where N = 90 denotes the total number of nodes. All individual networks reached 90% full connectivity at the minimum threshold. The maximum threshold (S = 0.2) was computed by obtaining each individual network's topological cost without thresholding and then selecting the minimum sparsity threshold (Long et al., 2013).
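A sketch of the sparsity-thresholding step, assuming a symmetric FA-weighted matrix `W`; ties at the cutoff weight may keep a few extra edges, which is acceptable for illustration:

```python
import numpy as np

def threshold_by_sparsity(W, S):
    """Binarize W, keeping the strongest edges at sparsity S."""
    n = W.shape[0]
    n_keep = int(round(S * n * (n - 1) / 2))   # number of edges to keep
    iu = np.triu_indices(n, k=1)               # upper-triangle edge list
    cutoff = np.sort(W[iu])[::-1][n_keep - 1]  # weight of weakest kept edge
    A = (W >= cutoff).astype(int)
    np.fill_diagonal(A, 0)
    return A  # undirected binary adjacency matrix

sparsities = np.arange(0.10, 0.201, 0.01)      # S = 0.10, 0.11, ..., 0.20
# binary_nets = [threshold_by_sparsity(W, s) for s in sparsities]
```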
Small-World Properties
To measure the small-world properties of the constructed structural brain networks, we first generated 100 random networks using a Markov-chain algorithm, each with the same number of nodes and edges and the same degree distribution as the real brain network (Liao et al., 2010). A real brain network can be regarded as a small-world network only if it satisfies both γ > 1 and λ ≈ 1 (Watts and Strogatz, 1998), or equivalently σ = γ/λ > 1, which indicates that a small-world network possesses a higher clustering coefficient and a similar path length compared with a random network (Humphries et al., 2006; Liu et al., 2008). Specifically, we scaled the characteristic shortest path length L_p and the clustering coefficient C_p of the constructed structural networks by the averaged L_random and C_random of all 100 random networks (i.e., the normalized characteristic path length λ = L_p / L_random and the normalized clustering coefficient γ = C_p / C_random), where L_random and C_random denote the averaged characteristic shortest path length and the averaged clustering coefficient of the 100 generated random networks, respectively.
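A sketch of this computation using networkx, assuming a connected undirected graph; here `connected_double_edge_swap` stands in for the degree-preserving Markov-chain rewiring cited above:

```python
import networkx as nx
import numpy as np

def small_worldness(G, n_random=100):
    """Return (gamma, lambda, sigma) for a connected undirected graph."""
    Cp = nx.average_clustering(G)
    Lp = nx.average_shortest_path_length(G)
    C_rand, L_rand = [], []
    for _ in range(n_random):
        R = G.copy()
        # degree-preserving rewiring that keeps the graph connected
        nx.connected_double_edge_swap(R, nswap=10 * R.number_of_edges())
        C_rand.append(nx.average_clustering(R))
        L_rand.append(nx.average_shortest_path_length(R))
    gamma = Cp / np.mean(C_rand)
    lam = Lp / np.mean(L_rand)
    return gamma, lam, gamma / lam  # small-world: gamma > 1, lam ~ 1, sigma > 1

# A Watts-Strogatz graph should register as small-world:
G = nx.connected_watts_strogatz_graph(n=90, k=10, p=0.1, seed=1)
print(small_worldness(G, n_random=20))
```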
Network Metrics
To evaluate the nodal properties of cortical and subcortical brain regions in the structural networks, three key measurements were computed: the nodal degree Deg_i, the nodal efficiency E_i, and the nodal betweenness BC_i. Additionally, the global efficiency E_glo and local efficiency E_loc were used to characterize network efficiency (Achard and Bullmore, 2007). We first computed the six global network properties C_p, L_p, λ, γ, E_glo, and E_loc, and the three regional nodal parameters Deg_i, E_i, and BC_i. Furthermore, we computed the area under the curve (AUC) of each parameter, which provides an integrated index of the topological organization of brain networks (Zhang Z. et al., 2011; Liu F. et al., 2016). Detailed information about the network properties is provided in the Supplementary Materials.
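A sketch of these nodal and global measures for a binary undirected graph, with the AUC computed by the trapezoid rule over the sparsity range; nodal efficiency is taken here as the mean inverse shortest-path length from each node, per the usual definition:

```python
import networkx as nx
import numpy as np

def nodal_metrics(G):
    """Degree, betweenness, and nodal efficiency for each node."""
    n = G.number_of_nodes()
    deg = dict(G.degree())
    bc = nx.betweenness_centrality(G)
    e_nodal = {}
    for i in G:  # mean inverse shortest-path length from node i
        dists = nx.single_source_shortest_path_length(G, i)
        e_nodal[i] = sum(1.0 / d for j, d in dists.items() if j != i) / (n - 1)
    return deg, bc, e_nodal

def global_metrics(G):
    return nx.global_efficiency(G), nx.local_efficiency(G)

def metric_auc(values, sparsities):
    """Trapezoid-rule AUC of a metric across the sparsity range."""
    v, s = np.asarray(values), np.asarray(sparsities)
    return float(np.sum((v[1:] + v[:-1]) / 2.0 * np.diff(s)))
```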
Differences in the Network Properties
To assess differences in the global network topological properties between the two groups, nonparametric permutation tests (5000 iterations, p < 0.05, uncorrected) were carried out for each network topology metric over the threshold range 0.1 ≤ S ≤ 0.2 (in steps of 0.01) and for the AUCs of each metric (Bullmore et al., 1999). In addition, nonparametric permutation tests (5000 iterations, p < 0.05, uncorrected) were carried out on the AUCs of each regional nodal property to determine whether there were significant distinctions between groups. Before performing the permutation tests, a multiple regression analysis was conducted with the AUC of each network metric as the dependent variable and gender, age, educational level, adjusted HAMA score, and adjusted HAMD score as independent variables, in order to regress out the influence of depression and anxiety. A value of p < 0.05 (uncorrected) was considered significant for the multiple regression analysis.
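A sketch of this residualization-plus-permutation procedure, assuming per-subject AUC vectors for the two groups; covariate effects are removed first, then group labels are shuffled 5000 times to build the null distribution:

```python
import numpy as np

def regress_out(y, covariates):
    """Residuals of y after removing covariate effects (with intercept)."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def permutation_test(auc_is, auc_nis, n_perm=5000, seed=0):
    """Two-tailed permutation p-value for the group difference in means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([auc_is, auc_nis])
    n1 = len(auc_is)
    observed = auc_is.mean() - auc_nis.mean()
    null = np.empty(n_perm)
    for k in range(n_perm):
        perm = rng.permutation(pooled)
        null[k] = perm[:n1].mean() - perm[n1:].mean()
    return observed, float(np.mean(np.abs(null) >= abs(observed)))
```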
Correlations between the Network Properties and Insomnia Scores
We also examined the relationships between the global and nodal network metrics and the insomnia scores in the IS group. Pearson's correlation analysis was performed with age, gender, educational level, adjusted HAMA score, and adjusted HAMD score as confounding covariates, the AUC of each network property as the independent variable, and the insomnia scores in the IS group as the dependent variable. An exploratory threshold of one divided by the number of nodes (1/90 ≈ 0.011) was adopted as the significance threshold for false-positive correction in all analyses.
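A sketch of this correlation analysis as a partial correlation: both the AUC and the insomnia scores are residualized against the covariates before computing Pearson's r, with 1/90 as the significance cutoff (the array names are placeholders):

```python
import numpy as np
from scipy import stats

def partial_pearson(auc, insomnia, covariates):
    """Pearson r between auc and insomnia after residualizing covariates."""
    X = np.column_stack([np.ones(len(auc)), covariates])
    resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return stats.pearsonr(resid(auc), resid(insomnia))

# r, p = partial_pearson(auc_vec, insomnia_scores,
#                        np.column_stack([age, sex, edu, hama, hamd]))
# significant = p < 1 / 90   # the exploratory false-positive cutoff
```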
Demographic Data and Clinical Variables
Demographic information and clinical variables for both the IS and NIS groups are provided in Table 1. We found no significant differences between the two groups regarding age (t(91) = 0.201, p = 0.841), gender (χ²(1) = 0.532, p = 0.466), or educational level (t(91) = −1.889, p = 0.065). However, both the original HAMD scores and the adjusted HAMD scores exhibited significant differences between the IS and NIS groups (p < 0.001, Table 1). The original HAMA scores and the adjusted HAMA scores also differed significantly between groups (p < 0.001, Table 1). In addition, the original and adjusted HAMD and HAMA scores in the IS group were higher than those in the NIS group.

FIGURE 1 | Group comparison of global network topological properties (C_p, L_p, sigma, gamma, lambda, E_glo, and E_loc) between the IS and NIS groups (5000 permutations, p < 0.05, uncorrected). The small-worldness suggests a small-world topology for the brain networks of both the IS and NIS groups. The error bar represents the standard deviation (SD). C_p, clustering coefficient; L_p, characteristic path length; E_glo, global efficiency; E_loc, local efficiency; IS, healthy participants with insomnia symptoms; NIS, healthy participants without insomnia symptoms.
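These group comparisons correspond to standard two-sample t-tests and a chi-square test; a sketch with toy numbers (not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age_is = rng.normal(38.0, 11.9, 30)   # toy IS ages
age_nis = rng.normal(37.5, 12.0, 62)  # toy NIS ages
t, p_age = stats.ttest_ind(age_is, age_nis)

contingency = np.array([[17, 13],     # IS:  female, male (toy counts)
                        [34, 28]])    # NIS: female, male (toy counts)
chi2, p_gender, dof, _ = stats.chi2_contingency(contingency)
print(f"age: t={t:.2f}, p={p_age:.3f}; gender: chi2={chi2:.2f}, p={p_gender:.3f}")
```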
Group Differences in Global Network Properties
Statistical analysis was performed to detect differences in the global organization of brain structural networks between the IS and NIS groups. Both groups showed prominent small-world properties for all threshold values from 0.1 to 0.2 (Figure 1), suggesting that the small-world architecture of the human brain is robust to brain aberrations or disorders (Achard et al., 2006). Importantly, the IS group exhibited increased local efficiency and decreased global efficiency in the anatomical brain networks compared with the NIS group (Supplementary Table S2), indicating an insomnia-related shift in topology toward regular networks. However, no statistically significant between-group differences were revealed for the measures of global properties of the structural networks (Figure 1 and Supplementary Figure S1, Supplementary Table S2).

Table 2 provides the results of statistical comparisons of the nodal properties (nodal betweenness centrality, nodal degree, and nodal efficiency) between the IS and NIS groups (p < 0.05, uncorrected). In comparison with the NIS group, the IS group showed significantly stronger nodal betweenness centrality in two brain regions (the right inferior occipital gyrus (IOG.R) and the right temporal pole: middle temporal gyrus (TPOsup.R)), significantly larger nodal degree in one region (the IOG.R), and significantly higher nodal efficiency in two regions (the left anterior cingulate gyrus (ACG.L) and the left medial superior frontal gyrus (SFGmed.L); Figure 2 and Table 2). In addition, the subjects in the IS group exhibited decreased nodal efficiency in the orbital part of the left middle frontal gyrus (ORBmid.L; Figure 2 and Table 2).
Insomnia Was Associated with Nodal Structural Connectivity Topology of Areas Involved in Salience and Default-Mode Networks
Multiple linear regression analyses revealed no significant correlations between the global network topology and insomnia scores in the IS group. However, the nodal efficiency of the left insula (INS.L) of the salience network showed a negative correlation with insomnia scores (p < 0.011, false-positive correction; Figure 3 and Table 3). In addition, the nodal betweenness centrality and the nodal degree of the right postcentral gyrus (PoCG.R) exhibited significant negative correlations with insomnia scores (p < 0.011, false-positive correction; Table 3). Furthermore, the nodal betweenness of the right precuneus (PCUN.R) in the default-mode network also showed significant negative correlations with insomnia scores (p < 0.05, uncorrected; Figure 3 and Table 3). Importantly, the nodal betweenness centrality of the left Heschl gyrus (HES.L; Table 3) and the nodal degree of the right insula (INS.R; Figure 3 and Table 3) showed significant positive relationships with insomnia scores (p < 0.011, false-positive correction). In particular, nodal network properties of several other brain regions were also significantly related to insomnia scores, including the right middle temporal gyrus (MTG.R), the IOG.R, the left superior parietal gyrus (SPG.L), and the right Heschl gyrus (HES.R; p < 0.05, uncorrected; Table 3).

FIGURE 2 | Brain areas with altered nodal betweenness centrality, nodal degree, and nodal efficiency in the IS group. Group comparisons were based on permutation tests (5000 permutations, p < 0.05, uncorrected, controlling for age, gender, educational level, adjusted HAMA score, and adjusted HAMD score). Colored brain areas display the significantly aberrant nodal network properties in the IS group. Red and blue represent significantly increased and decreased nodal topology, respectively, in the IS group compared with the NIS group. More detailed information is shown in Table 2.
DISCUSSION
This study examined the topological organization of WM networks in insomnia using DTI tractography and graph theory analysis. We discovered that: (1) both groups exhibited optimized small-world organization of their WM structural networks; (2) the IS group showed lower global efficiency and higher local efficiency than the NIS group, illustrating an insomnia-related shift of the topology toward regular networks; (3) the IS group manifested altered nodal brain structural network properties (nodal betweenness centrality, nodal degree, and nodal efficiency) in fronto-limbic pathways including the SFGmed.L, the ORBmid.L, and the ACG.L; and (4) the salience and default-mode networks showed correlations with insomnia scores: the insomnia scores were negatively associated with the nodal efficiency of the INS.L of the salience network and with the nodal betweenness centrality of the PCUN.R of the default-mode network. These findings reveal large-scale topological substrates of the structural network, which could provide novel tools for a better understanding of the neural circuitry underlying insomnia. Our results were not explained by gender, age, or educational level, which were controlled for during the group comparison analysis. The current results also removed the influence of adjusted HAMD scores and adjusted HAMA scores, suggesting that insomnia, rather than depression or anxiety, is associated with an altered topology of the fronto-limbic, salience network, and default-mode network structural connectivity.
Small-worldness is an important topological property that reflects two fundamental organizational principles: functional segregation and functional integration. Functional segregation is associated with specialized processing within densely interconnected brain regions, and functional integration characterizes the ability of information to be communicated between distributed brain areas (Rubinov and Sporns, 2010; Bullmore and Bassett, 2011). Interestingly, we demonstrated that both the healthy participants with IS and those without IS showed economical small-world topology in their large-scale brain WM structural connectivity networks.
However, despite the common small-world topology, the IS and NIS groups exhibited significant differences in nodal topological characteristics. Specifically, increased nodal properties in the IS group were identified mainly in the frontal and temporal lobes (e.g., the SFGmed.L and the TPOsup.R) compared with the NIS group (Table 2). In addition, the nodal network properties of the MTG.R in the IS group also showed significant correlations with insomnia scores. The frontal regions are known to play important roles in memory, executive functions, and emotion processing (Stuss and Alexander, 2000; Baddeley, 2003). A whole-brain voxel-based morphometry (VBM) study of 24 insomnia patients revealed reduced volume in the left orbitofrontal cortex (Altena et al., 2010). Drummond et al. (2013) demonstrated that individuals with insomnia exhibited decreased activation in the frontal regions when performing working memory tasks. In addition, Li C. et al. (2016) suggested that insomnia patients had decreased amplitude of low-frequency fluctuation (ALFF) values in the left orbitofrontal cortex and right middle frontal gyrus during the resting state.

TABLE 2 | Notes: Group comparisons: permutation tests (5000 permutations, p < 0.05, controlling for age, gender, educational level, adjusted HAMA score, and adjusted HAMD score). Data are reported as mean ± SD for the AUC of the nodal network properties over the range 0.1 ≤ S ≤ 0.2 with an interval of 0.01. IOG, inferior occipital gyrus; TPOsup, temporal pole: middle temporal gyrus; ORBmid, middle frontal gyrus, orbital part; SFGmed, superior frontal gyrus, medial; ACG, anterior cingulate gyrus; IS, healthy participants with insomnia symptoms; NIS, healthy participants without insomnia symptoms; L, left; BC, nodal betweenness centrality; Deg, nodal degree; E_nodal, nodal efficiency. *Reported results are significant at p < 1/90 based on false-positive correction for multiple comparisons.

FIGURE 3 | Pearson correlations between the AUC of the nodal network properties and insomnia scores in healthy participants with insomnia symptoms (p < 0.05, uncorrected, controlling for age, gender, educational level, adjusted HAMA score, and adjusted HAMD score). The AUC of each nodal topology metric was computed over the range 0.1 ≤ S ≤ 0.2 with an interval of 0.01. Red indicates a positive correlation and blue a negative correlation. For more detailed information see Table 3. L, left; R, right; AUC, area under the curve; BC, nodal betweenness centrality; E_nodal, nodal efficiency; INS, insula; PCUN, precuneus.
Another ALFF study showed significantly decreased amplitudes in the prefrontal cortex and default-mode network sub-regions (Zhou et al., 2016). The temporal lobe structures have been recognized as responsible for disturbed sleep or dyssomnia (Van Sweden, 1996). A recent study highlighted that individuals with poor sleep quality showed increased rates of atrophy within the frontal, temporal, and parietal regions (Sexton et al., 2014). Another study showed decreased local coherence in the right temporal, parietal, and frontal lobe regions in obstructive sleep apnea (OSA) patients (Santarnecchi et al., 2013). More importantly, a DTI study observed widespread WM integrity alterations, including in axons linking brain structures within the limbic system and the frontal and temporal cortices, in OSA (Macey et al., 2008). Furthermore, studies based on VBM approaches demonstrated that gray matter concentration was significantly decreased in both cortical and subcortical brain regions, including the fronto-parietal cortices, temporal lobe, and anterior cingulate cortex, in OSA patients compared to healthy volunteers (Joo et al., 2010; Torelli et al., 2011). However, these discrepancies in frontal and temporal lobe findings could be caused by the different subjects and imaging methods used (e.g., fMRI, EEG, structural MRI, and DTI).
The present study is the first to use DTI tractography to inspect small-world changes in the WM network in insomnia. Our findings indicate that alterations of WM structural connectivity in frontal and temporal regions could influence information communication and functional integration in insomnia. In addition, increased nodal properties in the IS group were found in a default-mode network region, the ACG.L. The ACG is an important part of the limbic system, which is implicated in regulating cognitive and emotional processing (Bush et al., 2000). The ventral ACG is also part of the default-mode network (Margulies et al., 2007). The anterior cingulate areas have extensive connections with the INS, prefrontal cortex, amygdala, hypothalamus, and brainstem (Margulies et al., 2007; Cersosimo and Benarroch, 2013).
Recently, increasing evidence obtained using positron emission tomography and fludeoxyglucose-based glucose metabolism measurements has shown that the ACG plays an essential role in the regulation of normal sleep, including in the transition between sleep and wake, during sleep deprivation, and across sleep stages in humans (Braun et al., 1997; Nofzinger et al., 1997, 2002; Thomas et al., 2000). In addition, using VBM, Winkelman et al. (2013) found increased rostral ACG volume in PI patients compared to good-sleeper controls. Using graph theoretical analysis, our findings indicate WM alterations of the structural connections in the ACG. The increased nodal efficiency of the ACG in insomnia may reflect a compensatory response to repetitive sleep disturbance. The INS is thought to be a key hub of the salience network (Kelly et al., 2012; Tahmasian et al., 2016) and is responsible for salience detection, decision making, emotion judgment, attention modulation, motor/sensory processes, and cognition regulation (Menon and Uddin, 2010; Cauda et al., 2012; Uddin, 2015). It can be divided into two core parts. One is the anterior insula, which is linked with the frontal and parietal cortices, the ACG, and limbic regions, and is mainly responsible for salience detection and other emotional processes. The other is the posterior insula, which is associated with the sensorimotor, temporal, premotor, and posterior cingulate regions and plays a critical role in perception processing, emotion regulation, interoception, and sensorimotor integration (Cauda et al., 2011, 2012). Huang et al. (2012) reported decreased functional connections mainly between the INS and the amygdala, as well as between the thalamus and striatum, in PI.
Our study also showed that decreased nodal efficiency of the left INS was associated with increased insomnia scores. However, Chen et al. (2014) showed increased activation of the anterior INS with the salience network in female insomnia patients. Recently, Li et al. (2014) found that the PI group exhibited strong connectivity between the right INS and the bilateral superior parietal lobe. In the current study, we discovered that an increased nodal degree of the right INS was related to increased insomnia scores. These discrepant results may be due to different sample sizes of insomnia patients, gender differences, the potential confounding variables controlled for in different studies, or methodological differences. Our findings regarding the different changes in the left and right INS may provide new evidence that they play different roles in information processing in insomnia. We suggest that the INS could be an important neural marker for the hyperarousal pathophysiology underlying insomnia. Taken together, our findings suggest that IS may disrupt the role of the INS in maintaining the functions of alertness and cognitive processing.
Some limitations need to be considered. First, we did not use the Pittsburgh Sleep Quality Index or the Duke Structured Interview to measure IS. We instead applied the three-item sleep subscale of the HAMD-17, since it is well associated with sleep diaries (Manber et al., 2005). Second, the false-positive correction (1/number of regions) was used in this study, which is not as conservative as the false discovery rate (FDR) correction. Third, we divided the whole brain into 90 sub-regions based on the AAL atlas to construct the large-scale brain structural network. However, previous studies suggest that different parcellation strategies may result in distinct network topological properties (Fornito et al., 2010; Sanabria-Diaz et al., 2010). Therefore, it is necessary to apply more precise parcellation strategies to characterize brain network topology alterations in insomnia. Fourth, the voxel size of the DTI data was not isotropic in the present study, which may cause underestimation of FA values in brain regions with crossing fibers (Oouchi et al., 2007) and may thus influence the structural connectivity network. Finally, future studies should employ a high b-value diffusion-weighted acquisition sequence and streamline tractography to estimate structural connectivity and model WM architecture.
CONCLUSION
We applied DTI tractography combined with graph theory approaches to explore abnormalities of topological organization in the WM structural networks of subjects with IS. Both the healthy subjects with IS and those without IS showed small-world organization. However, the insomnia group showed altered regional network properties in the fronto-limbic system. The salience and default-mode networks were also strongly linked with insomnia. Our results demonstrate disrupted WM network integrity and thus provide structural insights into the insomnia connectome. Importantly, these structural network findings have important implications for understanding the brain structural connectome in insomnia.
AUTHOR CONTRIBUTIONS
F-ML, JD, C-HL, H-FC, M-XH and ZY conceived and designed the experiments. C-HL, S-LL, L-RT and C-LT acquired the data, which F-ML, HC and ZY analyzed. F-ML, JD, TAC, Y-TX and ZY wrote the article, which all authors reviewed and approved for submission.
"year": 2017,
"sha1": "bdf6948bee4b13288ff8baad9a2f53c565a32e84",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnhum.2017.00583/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bdf6948bee4b13288ff8baad9a2f53c565a32e84",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Phytochemical Screening and Antibacterial Assay of extracts of different parts of Moringa oleifera Lam
The increasingly large number of bacteria developing resistance against orthodox antibiotics has pulled the attention of researchers toward herbal antimicrobial molecules, in the hope that they may provide useful leads into anti-infective drugs with the fewest side effects. Several secondary metabolites have been isolated from plants, including terpenoids, steroids, alkaloids, tannins, benzophenones, coumarins, and flavonoids. These herbal phytochemical substances can also serve as templates for producing more effective drugs through semi-synthetic and total synthetic procedures.
Introduction
Moringa oleifera Lam is commonly known as the drumstick tree, horseradish tree, mother's best friend, marango, Mulangay, Sajna or Ben oil tree. Moringa species have been utilized by traditional medicine practitioners in curing various diseases owing to their medicinal value (Rathi et al., 2004). Moringa oleifera is considered a miracle tree, as all parts of the plant are useful for humans. It is also known as a multipurpose plant, and almost every part of the plant is being explored by different industries. Some of the valuable uses of Moringa oleifera include medicine, nutrition, water management, livestock feed, biodiesel, and landscaping, among others (Bennet et al., 2003).
Many parts of the plant, such as the leaves, fruits, and roots, are used as vegetables (Siddhuraju et al., 2003). The plant offers a plethora of minerals and vitamins (Latida et al., 2013). Different studies have shown that various parts of Moringa oleifera, such as the stem, flowers, bark, roots, and seeds, have antimicrobial activities (Lockett et al., 2000 and Fahey et al., 2005). Moringa oleifera has been considered an antimicrobial agent since the discovery of several bioactive antimicrobial components with inhibitory activity against many pathogenic microorganisms (Fozia et al., 2012). The leaves are known to have biological properties due to the presence of useful phytochemical compounds such as saponins, flavonoids, tannins, and other phenolic compounds with antimicrobial properties (Bako et al., 2010). The seed pods (fruits) of the Moringa oleifera tree are among the most nutritive and useful parts of this versatile plant.
The root and bark of young trees are considered rubefacient, stomachic, carminative, vesicant, and abortifacient. The flowers and roots contain an antibiotic that is highly effective in the treatment of cholera. Torres-Castillo et al. (2013) stated that the antioxidant activity of Moringa plant extracts is due to the presence of polyphenolic compounds, which can act as great antimicrobial agents, alongside a spectrum of crucial phytoconstituents such as tannins, saponins, alkaloids, steroidal aglycones, reducing sugars, and terpenoids that act as cardiac and circulatory stimulants and possess anti-tumour, antipyretic, anticonvulsant, anti-inflammatory (Lockett et al., 2000), antiulcer, antispasmodic, antidiabetic, diuretic, antihypertensive, cholesterol-lowering, antioxidant, antifungal, abortifacient, and antibacterial properties (Anwar and Rashid, 2007; Renitta et al., 2009). Moringa leaves have been reported to contain more vitamin A than carrots, more calcium than milk, more iron than spinach, more vitamin C than oranges, and more potassium than bananas, and the protein quality of Moringa leaves rivals that of milk and eggs.
The sole aim of the present study is to explore the phytochemical profile and antibacterial potential of methanol, petroleum ether, ethyl acetate, and aqueous extracts of Moringa oleifera leaves, flowers, and bark. The antibacterial activity of the above extracts was evaluated using Gram-positive and Gram-negative strains of human pathogenic bacteria.
Collection of plant material
The plant material of Moringa oleifera (fresh leaves, flowers, and bark) used in this study was collected from the botanical garden of A.N. College, Patna, Bihar, India.
Preparation of plant extracts
Fresh plant samples were collected, cleaned, washed, air dried, homogenized to a fine powder using a mechanical stirrer, and stored in airtight bottles. For each plant part (leaves, flowers, bark), 25 g of the powder was weighed and extracted with ethanol, methanol, petroleum ether, ethyl acetate, or water by soaking the powdered material in 200 ml of solvent and running it in a Soxhlet apparatus. The extraction continued until the solvent became transparent. Each extract was dried in a rotary evaporator at 40 °C for 36 hours. Extract yields were weighed and are given in Table 1. The extracts were kept in sterile bottles under refrigerated conditions until use.
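Percent extraction yield per solvent follows directly from the 25 g starting mass; a one-line Python sketch (the 2.1 g example mass is hypothetical, not a reported value):

```python
def percent_yield(extract_mass_g, powder_mass_g=25.0):
    """Yield (%) = 100 * dried-extract mass / starting powder mass."""
    return 100.0 * extract_mass_g / powder_mass_g

print(percent_yield(2.1))  # a hypothetical 2.1 g extract -> 8.4 %
```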
Identification, selection and maintenance of test bacteria
Four bacterial cultures were selected for the antibacterial assay: Escherichia coli and Pseudomonas aeruginosa (Gram-negative bacteria), and Streptococcus pneumoniae and Staphylococcus aureus (Gram-positive bacteria). These clinical isolates were obtained from the Microbiology Department laboratory of Patna Medical Hospital. The isolates were cultured and identified by microscopy (Gram staining procedures), colony examination and biochemical tests (BPC, 1994). Subcultures were maintained in nutrient broth at 37 °C and were grown on nutrient agar to perform the antibacterial assays.
Antibacterial activity
The antibacterial activity of the selected plant part extracts was assayed by the agar disc diffusion method in nutrient agar medium, with slight modifications. About 25 ml of medium was poured into each Petri plate. After solidification of the medium, the bacteria were inoculated with a spreader on the surface of the plates. Discs (6 mm) of Whatman No. 1 filter paper were loaded with 25 µl of each solvent extract, along with a positive control and a negative control, and air dried. The discs were placed on the medium and, after incubation at 37 °C for 24 h, zones of inhibition were observed and measured in mm (Murray, 2009).
Phytochemical screening
The selected parts (leaves, flower, bark) of Moringa oleifera Lam. were dried, finely powdered and extracted for this study (Table 1). The results of the phytochemical analysis revealed varying constituents among the extracts. Alkaloids, glycosides, terpenoids, saponins and tannins were detected in all leaf extracts, whereas flavonoids and steroids were detected only in the aqueous leaf extract (Table 2). The flower extracts were devoid of steroids, terpenoids and tannins (Table 3), while the bark extracts did contain steroids (Table 4).
Antibacterial testing
In the case of Pseudomonas aeruginosa, the ethyl acetate extract showed a better result (12 mm) than the other solvents. The petroleum ether and ethyl acetate extracts were most active against Streptococcus pneumoniae. The Moringa leaf extracts were more active against the pathogenic bacteria than the flower and bark extracts. Many researchers have shown that Moringa oleifera parts such as the bark, stem, flowers, fruit, roots and seeds have antimicrobial activity (Fahey et al., 2005; Lockett et al., 2000). Moringa oleifera has been considered a bactericidal agent since the discovery of several antibacterial constituents with inhibitory activity against pathogenic bacteria. To find new antimicrobials that would be effective against drug-resistant strains, phytochemical screening and antibacterial investigation of extracts of the selected M. oleifera parts were carried out. It is suggested that this plant drug would have enormous health benefits with few of the side effects that are common with synthetic drugs. During the course of this research, the investigations identified many bioactive phytochemical compounds with promising antibacterial activity. Various researchers have reported antimicrobial activity of Moringa oleifera against a variety of pathogens, including S. aureus, S. albus, S. pyogenes, P. aeruginosa, Salmonella gallinarum, B. subtilis and E. coli (Kumar et al., 2012). The antimicrobial activity of Moringa oleifera seeds against bacterial strains (Pasteurella multocida, Escherichia coli, Bacillus subtilis and Staphylococcus aureus) was evaluated by Amer Jamil et al. (2008).
Phytochemicals are present in virtually all plant tissues of Moringa oleifera, e.g. the leaves, roots, stem and fruits. The presence of phytoconstituents like alkaloids, flavonoids, tannins and saponins is responsible for antibacterial activity. Alkaloids, phenols, flavonoids and glycosides have a number of biological activities and strong antibacterial potential (Robbers et al., 1996). Alkaloids have exhibited promising activity against H. pylori (Hadi and Bremner, 2001) and a number of other bacterial strains (Sinha et al., 2001; Saeed and Sabir, 2001; Khan et al., 2001; Kren and Martinkova, 2001). Similarly, a few glycosides have shown antibacterial activities, and the antibacterial potential of terpenoids has been documented.
Terpenoids are bioactive molecules that form part of plants' defence mechanisms as phytoprotectants (Morrissey and Osbourn, 1999). Studies have shown that the antimicrobial potential of M. oleifera leaf extracts may be attributable to the presence of an array of phytochemicals.
This study revealed that Moringa oleifera is a highly promising medicinal plant with multiple uses. Its high antibacterial activity and effective secondary metabolites make it suitable for newer and safer nutraceutical and pharmaceutical products. Moringa oleifera parts could be a good source of drugs against bacterial infection if found effective and non-toxic in animal trials. Detailed study is needed to investigate the active compounds in these plant parts responsible for the antibacterial activity, which may help in designing more effective chemotherapeutic agents to treat bacterial infections. The results strengthen the scientific database. | 2020-04-02T09:22:13.469Z | 2020-02-10T00:00:00.000 | {
"year": 2020,
"sha1": "3152171b7746847ef1bc81f986075218edee7d69",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/9-2-2020/Nishu,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1f258769f3c4b6fef7f1c59ded8f6b3834113c86",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Biology"
]
} |
56398870 | pes2o/s2orc | v3-fos-license | http://section.iaesonline.com/index.php/IJEEI/index A Review on Emotion Recognition Algorithms Using Speech
In recent years, there has been growing interest in speech emotion recognition (SER), i.e. recognizing emotion by analyzing input speech. SER can be considered a pattern recognition task comprising feature extraction, a classifier, and a speech emotion database. The objective of this paper is to provide a comprehensive review of the various literature available on SER. Several audio features are available, including linear predictive coding coefficients (LPCC), Mel-frequency cepstral coefficients (MFCC), and Teager energy based features. For the classifier, many algorithms are available, including the hidden Markov model (HMM), Gaussian mixture model (GMM), vector quantization (VQ), artificial neural networks (ANN), and deep neural networks (DNN). In this paper, we also review various speech emotion databases. Finally, recent related works on SER using DNN are discussed.
INTRODUCTION
Speech emotion recognition (SER) is one of the topics in speech processing that has been continuously researched. Its precursor, simple speech recognition, dates back to the late fifties [1]. In today's world, SER has become quite a research hotspot, as indicated by the growth in published papers each year. Figure 1 shows a rough estimate of IEEE published papers related to SER; the data were gathered from IEEE Xplore. The aim of an SER system is to extract the emotion from unknown input speech [2]. While each individual may have their own abstract emotional state, emotions are generally grouped into the universal categories of happiness, anger, surprise, fear, sadness and neutral. Some researchers use their own categories; for example, the database utilized in [3] categorized emotions into ten types, namely joy, acceptance, fear, surprise, sadness, disgust, anger, anticipation, neutral, and others. Although the classification of emotion might differ, the objective of SER is still the same: to extract the emotional state. In [4], it is stated that SER is more or less a pattern recognition system. Figure 2 shows a typical speech emotion recognition (SER) system.
Figure 2. Typical Speech Emotion Recognition System
The applications of SER span several sectors. In banking, an auto caller equipped with SER may assist in detecting the emotion of the customer and generate custom responses based on the result [5][6][7]. In education, an e-learning portal with SER can detect user emotions such as frustration and stress, determine whether studying is proceeding well, and apply appropriate countermeasures [8]. Yet another application is in transportation, where, in the near future when vehicles are capable of auto-driving, the system can take over the steering wheel if an unhealthy level of emotion is detected in the driver [9].
REVIEW ON AUDIO FEATURES EXTRACTION
In this section, various audio features used in SER are reviewed, including linear predictive coding coefficients (LPCC), Mel-frequency cepstral coefficients (MFCC), and the Teager energy operator (TEO). The extraction process goes through three steps. First, pre-emphasis is a filter used to emphasize the high-frequency band by increasing its amplitude and decreasing the amplitude of the lower frequencies. In speech, the higher frequencies typically hold more important information, while the lower frequencies may be mingled with noise. It should be noted that in modern speech recognition systems pre-emphasis has lost its importance and been replaced by channel normalization in later steps, but for the sake of a simple yet effective method, a high-pass filter is sufficient. Second, frame blocking and windowing decompose the speech signal into short sequences called frames for speech analysis. Several windows can be utilized, such as the rectangular and triangular windows, but the Hamming window is often chosen as it softens the edges created by framing, again favoring simplicity. Third is the feature extraction itself. According to [1], speech features can be categorized into four groups, namely continuous, qualitative, spectral, and TEO-based features, as shown in Figure 3.
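As a rough illustration of the first two steps, the sketch below implements pre-emphasis and Hamming windowing in Python with NumPy. The filter coefficient of 0.97 and the 25 ms frame / 10 ms hop geometry at 16 kHz are common choices assumed here, not values prescribed by the reviewed papers.

import numpy as np

def preemphasis(x, alpha=0.97):
    # First-order high-pass filter: y[n] = x[n] - alpha * x[n-1]
    return np.append(x[0], x[1:] - alpha * x[:-1])

def frame_and_window(x, frame_len=400, hop=160):
    # Split into overlapping frames, then taper each with a Hamming window
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx] * np.hamming(frame_len)

x = np.random.randn(16000)                 # stand-in for 1 s of 16 kHz speech
frames = frame_and_window(preemphasis(x))  # shape: (n_frames, 400)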
Linear Predictive Coding Coefficients (LPCC)
Linear predictive coding (LPC) is a digital method for encoding an analog signal [10]. LPC predicts the next value of a signal from the values it has observed in the past, forming a linear pattern. The main objective of LPC is to obtain a set of predictor coefficients $a_k$ that minimize the mean squared prediction error $E$. The LPC coefficients are obtained by minimizing
$$E = \sum_{n} \Bigl( s[n] - \sum_{k=1}^{p} a_k \, s[n-k] \Bigr)^{2},$$
where $s[n]$ is a frame of the speech signal and $p$ is the order of the LPC analysis. LPC encoding generally gives satisfactory quality speech at a low bit rate and supplies accurate approximations of speech parameters. Although LPCC can be considered one of the more traditional speech features, LPC has contributed to the overall recognition of emotion. In [11], LPCC was used as one of the features and 86.41% recognition was achieved.
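The minimization above leads to the well-known normal (Yule-Walker) equations, which can be solved efficiently with the Levinson-Durbin recursion. A minimal sketch follows; it assumes a single pre-windowed frame and uses the autocorrelation method, one standard (though not the only) way to compute the LPC coefficients.

import numpy as np

def lpc(frame, p):
    # Autocorrelation method + Levinson-Durbin recursion, order p
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(p + 1)])
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, p + 1):
        k = -(r[i] + np.dot(a[1:i], r[i-1:0:-1])) / err  # reflection coefficient
        a[1:i] = a[1:i] + k * a[i-1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a  # prediction polynomial A(z) = 1 + a1*z^-1 + ... + ap*z^-p

coeffs = lpc(np.hamming(400) * np.random.randn(400), p=12)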
Mel-Frequency Cepstral Coefficients (MFCC)
The Mel-frequency cepstral coefficients (MFCC) are one of the most popular audio features [12,13]. MFCC is a representation of the speech signal in which a feature called the cepstrum of a windowed short-time signal is derived from the FFT of that signal. The frequency axis is then warped to the mel scale using a log-based transform, and the result is decorrelated using a modified Discrete Cosine Transform [14].
The steps to extract MFCC features include pre-emphasis, frame blocking and windowing, FFT magnitude, a Mel filterbank, log energy, and DCT, as explained in [13]. MFCC utilizes the mel scale, which is tuned to the frequency response of the human ear. Because of this, MFCC has proven invaluable in the speech recognition field and has been integrated into emotion recognition [15]. According to [1], spectral audio features such as MFCC are best suited for N-way classifiers.
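In practice, the whole chain is usually delegated to a library. The sketch below assumes the third-party package librosa is installed; the file name, sample rate and coefficient count are illustrative choices, not values prescribed by the reviewed papers.

import librosa

y, sr = librosa.load("speech.wav", sr=16000)        # hypothetical input file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 coefficients per frame
print(mfcc.shape)                                   # (13, n_frames)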
Teager Energy Operator (TEO)
The Teager Energy Operator (TEO) was proposed by Herbert M. Teager and Shushan M. Teager in 1983. In their article, they argued that the speech model of the time was inaccurate due to its linear finite characteristics, and they proposed a model involving a nonlinear process. In a later article, they generated a plot implying the energy creating the sound, but the algorithm was not specified [16]. The work was further extended in [17], and the Teager Energy Operator has since been defined for both real and complex continuous signals. For a discrete-time signal $x[n]$, TEO can be defined as
$$\Psi\{x[n]\} = x^{2}[n] - x[n-1]\,x[n+1].$$
TEO has been used in various speech signal applications. In [16], formants of vowels are tracked using TEOs. In SER, TEO features are used by [18] to make the system more robust in noisy environments. Moreover, TEO-based features are suitable for detecting the stress level of an emotion [1].
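The discrete definition above is a one-liner in code. The following sketch evaluates TEO on a synthetic 440 Hz tone, purely as an illustration.

import numpy as np

def teager_energy(x):
    # Discrete TEO: psi[n] = x[n]^2 - x[n-1]*x[n+1] (valid for indices 1..N-2)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

tone = np.sin(2 * np.pi * 440 * np.arange(1600) / 16000)  # 440 Hz test tone
print(teager_energy(tone)[:3])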
Summary of Various Audio Features
The features that can be extracted are varied, but they can be grouped into four distinct groups, namely continuous, qualitative, spectral, and TEO-based features. These features can be used as sole determinants, but they are often combined to generate a more distinguishable pattern for the system. Table 1 shows the strengths and weaknesses of various audio features. We selected MFCC due to its suitability for N-way classifiers and DNNs. Moreover, many researchers have used MFCC as the audio feature, so our proposed system can be benchmarked against other research. LPC on its own is not as reliable, as seen by the fact that it is often combined with other feature extraction methods.
MFCC. Strengths: tuned to a scale suitable for the human ear; alongside LPCC, considered one of the standard extracted features, even more so in SER; best suited for N-way classifiers. Weakness: being in spectral form, MFCC is sensitive to noise.
TEO. Strengths: a nonlinear approach, which is suitable for speech; superior detection of the stress levels of emotion. Weakness: more complicated computation as compared to LPC.
REVIEW ON CLASSIFIERS
After the SER system extracts the desired features from the audio speech data, the next step is to pass the data on to the classifier. The primary job of the classifier is to determine the unrevealed emotion of the user using a set of defined algorithms and functions. Usually these classifier evaluations are performed using a single database or dataset, in one language. Up until now, there has been no agreed standard on which classifier is best, but many have been evaluated to achieve better recognition. The most commonly used classifiers are GMMs, HMMs, SVMs, ANNs and k-NN [1]. In this section, the three popular classifiers HMM, GMM and VQ are discussed in brief and compared with the classifier used in this project, the deep neural network (DNN), an extended version of the ANN.
Hidden Markov Model (HMM)
The hidden Markov model (HMM) consists of a first-order Markov chain whose states are hidden from the observer. This means that while the observer cannot directly examine the internal behavior of the model, the temporal structure of the data is recorded by these states. HMMs can be considered statistical models that describe sequences of events [2]. To express this in mathematical terms, for modeling a sequence of observable data vectors $x_1, \dots, x_T$ by an HMM, we assume the existence of a hidden Markov chain responsible for generating this observable data sequence. Let $N$ be the number of states, $\pi_i$, $i = 1, \dots, N$, be the initial state probabilities of the hidden Markov chain, and $a_{ij}$, $i = 1, \dots, N$, $j = 1, \dots, N$, be the transition probability from state $i$ to state $j$. Assuming the true state sequence is $q_1, \dots, q_T$, the likelihood of the observable data is given by
$$p(x_1, \dots, x_T, q_1, \dots, q_T) = \pi_{q_1}\, b_{q_1}(x_1) \prod_{t=2}^{T} a_{q_{t-1} q_t}\, b_{q_t}(x_t),$$
where $b_j(\cdot)$ denotes the emission density of state $j$. The HMM is also a sequential generating probabilistic model, which means that the classifier acts on the assumption that neighboring frames are closely related. While this is valid for speech signal frames, there are better alternatives due to this assumption and the algorithm's complexity [19].
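For a concrete feel, the sketch below fits one Gaussian-emission HMM per emotion class. It assumes the third-party package hmmlearn is installed; the feature arrays are random stand-ins, and in a real system each class model would be trained on that emotion's MFCC frames, with a test utterance assigned to the class whose model scores it highest.

import numpy as np
from hmmlearn import hmm

X_angry = np.random.randn(300, 13)   # stand-in for MFCC frames of one class
lengths = [100, 100, 100]            # utterance boundaries within X_angry

model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
model.fit(X_angry, lengths)

# Score a test utterance; repeat for each class model and pick the best.
print(model.score(np.random.randn(80, 13)))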
Gaussian Mixture Models (GMM)
The Gaussian mixture model (GMM) is an alternative generating probabilistic model, which implies that for a particular word we can form a multivariate Gaussian density model representing all the frames [19]. As with the HMM, the GMM can be expressed in mathematical terms. Let $x_t(w)$ be the $t$-th frame of the isolated word $w$. The probability of generating the frame $x_t(w)$ with a GMM can be computed as
$$p(x_t(w)) = \sum_{m=1}^{M} c_m\, \mathcal{N}(x_t(w); \mu_m, \Sigma_m),$$
where $M$ is the number of mixtures, $c_m$ is the probability of the $m$-th mixture, and $\mathcal{N}(\cdot; \mu_m, \Sigma_m)$ is the multivariate Gaussian density function with mean vector $\mu_m$ and covariance matrix $\Sigma_m$. Compared to HMMs, GMMs are superior in training and testing due to their efficiency in modeling multi-modal distributions as a whole. GMMs are used in SER when global features are the main focus. Because of this, however, GMMs are not suited to modeling the temporal structure.
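A minimal GMM-based SER classifier trains one mixture model per emotion and classifies by maximum average log-likelihood. The sketch below assumes scikit-learn is installed and uses random stand-in features; the 8-component, diagonal-covariance setup is a common but arbitrary choice.

import numpy as np
from sklearn.mixture import GaussianMixture

train = {"happy": np.random.randn(500, 13),   # hypothetical MFCC frames
         "sad":   np.random.randn(500, 13)}

models = {emo: GaussianMixture(n_components=8, covariance_type="diag").fit(X)
          for emo, X in train.items()}

test_frames = np.random.randn(120, 13)
scores = {emo: m.score(test_frames) for emo, m in models.items()}
print(max(scores, key=scores.get))            # predicted emotion label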
Vector Quantization (VQ)
Vector quantization (VQ) is a process of mapping the feature vectors of a test utterance to the best-matching feature vectors of the reference models [20]. Compared to other techniques such as HMM, VQ's advantage is its low computational burden, owing to its straightforward approach. The efficiency is due to its use of compact codebooks for the reference models and the codebook search [21]. While basic VQ appears convenient, because the vectors are jumbled up VQ does not take into account the temporal evolution of the signals.
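A common way to build the codebooks is k-means clustering, with classification by lowest average quantization distortion. The sketch below assumes scikit-learn and reuses the hypothetical train and test_frames arrays from the GMM sketch above; the 32-codeword size is illustrative.

import numpy as np
from sklearn.cluster import KMeans

codebooks = {emo: KMeans(n_clusters=32, n_init=5).fit(X)
             for emo, X in train.items()}

def distortion(codebook, frames):
    # Mean squared distance from each frame to its nearest codeword
    d = codebook.transform(frames).min(axis=1)
    return float(np.mean(d ** 2))

scores = {emo: distortion(cb, test_frames) for emo, cb in codebooks.items()}
print(min(scores, key=scores.get))   # lowest distortion wins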
Artificial Neural Network (ANN) and Deep Neural Network (DNN)
The term artificial neural network (ANN) is commonly used for a system that imitates the flow of neurons. Information is received at the input and flows from one node to another until it reaches the output. Through this process, the system learns about the given input. Three branches of ANN are discussed, including the feedforward neural network, the deep neural network and the convolutional neural network.
The feedforward neural network was the first type of neural network developed. Its process is the most basic of all: the data is forwarded through an input layer to a single hidden layer, then to the output layer, with no loops or cycles. A deep neural network expands the possibilities by adding more layers in the hidden segment [22,23]. An interesting characteristic of DNNs is that they can learn high-level invariant features from raw data. The convolutional neural network (CNN), as shown in Fig. 4, is inspired by the visual cortex, where cells are activated according to their sub-regions. Applying that to ANNs, in a CNN neurons are connected to their sub-regions first before passing information to the next layer, and some sub-regions may overlap. This contrasts with other neural network architectures, where each neuron is independent [25]. While CNNs are highly sophisticated and can be used for SER, they are specifically suited to image processing and recognition due to the convolutional layer.
The classifier is the algorithm that determines how these features are manipulated and translated into recognized emotion. Common classifiers are HMM, SVM, GMM and ANN. Here a DNN is used, a more sophisticated version of the feedforward ANN. Table 2 shows that the ANN boasts deep potential for pattern recognition, provided that more layers are supplied. Its weakness, the inconvenience of adding a new emotion, can be solved simply by consolidating all initial parameters at the start. This claim is further supported in Table 3, where using a DNN may generate more accurate recognition compared to other classifiers.
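To make the DNN choice concrete, the sketch below builds a small feedforward network in Keras (TensorFlow assumed installed). Inputs are utterance-level feature vectors, e.g. MFCC statistics; the layer sizes, the 39-dimensional input and the seven-class softmax output are illustrative assumptions rather than the configuration of any reviewed paper.

import numpy as np
import tensorflow as tf

X = np.random.randn(535, 39).astype("float32")  # hypothetical feature vectors
y = np.random.randint(0, 7, size=535)           # hypothetical emotion labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(39,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),   # the "deep" hidden part
    tf.keras.layers.Dense(7, activation="softmax"),  # one unit per emotion
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.1, verbose=0)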
REVIEW ON SPEECH EMOTION DATABASE
To complete the process of SER, the system requires a database for training and testing. An emotion database generally consists of various audio recordings labeled with their appropriate emotions. In this section, the discussion is directed towards the number of databases used, the methods of obtaining the datasets, the variety of emotions categorized, and the challenges most researchers face in obtaining these databases.
Usually a single SER system relies on only one database, to reduce data variance due to external factors such as different accents. While most systems are supported by one database, some studies utilize more, such as [26], which used the Berlin emotional speech database (EMO-DB) in combination with the German FAU Aibo emotion corpus (FAUAEC). With that said, these databases still use only one language: German. As previously mentioned, there are external factors that can affect the speech features that are extracted.
The closest attempt at integrating multiple databases was performed by [27], using 6 standard databases (AVIC, DES, EMO-DB, eNTERFACE, SmartKom, SUSAS) in a cross-corpora and multilingual evaluation experiment. An alternative is to use a database that already integrates multiple languages, such as the INTERFACE corpus, which supports English, Slovenian, Spanish, and French.
Another aspect to consider is how these speech emotion data are obtained. One may argue that truly authentic emotion can only be captured in the moment, but spontaneous speech is difficult to record. To ensure proper speech processing, the system requires good audio quality, which is simply not feasible without a proper sound recording setup and environment. Therefore, the most used method is for professional or experienced actors to express the emotion through acting, and then to label each speech segment with its appropriate category. The EMO-DB and the LDC Emotional Prosody Speech and Transcripts are two examples of actor-based databases. Generally, recording is conducted under ideal conditions (i.e. in a studio with minimal noise interference).
Another interesting method of collecting data is to take speech from existing media, such as movies and television recordings. While the source can still be considered a "professional actor", the collection method differs from the first while maintaining the general audio quality. This, however, meets the problem of copyright and fair usage. An example of research that utilizes this method is [28]. Finally, there are researchers who collect their data from non-professional actors. These databases are generally self-made in the local environment. While home-made database creation may be more convenient for the researcher, it becomes difficult to benchmark the results against other papers.
There are variations in the emotions that are categorized. The German database, for example, groups the emotions into anger, boredom, disgust, fear, happiness, neutral, and sadness. The more emotion categories a database has, the more challenging it is for the SER system to achieve high accuracy. To address this, some studies such as [3] merge or omit certain emotions with similar attributes, e.g. "disgust" and "anger", and focus on emotions with distinct variations.
There are various other factors to consider when choosing an appropriate database, such as the number of actors, language, ethnicity, and word utterances versus whole sentences, but one factor that has deterred some young researchers is that some databases are hidden behind a paywall. This leads to either creating their own database or using available open-source databases. Table 3 shows various databases along with the audio features and classifiers used by other researchers.
RELATED WORKS AND PROPOSED SER SYSTEM
Fortunately, SER is a topic with abundant papers in recent years. In 2017 alone, more than 150 papers related to SER were published, covering different angles of approach, new combinations of features, implementations of a variety of algorithms, and optimization of results. A brief sample of the research methodologies in 10 papers can be observed in Table 3, while Table 4 shows additional closely related papers, i.e. SER using DNN. Although much research has been conducted on SER using various audio features, classifiers, and databases, there is still a need to further improve the accuracy and processing time of SER systems. With the large amount of emotional utterances, more variation in emotion classification should be possible. While this leaves room for future research to improve, the best recognition rate is only 57.9%, using ELM-DNN. Based on Table 4, we propose the SER system shown in Figure 5. The raw audio received from EMO-DB is labeled with the respective emotions. These audio files are then placed in temporary storage for feature extraction. The next step is feature extraction using MFCC. Finally, the extracted features are classified using a DNN. The performance evaluation of the proposed system will be discussed in our next paper.
CONCLUSION
This paper has presented a comprehensive review of emotion recognition using speech analysis and the design of an SER system. A typical SER system consists of at least feature extraction, a classifier, and a speech emotion database. From the critical literature review of the various audio features, we selected MFCC due to its popularity and suitability, while the deep neural network was selected as the classifier due to its higher accuracy when more data is available. A comprehensive and popular emotion database, EMO-DB, was selected. Further research includes implementation of the proposed SER system using Matlab, performance evaluation, and benchmarking. | 2019-02-04T15:23:10.236Z | 2018-03-11T00:00:00.000 | {
"year": 2018,
"sha1": "c99a2c6cb683629635737b4a8189ea4ca76fe7d6",
"oa_license": "CCBY",
"oa_url": "http://section.iaesonline.com/index.php/IJEEI/article/download/409/263",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "981217bdcb1b7eba19275351337fdc98eaefea12",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
55944003 | pes2o/s2orc | v3-fos-license | PERSPECTIVES FOR DEVELOPMENT OF RURAL TOURISM IN THE AREA OF NOVI SAD
Novi Sad is the second largest city in the Republic of Serbia and the administrative center of the Autonomous Province of Vojvodina. Its surroundings are a promising area for the development of rural tourism, and some initiatives to develop it have already been launched. These initiatives, in conjunction with other elements of tourist supply, can enlarge the tourist offer. Rural tourism development should be based on a rich pension and out-of-pension tourist offer. The decisive factor in gaining competitive advantage in the tourism market is the opportunity to develop a wide variety of up-market tourist attractions. The presence of natural and human (anthropogenic) resources must be accompanied by marketing and management that lead to concrete results. In this regard, the Tourism Organization of Novi Sad, local administrations and the private sector have an important role. The article emphasizes the importance of public-private partnerships and a management approach as the basis for gaining competitive advantage in the tourist market. The paper also provides an overview of the resources that represent the potential for future rural tourism development. The expectation is that, through complementary development with other forms of tourism, rural tourism will contribute to the overall economic development of this area.
Introduction
Novi Sad is the capital of the Autonomous Province of Vojvodina and the second largest city in the Republic of Serbia. Located in the northern part of Serbia, it belongs by its whole natural and geographic position to the Pannonian Plain. The rich natural and geographic resources in the surroundings of Novi Sad make it possible to invest in rural areas and develop them further. An important role must be given to the plan for sustainable tourist development, which is the basic point and precondition of any further planning and activity in rural tourism.
The aim of this paper is to research the possibilities for sustainable tourist development of rural areas in the Municipality of Novi Sad and to draw attention to the influence that rural tourism and overall social growth exert on ecological processes and the quality of the environment itself.
Material and method
The subject of the article is the status of and conditions for the development of rural tourism in the area of Novi Sad. The aim is to point out potential strategic directions for the future development of the tourist destination Novi Sad in the context of sustainable development. In this way, the obviously great potential for further tourism development would be realized in a practical sense. The former policy of undifferentiated marketing did not give results. A strategy of market focus and integrated marketing, with clearly specified tourism aspects and consistent implementation, is the basis on which future development should insist. The methods used in this paper are the inductive-deductive method, the qualitative method and the comparative method.
Results and discussion
The fast scientific, technical and technological progress that was immanent to all industrial revolutions brought about an enormous use of natural resources. The availability of these resources, as of all factors of production, has under such a model of accelerated development been slowly reduced over time. This holds for the whole of mankind.
The model of rapid industrial development itself has exhausted the main generative forces and factors and has brought about serious disturbances of the natural environment, i.e. its ecological pollution. The term sustainable development appeared at the beginning of the eighties of the 20th century. It took into account the establishment of positive relations between the human need for a better quality of life, economic development and the disturbed environment.
In the Republic of Serbia, 10% of the territory is protected by law. The Ecological Network for Serbia now contains 101 ecologically important areas, designated on a proposal of the Bureau, i.e. the Ministry of Agriculture and Environmental Protection.
Novi Sad as a tourist destination has a fairly well preserved natural environment. Fruška Gora should be mentioned here in the first place. In 1961 it was proclaimed a national park, and it has since been recognized as a territory with special natural, cultural and historical values and sights, confirmed by the adoption of a corresponding law and spatial plan. It is one of five national parks in Serbia, with 25,400 ha protected by the Government.
The ecologically especially well preserved areas in the Municipality of Novi Sad, which are highly valued for its further sustainable development, are as follows: 1) National Park Fruška Gora.
2) Protected natural assets - nature parks (Begečka jama, Tikvara, Panonija). 3) Protected natural assets - nature reserves (the Kovilj-Petrovaradin marshes). Some 71 nature reserves have been proclaimed in Serbia so far, with a total area of 84,000 hectares; several special nature reserves cover areas of more than 100 hectares each. 4) Internationally important bird areas - IBA (the Kovilj marsh, Fruška gora, the Danubian lumber section). Serbia has 35 regions which are important bird areas and which satisfy the strict requirements of the "IBA criteria", respecting all the rules and regulations established by the organisation "BirdLife International".
All these areas testify to the very high quality of the preserved ecological system, i.e. the natural environment as a whole, and to a high level of maintenance of and respect for all ecological standards. If further tourist development is pursued, then all the elements for the preservation of these standards should be considered.
It is indisputable that the anthropogenic resources of Novi Sad are of high quality and rich in historical background. Keeping in mind that, on the one hand, tourism has so far not been paid the attention it deserves and, on the other, that these anthropogenic resources are only moderately well preserved, the maintenance of all segments of this rich cultural and historical heritage should be taken as an imperative in further planning of tourist development.
This is the condition for further tourist development, and it implies certain investments in these resources for their further use.
The division of the cultural and historical background could be made in the following way: 1. Cultural and historical entities.
- Fruška gora with its monasteries and other sights; - a great number of villages across the whole territory of the Municipality; - farms ("ranches", salaši) as a specific feature of the Vojvodina region.
2. Important places and works with monumental and artistic features.
Particular places within all of the above-mentioned entities with an important cultural and historical background are numerous and call for special attention. Many of them played a very important role not only in the history of Serbia but also of Europe. Many have the characteristics of cultural monuments with rare artistic, historical and aesthetic value.
3. Folklore background.
The cultural and ethnic wealth of the many nations and nationalities which have coexisted for centuries in the region of Vojvodina is immeasurable. It can be a topic of special-interest tourism from Western Europe, America, Japan, etc. Ethnic contents, as investigations have already shown, appear exotic to tourists from these countries.
4. Manifestation values.
Various manifestations typical of Vojvodina and of the customs of its nations and nationalities (its inhabitants) do and may enrich the cultural contents of numerous rural areas.
5. Archeological findings.
The localities on Fruška gora, Petrovaradin, etc. speak about the tempestuous history of the people living here in the previous centuries.
Thanks to the large number of natural and social resources, there are great opportunities for the development of special-interest tourism. In this sense there is an opportunity for the complementary development of rural tourism with tourism of special interests.
Tourism on the Danube River
The Danube is the second largest European river waterway. In terms of transport and trade it has become even more important since the digging of the Rhine-Main-Danube Canal. It is navigable along its whole length; the 588 km stretch, or 13.5% of its total waterway, makes it possible to use this natural and geographical advantage while keeping in mind the sustainable development of the environment. The development of nautical tourism, accompanied by sports programs, recreation on the water, etc., is the natural outcome of the above. There are many beaches on the river banks that make it possible to develop restaurant management and other accompanying services during a stay on the river. It is also possible to develop an ethno-village near the river, which can attract the attention of cruising tourists and supply them with characteristic rural products.
Agritourism supply
Thanks to its natural, ecological and environmental characteristics, the rural areas are very interesting and promising for the development of agritourism. Adequately built cottages for rest in the countryside, characterized by silence and tranquility, are real oases for people living in highly urbanized industrial centers, and demand for them is widespread in both domestic and foreign markets. In the past, the development of this form of tourism was supported only declaratively, but recently some new ideas have been revived. Thus, in the course of 2004, with the aim of promoting Serbia as a transit destination on the way to the Olympic Games in Athens, two typical farms (ranches, "salaši") were set up, with all the characteristics of life and customs in the region of Vojvodina (typical farms No. 84 and 137), and were included on the tourist map of Serbia. The numerous villages in Vojvodina are the basis for further planning, keeping in mind new tendencies in the West (the so-called "return to the origins"), the idea of healthy food, old customs and crafts, and the ever-growing popularity of typical ethno-contents such as music, folklore and naive painting.
Hunting and fishing
Vojvodina has a long tradition of hunting, but it also cares for its fauna. In the past twenty-five years, its well-known hunting grounds have unfortunately been neglected, such as Plavna, Morović, Karakuša, Karadjordjevo, the Sombor woods, the Apatin bogland, the Deliblato Sands, Subotica, and probably the largest hunting area, Fruška gora.
Once, Fruška gora used to be the highest-level hunting area for diplomats. Because of its natural beauty, it offers ideal conditions for the settlement of game, especially wild boar, roe deer and small game. With relevant laws and regulations watching over sustainable development and respecting all ecological standards, and with corresponding investments in the area, Fruška gora may again become a great tourist asset.
Photo safari
The diverse animal and plant world of Vojvodina, from orchards to conifers and from roe deer and rabbits to deer and eagles, makes it an interesting region for tourists seeking such experiences. The most diverse plant species prove that the nature is intact, and all nature lovers may enjoy themselves here. Vojvodina has rich bird habitats, with some very rare species, including the black stork (Ciconia nigra), swan (Cygnus olor), white-tailed eagle (Haliaeetus albicilla), black kite (Milvus migrans), night heron (Nycticorax nycticorax), great white heron (Egretta alba) and small white heron (Egretta garzetta).
Monastery tourism
Fruška gora, with its seventeen monasteries, has great potential for the development of this form of tourism. The monasteries are a cultural, historical and religious treasure, and the area is often called the "second Serbian Holy Mountain - Serbian Mount Athos". Due to long-term lack of investment and neglect, these monasteries (Beočin, Bešenovo, Divša, Grgeteg, Jazak, Krušedol, Kuveždin, Mala Remeta, Velika Remeta, Novo Hopovo, Staro Hopovo, Petkovica, Rakovac, Privina Glava, Šišatovac, Panek and Ravanica), which have been burned and devastated in the course of their history, should be adapted and reconstructed. This should be a priority task of the government, in terms of both culture and religion.
Wine tourism
The districts known for grape growing and wine production record significant income from the numerous tourists who come at grape-picking time to attend the many wine festivals. The income comes both from sales, i.e. wine consumption, and from the expenses tourists incur during their stay at a given destination. Sremski Karlovci has important potential for the development of this form of tourism. In Vienna the name of a rosé wine well known as "Karlovački tovjan" is protected, and among all Srem wines, the Karlovci wines made of raisins have acquired a good reputation (especially the black wines). Black grapes are used to make the famous "Karlovački ausbruch" (i.e. juice flowing from the grapes themselves), "Cipar wine", "Tropf Vermut", "Plenaš" and ordinary vermouth.
Traditional grape-picking events are held in Sremski Karlovci every autumn, followed by other festivities. Quality wines from this region have received many flattering awards at competitions held worldwide.
The production of many wine brands, for which natural resources and wine cellars are available, may attract a great number of tourists, both domestic and foreign.
Keeping in mind the tendency of the tourist market to move beyond mass tourism, increased interest in special-interest tourism has been recorded in recent years. By the end of the last decade, the model of rural development (CAP) was promoted; it assumed the multifunctional character of European agriculture and its role in the development of the economy and society as a whole. Agriculture, as a primary economic branch, has a far-reaching interest in complementary cooperation with all sectors of the economy, and the same applies to tourism.
One of the characteristics of the modern tourist market is that unique products are highly esteemed and that tourists nowadays tend to flee the uniformity that the globalization process has offered them. In this sense, the component parts of the tourist offer are more frequently local, regional or national. The role of rural households has become ever stronger, and the area of the Municipality of Novi Sad has very respectable resources in this regard.
The perspectives are as follows: 1. Informing tourists about the tradition and customs of the nations and nationalities, particularly in villages representing multiethnic communities, and enriching them. This allows wide creativity in devising various programs and activities. 2. Gastronomy, i.e. the production of special local food and the preparation of "healthy food", officially called organic food production. The idea is very popular in highly urbanized countries and may form the content of catering and other events with cookery as their subject matter. 3. Getting to know the folklore and dances of all nations and nationalities. It is then quite logical to organize manifestations that could fill the cultural program throughout the year in rural areas.
4. Getting to know old crafts and tools. During the long historical development of human society there were many crafts and tools which once had an important role in rural households but were unfortunately forgotten and abandoned long ago. They are especially interesting to tourists from highly developed, urbanized industrial countries and significantly enrich the tourist offer. However, these crafts can survive only through common effort. Many organizations which do business, or plan to, in rural areas have the task of encouraging and organizing the local population and helping them obtain raw materials and dispose of goods, providing an additional income for their families. 5. Folk arts and crafts. The rich multiethnic setting makes possible varied folk arts and crafts, which may be very attractive to tourists; almost every region can boast a typical product created by the diligent hands of local residents. Folk arts and crafts are the province of the residents of different rural areas, who invest their time, skill and talent, and their products may become the basis for the development of a special branch of the economy in rural areas. 6. Cultural and sports performances. They enrich and improve the variety of the tourist offer. An important role in promoting some of the above offers might be given to the private sector, i.e. small business.
Partially developed tourist programs should be supported, particularly fiscally but also in sales, since both could enrich the forms of the out-of-pension offer. This would also contribute to an efficient presentation and market valorization of the anthropogenic and other resources, and to additional employment in the private sector without significant initial investments. The elaboration of a high-quality program of stay in the village should not be left to local resourcefulness alone; it should be a serious topic of analysis if development and results from this form of tourism are expected. The quality of the services offered, as many investigations confirm, is one of the decisive factors that tourists quote when they grade their stay at a certain tourist destination. This gives the residents of these regions a chance to express their creativity.
Conclusion
Rural tourism in Serbia began to develop in the seventies of the twentieth century. The initial phase was characterized by uncontrolled access without a clear market policy. In the new millennium it entered a phase of "dedicated development", in which the relevant state authorities allocate adequate resources to improve the development of rural tourism in certain areas.
With its natural and social resources, the area of Novi Sad is very promising for the development of rural tourism. With regard to forms, it is possible to develop all kinds of rural tourism in the area of Novi Sad.
The expectation is that rural tourism could accelerate overall economic development and prevent the negative trends plaguing rural areas (depopulation, migration to urban centers, an aging population, declining macro-economic indicators, etc.). The various rural areas offer the basis for further planning and action in this field. The approach should be planned and selective. The program of tourists' stays in all areas should be planned and justified at all levels of organized tourist activity (both vertical and horizontal). The wide range of forms of special-interest tourism is a great chance here and should be emphasized in the future. Accordingly, the plan for sustainable development should be consistently and fully respected. This is in the interest of both the hosts and the tourists from developed, highly industrialized countries who have recently shown interest in sojourns in these regions. | 2018-12-07T18:34:55.403Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "2c5d274113fadb1c47c37c6a84caee9034ceed6b",
"oa_license": null,
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0350-137X/2015/0350-137X1504069V.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2c5d274113fadb1c47c37c6a84caee9034ceed6b",
"s2fieldsofstudy": [
"Geography",
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
24495434 | pes2o/s2orc | v3-fos-license | Which factors are associated with global cognitive impairment in Wilson's disease?
Background Patients with Wilson's disease (WD) present cognitive impairment, especially in executive functions. Which other factors might be associated with global cognitive decline in these patients remains unclear. Objective To assess which factors are associated with worse performance on a global cognitive test in patients with WD. Methods Twenty patients with WD underwent cognitive assessment with the following tests: the Mini-Mental State Examination (MMSE), Dementia Rating Scale (DRS), verbal fluency test, brief cognitive battery, clock drawing test, Frontal Assessment Battery, Stroop test, Wisconsin card sorting test, Hooper test, cubes (WAIS) and the Pfeffer questionnaire. MRI changes were quantified. Patients with poor performance on the DRS were compared to patients with normal performance. Results Nine patients had poor performance on the DRS. This group had a lower educational level (9.11±3.58 vs 12.82±3.06) and a greater number of changes on MRI (9.44±2.74 vs 6.27±2.45). The presence of hyperintensity in the globus pallidus on MRI was more frequent in this group (66.6% vs 9.0%), with OR=5.38 (95% CI 0.85-33.86). Conclusion Global cognitive impairment was prevalent in this sample of patients with WD and was associated with low educational level, the number of changes on MRI and MRI hyperintensity in the globus pallidus.
INTRODUCTION
Wilson's disease (WD) is a rare genetic condition described over 100 years ago, characterized by the accumulation of copper in many organs, including the central nervous system. 1 The cognitive symptoms present in the first recorded patients were not studied in depth for some time. 2 The first studies assessing cognitive performance described changes in memory and executive functions in patients with neurological symptoms of WD, 3,4 raising doubts as to whether the motor symptoms were in fact responsible for the cognitive impairment. 3 Patients with only hepatic symptoms did not exhibit cognitive abnormalities. 5 In 2002, a study found that patients with lesions restricted to the basal nuclei displayed cognitive changes compared to healthy controls. 6 Recently, it has been noted that patients with neurological symptoms but no depressive or anxiety disorders present cognitive impairment, mainly in executive functions. Furthermore, the number of deficits on cognitive tests was associated with the intensity of brain Magnetic Resonance Imaging (MRI) changes, especially hyperintense signals on T2 images and atrophy. 7 A study of 12 patients with WD sought to correlate the topographies of hyperintensities on MRI with the cognitive profile and suggested that changes in the putamen are related to worse performance on cognitive tests. 8 However, the small number of patients and the fact that only descriptive analysis of the data was performed limit this finding.
Nevertheless, it is known that the intensity of changes on MRI and the severity of motor symptoms are related to cognitive changes. 7 Not all patients with WD exhibit deficits on cognitive tests: normal overall performance can be observed in more than 50% of WD patients. 3,7 It is not clear which factors are related to cognitive symptoms, such as disease duration, the medications used or the specific topography of T2-hyperintensity changes.
In the present study, data previously published by us was reevaluated, comparing a group of patients with WD that exhibited deficits on global cognitive tests against cognitively unimpaired WD patients.
METHODS
The methodology used to select the patients, as well as the cognitive evaluation and neuroimaging protocols, has been described in detail elsewhere. 7 A brief description is given below.
Patients. Twenty patients with WD were selected based on clinical history, physical examination, serum ceruloplasmin levels (<20 mg/dl), 24-h urinary copper excretion (>100 mcg/24 h) and ophthalmologic examination by slit lamp. All patients were followed at a Movement Disorders Clinic from September 2006 to October 2007 and had been undergoing regular treatment for at least one year, without neurological decline. Patients with severe dysarthria or anarthria that could impair speech comprehension, motor disorders that could prevent them from performing written tasks, clinical signs of hepatic encephalopathy, or depression/anxiety according to the Goldberg scale were excluded. 9 All patients signed a consent form and the study was approved by the institution's Ethics Committee.
Neurological evaluation. All patients were evaluated by the lead researcher of the study (NAFF) using the motor symptoms scale (MSS) 7,10 which assesses 13 items (bradykinesia, rigidity, postural instability, tremor, dystonia, chorea, athetosis, cerebellar disturbances, dysarthria, dysphagia, walking difficulties, psychiatric disturbances and other alterations), each graded from 0 to 3 (absent, slight, moderate and intense). Hence, the total score on the scale ranges from 0 to 39.
Cognitive evaluation. All patients underwent, on different days, a two-stage cognitive evaluation, each stage lasting around 40 minutes.
The first session was applied by the same examining neurologist (NAFF) and included the following tests: Mini-Mental State Examination; 11 a memory test of figures, 12 clock drawing; 13 verbal fluency tests (animals and FAS); CERAD naming test; 14 Stroop test 15 and the Frontal Assessment Battery (FAB). 16 The patients' relatives also answered the Pfeffer Functional Activities Questionnaire. 17 The second evaluation session was performed by an experienced neuropsychologist (CSP), with a similar duration to that of the first evaluation, involving the following tests: Mattis Dementia Rating Scale (DRS), 18 Wisconsin Card Sorting Test (WCST) and the Hooper and Cubes tests (subscale of Wechsler intelligence scale). DRS performance parameters. In order to define presence of global cognitive changes, education-adjusted DRS total scores were used, namely: <126 for individuals with 0-4 years of education;<130 for those with 5-12 years; and <136 for subjects with an educational level of ≥13 years. These cut-off scores were based on the
RESULTS
Impaired scores on the DRS were observed in nine of the 20 patients evaluated, with a median of 123 points in the impaired group and 140 points in the group with no impairment. Comparisons between the clinical, cognitive and neuroimaging features of the overall sample and of the two subgroups of patients are depicted in Table 1. The group with impaired DRS total scores performed significantly worse than the other subgroup on the Initiation/Perseveration and Conceptualization subscales, with medians of 29 vs. 37 and 32 vs. 38, respectively. No statistical difference was found for the other subscales.
Patients with impaired total DRS scores had higher frequencies of hyperintensity signal in the globus pallidus and in the mesencephalon. This observation led to separate assessment of those individuals with hyperintensity signal changes in the globus pallidus and mesencephalon, and comparisons to patients with no signal change in these topographies. The patients with hyperintensity signal in these regions had higher scores on the MSS, displayed global cognitive deficits (MMSE and DRS), changes in executive function (verbal fluency -animals and FAS), as well as changes in the learning phase of the memory test (globus pallidus) ( Table 2).
The neuroimaging factors related to poor global cognitive performance were determined by calculating the OR for each feature. The presence of a hyperintensity signal in the globus pallidus (significant) and in the mesencephalon (tendency) was associated with worse global performance (Table 3). The former association remained after correcting for education, but significance was lost when correcting for neurological and MRI scores. The hyperintensity signal in the mesencephalon did not reach a statistically significant association.
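For readers who want to reproduce this kind of calculation, the Python sketch below shows the generic odds-ratio arithmetic with a Woolf (log-based) 95% confidence interval. The 2x2 counts are made up for illustration and do not reproduce the study's data.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b: with/without the MRI sign among impaired patients;
    # c, d: with/without the sign among unimpaired patients
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

print(odds_ratio_ci(6, 3, 1, 10))  # hypothetical counts only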
DISCUSSION
We observed that 45% of our sample had a low total score on the DRS (below the 10th percentile). This rate is slightly higher than that found by Medalia in 1988. 3 Compared to the group with normal performance, there was no difference in age or time since symptom onset. This suggests that, after at least one year of follow-up and continuous treatment, the occurrence of cognitive decline is not related to the duration of symptoms.
The use of D-Penicillamine also did not differ between the two groups. There are no previous studies assessing the role of specific treatments on cognitive symptoms in WD. The use of D-Penicillamine was associated with an improvement in MRI parameters compared to zinc, but without impact on motor changes. 20 In order to assess the impact of better treatment and disease duration on the occurrence of cognitive symptoms, a prospective study would be required, in which previously untreated patients with a baseline cognitive evaluation were selected and retested after a one-year follow-up.
The population with worse cognitive performance tended to have higher MSS scores. The intensity of motor symptoms is associated with the number of altered cognitive tests 7 and can also influence performance on some tests. 3 We used tests with lower motor performance demands and excluded patients with serious motor problems. The small number of patients in each group might have prevented statistical significance from being reached in this case.
The tendency for a greater number of men with cognitive impairment has not been reported in previous studies. Our finding is probably related to the fact that men were less educated than women in our sample (9.82 vs. 12.78 years, respectively; p=0.031). Indeed, education was the only sociodemographic difference between the two groups. We used education-adjusted cut-off scores for the DRS; nevertheless, the population with lower educational level showed greater impairment on the test. Low educational level is related to a greater risk for dementia, 21 probably due to a smaller cognitive reserve. 22 In the present study, no difference was observed in neuroimaging scale scores for the group with ≤11 years of education compared to those with >11 years of schooling on the full MRI scale (7.77 vs. 7.57, p=0.877), or for hypersignal (2.77 vs. 2.43, p=0.877) or hyperintensity signal plus atrophy (3.77 vs. 3.14, p=0.757). Furthermore, no correlation was detected between the number of years of education and scores on the full MRI scale (r = -0.303, p=0.194). These findings suggest that for the same number of neuroimaging changes, individuals with less education are more susceptible to worse cognitive performance.
Cognitive performance in the population with impairment on the DRS was lower, especially on the executive function tests. The subscales with lowest performance were Initiation/Perseveration and Conceptualization, both associated with executive functions. This observation is in agreement with a previous study. 23 Changes in response time, 23 inhibitory control, 7 selective attention 24 and working memory 23 have been reported in patients with WD presenting neurological symptoms. The absence of difference on the Stroop test might have occurred because this is a common deficit in patients with WD, 7 even in cases whose performance on other cognitive tests is normal. Memory deficits may be seen in patients with WD, more in encoding than in delayed recall, 3,6,7 as was seen in our sample. The normal performances on tests related to praxis and visuospatial abilities confirm that these are not domains affected in patients with WD. 3,23 The pattern of preferential involvement of the executive functions and memory (learning) suggests dysfunction in cortico-subcortical circuits of the frontal lobe. In 2002, Seniow et al. reported that patients with lesions restricted to the basal nuclei also showed cognitive changes. 6 A previous study seeking to correlate cognitive performance with MRI changes suggested that putamen involvement might be associated with a higher number of changes in cognitive tests. 8 In that study, only 12 patients were evaluated, three of whom had normal MRI, thus limiting the analysis to nine patients and preventing any statistical evaluation. For this reason, the results were only descriptive.
In our sample, all 20 patients presented changes on brain MRI. 7 The putamen was the topography most frequently affected (signal changes), but without difference in frequency between the groups with and without cognitive impairment. When we compared the seven patients with no putamen changes to the remaining 13, no statistical differences were found for any of the tests except the MSS, suggesting that changes in this topography impact motor more than cognitive performance.
In our study, we found that the topographic association between cognitive changes and T2 hyperintensity signal changes showed a trend for involvement of the mesencephalon and attained statistical significance for the globus pallidus. Patients with changes in these topographies had worse performance on executive function tests, such as fluency and learning memory, but also had more severe motor symptoms and more changes on MRI. Correcting for these factors decreased statistical significance, which persisted only after the adjustment for educational level.
Few studies have evaluated the role of the globus pallidus in executive functions or in global cognitive performance. One study of deep brain stimulation (DBS) in patients with Parkinson's disease indicated that this would be a safer surgical site than the subthalamic nucleus for preventing cognitive dysfunctions. 25 A study of patients with generalized dystonia observed cognitive worsening, mainly on digit span, errors on attention tests and worse recall on the Rey Auditory Verbal Learning Test, after placement of DBS probes in the internal globus pallidus. 26 More recently, pathological studies of patients with Huntington's disease who had a cognitive profile similar to that of our sample 7 found a linear correlation between global cognitive dysfunction and atrophy of the external globus pallidus, particularly its most ventral part. 27 This result, taken together with our findings, suggests that the globus pallidus plays more than merely motor roles in the basal nuclei and that changes in this topography are associated with worse cognitive performance.
The small number of patients in our group might have limited the assessment of our data and prevented attainment of statistical significance on some tests. The fact that we were unable to define which parts of the globus pallidus and the mesencephalon were more involved also prevented a better correlation between cognitive performance and neuroimaging changes.
Despite these limitations, our study confirms that global cognitive impairments occur in a considerable proportion of patients with WD. These changes are associated, as previously reported, with severity of neurological impairments, 7 low educational level, severity of hypersignal changes on MRI, and the occurrence of these MRI changes in the globus pallidus.
Author contribution. Norberto Anizio Ferreira Frota: design of the study, analysis of the data and intellectual contribution to the written manuscript. Egberto Reis Barbosa: design of the study, analysis of the data and intellectual contribution to the written manuscript. Paulo Caramelli: design of the study, analysis of the data and intellectual contribution to the written manuscript. Claudia Sellitto Porto: contribution to this manuscript in design of the study and analysis of the data. Leandro Tavares Lucatto: contribution to this manuscript in design of the study and analysis of the data. Carlos Alberto Buchpiguel: contribution to this manuscript in design of the study and analysis of the data. Carla Rachel Ono: contribution to this manuscript in design of the study and analysis of the data. Alexandre Aluizio Costa Machado: contribution to this manuscript in design of the study and analysis of the data. Support. This research was supported in part by CAPES (Coordination of Improvement of Higher Education Personnel). | 2018-04-03T01:07:08.987Z | 2016-10-01T00:00:00.000 | {
"year": 2016,
"sha1": "9c83425155a3691621a4899f4056cc5a72853bbe",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/dn/v10n4/1980-5764-dn-10-04-00320.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68baf41f6b8f27644c1c91d51e518de9db6b7b2d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
10269328 | pes2o/s2orc | v3-fos-license | What’s that you’re eating? Social comparison and eating behavior
People seem to have a basic drive to assess the correctness of their opinions, abilities, and emotions. Without absolute indicators of these qualities, people rely on a comparison of themselves with others. Social comparison theory can be applied to eating behavior. For example, restrained eaters presented with a standard slice of pizza ate more of a subsequent food if they thought that they had gotten a bigger slice of pizza than others (i.e., had broken their diets), whereas unrestrained eaters ate less. Social influences on eating such as modeling and impression formation also rely on comparison of one’s own eating to others. Comparing one’s food to others’ meals generally influences eating, affect, and satisfaction.
Background
Research shows that when we eat with someone else, we are likely to eat similarly to that person, modeling or mimicking what and how much the other person eats (see e.g., [1] for a review). This effect is well demonstrated and reasonably well-known, but it turns out that food eaten by other people has many effects on us. For example, research has demonstrated: the trust-inducing effects of eating the same food as a stranger with whom we share a meal [2]; that we eat more [3] and enjoy it more when eating with other people [4]; and that even infants associate people who go together in ways such as wearing the same clothes or interacting in positive ways with eating the same foods [5].
One interesting aspect of this work is the implication it suggests that people eating with others are extremely sensitive to what the others are eating. Obviously, the people in the studies just described, even infants, were noticing and reacting to the foods served not only to themselves, but to their eating companions. This social comparison of how others' meals compare to one's own food is what this paper will discuss.
Main text
The social psychology literature on what is called Social Comparison shows that people compare themselves to others on a wide variety of dimensions. We want to assess the correctness of our opinions, abilities, and emotions, and so we look to how others act in order to ascertain where we fall [6]. Sometimes we make comparisons with people who are in a better position than we are, "upward comparisons" (e.g., [7]), in order to determine how attainable these better positions are, although this often leaves us feeling badly when it seems we cannot achieve these higher levels. We also compare ourselves to those in less advantageous positions, "downward comparisons" (e.g., [8]), which allow us to feel good about ourselves or at least better. It turns out, we do this sort of comparing with others for our food consumption as well as these other factors.
We even use our food consumption, and our knowledge that everyone compares each other on this dimension, to affect how others perceive us. For example, Mori, Chaiken, and Pliner [9] demonstrated that when women want to make a positive impression on others, they eat less than when they are not concerned with impressing someone. Vartanian and his colleagues [10] reviewed a whole literature that examines the judgments people make about others based upon how much they eat. Moreover, it is not just amount of food that changes when someone wants to make a good impression. A woman eating with a man eats lower calorie foods than one eating with another woman, and the caloric intake of women in a group eating together is negatively correlated to the number of men in the group [11]. If we use food intake to influence how others view us, clearly we expect others to attend to what we eat as much as we concentrate on their consumption.
The large literature on social modeling of intake also reflects comparing one's own intake with someone else's. In the case of modeling of intake, as mentioned earlier, people tend to eat more with someone who eats a lot and less with someone who eats minimally (e.g., [1]). Again, this clearly reflects attentiveness to other people's intake. Social comparison is rarely mentioned in discussions of factors influencing eating behavior, but we contend that it is actually one of the most critical social influence factors. Social comparison actually seems to underlie both impression management using food and social modeling of intake levels, as they both rest on an assumption that we compare our food intake to each other when we eat together. The individual may even rely on social comparison when eating alone, by comparing one's own intake to a social norm of what is appropriate to eat (e.g., [12]). So other than straight social facilitation, the main social influences on eating actually stem from social comparisons.
Social comparison can occur both on amounts and types of food eaten as well as on dimensions related to food and eating such as body weight or physique. Eating behavior following both types of social comparisons may be affected. When young women are induced to compare themselves to fashion models (by showing them images from fashion magazines in a way that encourages comparison), they tend to feel worse about themselves and their eating is affected (though differently for chronic dieters/restrained eaters and unrestrained/nondieters) [13,14].
There is some, though limited, direct evidence of social comparison effects on eating that do not underlie further social effects such as impression management or modeling behavior, and are not related to other dimensions such as body image, but are simply comparisons of our own and other's eating that have effects on how we eat and how we feel about it. In a study we published several years ago [15], we showed that our female college student participants were attending closely to, and comparing, their own portion of food and that being offered to another participant. We served everyone in our experiment a standard slice of pizza from a popular local pizza chain. But what the different groups of students saw being given to a supposed second participant in a different room varied.
Bearing in mind what we've learned as parents through our own children's reactions to the sizes of portions served to them and their siblings (especially portions of dessert!), we led the study participants to believe that their portion might not be quite the same as what another participant received. Some students saw that the other person was getting a slice 1/3 smaller than they got, and some saw a slice 1/3 larger going to the other participant (and the control group didn't see another slice, just their own). In fact, of course, there was no other participant, everyone got the same sized slice of pizza, and our interest was only in how our students reacted to what they thought someone else was getting compared to what they themselves were given.
We also assessed which of the students were restrained eaters (that is, chronic dieters who worry about their eating and weight, but tend to break their diets under many provocations), and which were unrestrained eaters (people not concerned about dieting or weight), as these types of individuals generally differ in their reactions to what they (and perhaps others) eat [16,17]. Restrained eaters are prone to overeating when they believe that they have already "blown" their diets for the day [18,19], so if they see themselves as having overeaten on the pizza, they could be more likely to eat more food subsequently. Unrestrained eaters, on the other hand, regulate their intake more based on internal sensations and social norms, and thus would be likely to compensate and eat less after what they saw as a large meal [20].
In fact, this is exactly what the data showed, with a significant interaction between restraint and "pizza size" for amount of cookies eaten when we asked them to taste and rate cookies in a subsequent "perceptual taste test." There was no effect of thinking that they had received a smaller slice on the eating of either group, and all students rated their slice as "just the right size" when they saw it as smaller than the other person's. But when they thought they had eaten a "larger than normal/desirable" slice, restrained eaters went on to eat more cookies (as they tend to do when they believe that they have broken their diets), while unrestrained eaters "regulated their intake" and reduced their cookie consumption after what they thought was a large slice of pizza. Interestingly, unrestrained eaters were happiest when they got the smaller slice (which was when restrained eaters felt the worst), and restrained eaters were happier when they got the largest slice (probably because now they were being forced by us to eat a lot, and didn't have to feel guilty about eating a lot, which appears to be what restrained eaters actually would prefer to be able to do!).
What this study showed most clearly was that all participants, restrained and unrestrained alike, were comparing their food portion to what others were getting, and changing their behavior and how they felt about their food and their eating according to this comparison. Remember, they all got exactly the same-sized slice of pizza! In a recent as yet unpublished replication study from our laboratory (Polivy J, Herman CP, Teeft T. Eating behavior as a function of perceived portion inequity. In Preparation), we gave students the opportunity to, in effect, correct the inequity in the size of the portions we gave them. After they ate their slice of pizza (which half of them believed was smaller than someone else's slice, versus the control participants who did not have a comparison slice present), we made more pizza available. Male and female restrained eaters, and female unrestrained eaters who were led to believe that they had received the "smaller" slice compensated for having been "short-changed" by eating more of the "extra" pizza available to them than did their respective control groups, who did not feel they had gotten less initially.
Leone, Herman, & Pliner [21] made participants believe that they had just eaten either twice as much or half as much as a (non-existent) partner in the study ate. This manipulation was intended to induce either a positive social comparison (I ate less than she did and thus performed better on the eating task) or negative comparison (I ate more than she did and thus performed worse on the eating task) between the participant and the putative partner she was going to meet later in the experiment. The main dependent measure was how the participants felt about this supposed other student. The outcome was exactly as social comparison theory predicted, in that the participants felt worse when they ate more than the other person than if they "won the competition" by eating less. Moreover, they then distanced themselves from the other person by rating her as a less desirable work partner, less likeable, and less similar to themselves.
We wondered how far these comparisons extend. Is it only size of portion or amount eaten that others attend to, or are other aspects of the meal being compared? Looking further at the literature, a study by Just and Wansink [22] varied the price of a food and examined the effect of different pricing on amount eaten. Giving people a "deal" by giving them a half-price coupon for however much they ate actually led them to eat less pizza (2.95 slices) than those paying full price (4.09 slices), but they liked the cheaper pizza more. So changing the price for some changed their reaction to the food, making them like their bargain more, while eating less of it.
In a conceptual replication of our own pizza slice studies (Polivy J, Herman, CP, Garmenova Y. The effects of hedonic contrast and restraint status on food ratings and consumption. In Preparation), we gave all participants a vegetarian submarine sandwich, but told some that another participant was getting a different meal. We found that participants given the submarine sandwich ate very differently depending on what they believed another participant was getting to eat. Restrained eaters who believed that the other person was getting a more desirable meal (pizza) and they were getting a "worse" meal ate less of their sandwich than the other participants did. Both restrained and unrestrained eaters, however, ate more of the sandwich than the control group did when they thought the other person was given a less preferred meal (plain cheese sandwich), and thus they were getting the "better" meal. In addition, the restrained participants reported liking the submarine sandwich less when they thought another participant was eating pizza (and thus their sandwich was worse) than if they thought that they were both eating the same sandwich; however, it was only the unrestrained eaters who increased their ratings of the "better" meal when they thought the other person was getting a plain cheese sandwich.
Social comparisons during eating occasions thus have important effects on the eaters. Comparing our food to what others are getting to eat affects both how much we like our own meal and how much of it we eat. When we make downward comparisons of our own food versus what others are eating (i.e., the other person's food is worse than ours is), we seem to value our meal more highly, which promotes and increases both our ratings and our consumption of it, whereas upward comparisons have the reverse effect. Feeling worse about our meal makes it less desirable and thus we eat less of it. Moreover, if we are able to compare our food to that of others, and emulating them is both desirable and "attainable" (i.e., we want to be able to eat more and we actually can increase our portion size to match that of another eater), we will change our intake accordingly, especially if we are dieters who are highly concerned with how much we and those around us are eating.
Why are we so concerned with what others eat? Apparently the same processes that push us to make upward and downward comparisons of ourselves with others on so many other attributes apply equally well to eating. Our basic drive to evaluate ourselves by comparing ourselves to others, especially similar others like our peers (according to Festinger) extends to what and how much food we eat. Our food comparisons also affect how we feel about co-eaters who are getting more or better food, or reduced food amounts and quality, as Leone et al. showed. This makes sense on many levels (e.g., [17]). If we are eating less food than others, this can make us physically weaker and less fit to survive. If we are getting lower quality/less preferable food, this may also reflect or affect our social status. The king and his court eat more desirable foods than do the peasants. We don't want to think of ourselves as having lower social status than our peers, so we need to compare and make sure we are eating as well as they are. And if we are happier with our meals and enjoy them more, then our mood is elevated. It is sad but true that people like to feel they are getting a better deal than others or doing better than their peers, and eating is no exception to this.
Thus this social comparison of others' eating to our own is the basis for many of the phenomena that we think of as social influences on eating [23]. The reason we model another's intake is because we compare ourselves to the other, and want to maintain our status in the comparison. If our goal is to eat as much as we can without eating to excess (as Herman, Roth, & Polivy [24] have argued), then we can "win" in the social comparison game by modeling our intake on another eater and eating just a little less than s/he does. Indeed, the modeling literature seems to show just this: we eat more with an augmenting model than with no model or one who eats minimally, but generally intake is lower than the augmenting model ingests. If, on the other hand, the model is eating a minimal amount, people reduce their intake, but may eat a bit more than the model, perhaps to show that while they are appropriately limiting their intake, they "win" by getting a little more than the low-intake model. In either case, our modeling of the intake of other eaters reflects our social comparison with their behavior, and presumably serves a social comparison goal.
Similarly using food for impression management is also based on social comparison. We assume others are watching what we eat, so that we can impress the other observing our eating with our femininity by eating minimally or masculinity by eating more heartily (and, of course, avoiding quiche and other "lady foods"). Thus, social comparison is the requisite basis for at least these two forms of social influence on eating.
Conclusion
To summarize our thesis, we know from the vast social comparison literature that such comparisons are important for our identity and our evaluations of ourselves. It appears from the present research that we are also very careful to compare our own plates to what others are eating, suggesting that this also provides valuable information. By watching others eat, we learn what and how much to eat, how much we like what we are eating or would prefer something else, how we feel about our own consumption compared to others' intake, and how we feel about other people who eat similarly or differently from us. Understanding these effects could be helpful in designing treatments for overeating/obesity. For example, having obese people eat with others can be helpful if the others are eating smaller portions. On the other hand, however, those inclined to overeat may be encouraged to do so when eating with others who consume large amounts. Treatment might be aimed at making potential overeaters aware of these influences so that they can plan their own eating without being swayed by others who are eating larger amounts. Eating disorder patients, who are exquisitely sensitive to what those around them are eating, seem to be motivated to eat less than anyone else, and thus it might not be helpful to have them eating with other patients with whom they can compare intake.
As comparing ourselves to others can change not only how we feel but how much we eat, clearly social comparison with respect to eating is worth further study. | 2017-06-27T20:23:59.377Z | 2017-04-27T00:00:00.000 | {
"year": 2017,
"sha1": "83bf060e79c68fceda6a3a91298c78ea9868dd81",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40337-017-0148-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83bf060e79c68fceda6a3a91298c78ea9868dd81",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
242563785 | pes2o/s2orc | v3-fos-license | History
The Department of History offers students the opportunity to work intensively in the classroom and with individual faculty to discover the richness and complexity of history. Undergraduates begin with general courses, but progress quickly to courses that explore topics in depth and provide experience in researching, analyzing, and writing about the past. Graduate students work independently and with faculty advisors on reading and research in their fields of interest, while departmental seminars bring them together to discuss their research, forging a collegial intellectual culture. The department emphasizes European history, United States history, and the histories of Africa, Latin America, and China. Faculty and students participate in a variety of interdisciplinary programs, including Africana Studies, East Asian Studies, Latin American Studies, Judaic Studies, Museums and Society, the Program for the Study of Women, Gender & Sexuality, and International Studies.
Graduate Programs
The graduate program prepares professionally motivated students for careers as research scholars and college and university teachers. Hence it is designed for candidates who want to proceed directly to the Ph.D. degree, who have developed historical interests, and who are prepared to work independently. Within the areas of European history, American history, and the histories of Africa, Latin America, and China, the department emphasizes social/economic and intellectual/cultural history. Although diplomatic and political history are not emphasized, attention is given to the social, economic, and cultural bases of politics.
The program is organized around seminars rather than courses, credits, or grades. The Seminar (AS.100.781-AS.100.782) and satellite seminars in European, American, and Comparative World History bring together students, faculty, and invited scholars from outside the university to discuss their research work. These departmental seminars create a lively intellectual community in which graduate students quickly become contributing members. The combination of flexibility, independence, and scholarly collegiality offered by the Hopkins program gives it a distinctive character.
Students select four fields (one major and three minor) and make their own arrangements with professors for a study program leading to comprehensive examinations at the end of the second year. Those arrangements may include taking a seminar in the field. One minor field, and exceptionally two, may be taken outside the Department of History. Students have maximum flexibility in the construction of individual plans of study, as well as the opportunity to work closely with several professors.
Admission and Financial Aid
In judging applications, the department puts particularly heavy emphasis on the quality of the student's historical interests and prior research experience. Each applicant must submit a sample of written work. Ordinarily no candidate for admission is accepted whose record does not indicate an ability to read at least one foreign language.
The department accepts only those students who plan to work in the specific fields of the faculty, and each student is admitted only with the approval of a particular professor. Applicants should indicate the proposed field of specialization at the time of application. With the concurrence of a new faculty advisor, students may, of course, later change their major professor.
The department normally provides full fellowship support for all admitted students including both tuition and a stipend. Students are encouraged to apply for external support if eligible.
Programs
• History, Bachelor of Arts (https://e-catalogue.jhu.edu/arts-sciences/full-time-residential-programs/degree-programs/history/history-bachelor-arts/)
• History, Bachelor of Arts/Master of Arts Five-Year Barcelona Program (https://e-catalogue.jhu.edu/arts-sciences/full-time-residential-programs/degree-programs/history/history-bachelor-arts-master-five-year-barcelona-program/)
• History, Bachelor of Arts/Master of Arts Four-Year Program (https://e-catalogue.jhu.edu/arts-sciences/full-time-residential-programs/degree-programs/history/history-bachelor-arts-master-four-year-program/)
• History, Minor (https://e-catalogue.jhu.edu/arts-sciences/full-time-residential-programs/degree-programs/history/history-minor/)
• History, PhD (https://e-catalogue.jhu.edu/arts-sciences/full-time-residential-programs/degree-programs/history/history-phd/)

For current course information and registration go to https://sis.jhu.edu/classes/

A first-of-its-kind seminar hosted by the Program in Racism, Immigration, and Citizenship, this course explores the practice of composition for professional writers. It considers the "light" and "dark" sides of clear, direct scholarly writing and intentional academic obfuscation, respectively. Attendees will also learn strategies and potential hazards that accompany the written description of power in the Humanities and Social Sciences.

Drawing on key works in classic and contemporary social theory of religion and secularity, as well as historical, ethnographic, and sociological monographs, this course investigates some scholars' answers to the question of why we might want to take "religion in modernity" as an object of study (or not); what kinds of roles and importance religion (or various institutions, impulses, practices, and ideas connected to major faith traditions) has/have arguably enjoyed in an arguably global modernity often imagined as intrinsically secular; whether and how it matters that the category of religion itself may be a modern invention intertwined with specifically Christian-European and European imperial and colonial projects; whether and how we should take "secularism" or "secularity" as our object of study no less than or more than religion; what special kinds of research agendas and assumptions the empirical study of 'religion' and its workings and significance in modern political and cultural life might demand; what sorts of scholarly value it might add; and how the answers to those questions change when we look to a global present which is sometimes framed as post-secular. A more theoretically and comparatively oriented first part of the course will give way to focused attention on historical,

As the use of military force to resolve disputes between nations becomes less plausible in most regions of the world, the struggle for influence intensifies. Among the results has been the rise to global fame of the concept of 'Soft Power', in theory a means to turn a country's attributes and achievements into a lever for gaining advantage in international competitions of all sorts. Google lists 176m references to the term (11/1/13); China has invested in it heavily and consciously. Even nations such as Russia and Iran are using soft power language and tools. During the Syrian crisis, the term was everywhere.
But the course will suggest that the land which gave birth to the term - the US - is still the one which enjoys the greatest advantages in this contest, since the most significant form of soft power leverage over time is the one which most successfully proposes models of modernity. No matter how much weaker the appeal of America's military, its banks, its politics compared to their heyday, America's products, icons, technologies, universities, media industries, personalities etc. can still produce forms of presence and innovation which the rest of the world must reckon with. The course offers an historical perspective on this dynamic. Specifically, it focuses on the great variety of models of modernity the US has produced over time and still can, and how the world has come to terms with them (including militant rejection). The course in its early stages is European in focus. Soon it opens out to other regions of the globe, especially Asia. So often the imperative of innovation that the US brings has encountered waves of anxiety about relations between the state and its citizens, between national communities and the market, between generations, genders, ethnic groups and religions. Efforts to understand 'soft power' and the outcomes of the world's encounter with the American version: these are the central issues of the course.
SA.200.734. Kissinger Seminar: Contemporary Issues in American Foreign Policy and Grand Strategy. 4 Credits.
What is America's purpose in international affairs? What are the major challenges in U.S. foreign policy? What is the future of American power in a changing global system? This course examines these and other critical issues in U.S. foreign policy and global strategy. We will study the opportunities and dilemmas the United States confronts in dealing with terrorism and the Islamic State, great-power competition vis-a-vis Russia and China, the threat of nuclear proliferation and "rogue states," and other issues from international economics to transnational threats. We will consider whether America can maintain its international primacy, and what alternative strategies it might pursue in the future.
SA.840.706. Middle Power Diplomacy. 4 Credits.
International relations scholarship pays close attention to the Great Powers and to concern over failed states. With the formation of the G20, there is a multilateral forum where Great Powers and the Rising Powers of Brazil, Russia, India, and China can shape the global agenda. Yet in every era and every stable international order there is an important role for Middle Powers - countries whose capacity to foster or disrupt order leads them to "punch above their weight" in international relations.
AS.362.111. Introduction to African American Studies. 3 Credits.
This is the gateway class to the study of African American life, culture, politics and history in the United States and the Caribbean. African American Studies is a multi-disciplinary field of study that includes history, social sciences, literature and the arts. This academic discipline is often taught under parallel terms emphasizing related geographies and identifying concepts: Black Studies, Afro-American Studies, Africana Studies, Pan-African Studies and African Diaspora Studies. Unlike every other modern academic discipline in the college, African American Studies was founded because of a social and political revolution. The class has two purposes, operating in tandem: (1) provide students with a generous historical, political and cultural overview of the lives of African descendants in the western hemisphere, but principally in North America; (2) explicitly address the problem of regularized systemic inequality in American society as a response to and an attempt to dominate a core nugget of identity difference that is the operative mechanism in black protest, resistance and revolt. This is a difference that includes, but is not limited by or reducible to morphology, culture, history, and ontology. We accept as an operating principle that an inquiry into an enslaved group of nonwestern human beings marked by difference cannot rely solely on the western episteme for its excavation. Thus, we will examine a body of diverse evidence during the semester, works of literature, history, sociology, political science, music and film. The course requirements include essays, examinations, and presentations. Area: Humanities, Social and Behavioral Sciences
AS.362.112. Introduction to Africana Studies. 3 Credits.
This course introduces students to the field of Africana Studies. It focuses on the historical experience, intellectual ideas, theories, and cultural production of African-descended people. We will consider how people of the black diaspora remember and encounter Africa. We will explore, too, how such people have lived, spoken, written, and produced art about colonialism and enslavement, gender and mobility, violence and pleasure. This course will be thematically organized and invite you to center your own stories about black people within your understanding of the modern world and its making.

Classics Research Lab: The Baltimore Casts Project will continue work begun in Fall 2020 researching a remarkable collection of plaster casts of classical Greek and Roman sculptures, created ca. 1879 for the Peabody Institute's art gallery. Such cast collections were a highly valued cultural resource in Europe and North America, produced for major museums, academic institutions and wealthy individuals. Because of the technical process of the cast formation, based directly upon the ancient sculptural surface, cast collections brought contact with the actual ancient artifacts into temporally and spatially distant contexts - including the burgeoning urban space of 19th century Baltimore. In Spring 2021, the Lab will continue archival/field research on the cast collection's context, content, formation, and usage by the people of Baltimore, and its eventual disbanding. We will also begin construction of the virtual exhibition that reassembles the collection's member objects, charting their biographies and current locations. A major dimension of the lab's research is contextualizing the casts in Baltimore of the mid 19th to mid-20th centuries, considering different forms of access and restriction to ancient culture that were forming throughout the city and its diverse population, including who truly had access to the cast collection in Mount Vernon, and in which capacities. Area: Humanities
AS.040.630. Classics Research Lab: The Baltimore Casts Project.
Classics Research Lab: The Baltimore Casts Project will continue work begun in Fall 2020 researching a remarkable collection of plaster casts of classical Greek and Roman sculptures, created ca. 1879 for the Peabody Institute's art gallery. Such cast collections were a highly valued cultural resource in Europe and North America, produced for major museums, academic institutions and wealthy individuals. Because of the technical process of the cast formation, based directly upon the ancient sculptural surface, cast collections brought contact with the actual ancient artifacts into temporally and spatially distant contexts - including the burgeoning urban space of 19th century Baltimore. In Spring 2021, the Lab will continue archival/field research on the cast collection's context, content, formation, and usage by the people of Baltimore, and its eventual disbanding. We will also begin construction of the virtual exhibition that reassembles the collection's member objects, charting their biographies and current locations. A major dimension of the lab's research is contextualizing the casts in Baltimore of the mid 19th to mid-20th centuries, considering different forms of access and restriction to ancient culture that were forming throughout the city and its diverse population, including who truly had access to the cast collection in Mount Vernon, and in which capacities.

In this course, students will engage with select topics in Korean history from premodern and modern times and examine how the past has been represented through various forms of film and literature. This will be combined with readings of academic articles to allow students to gauge the distance between scholarship and cultural expressions of history. Through this, students will be introduced to the highly contested and often polarizing nature of Korean history and the competition surrounding historical memory.

This course explores the interlocking political and historical dimensions of personal experience, an account of ourselves and our relations ("the quest for competitive advantage between groups, individuals, or societies") that points us in the direction of what "is 'common' to the whole community." What does it mean for people who are not the chief actors or theoreticians of political movements to construe the record of their experience as an act of political intervention, an aid in our total understanding of the structure of popular belief and behavior? Furthermore, what happens when we attempt to historicize and critique these recorded experiences? The class asks its members to focus closely on an episode of autobiographical experience as both an historical fossil and a tangible politicized moment, particularly the places where race, gender and economic power are visible. By producing a "critical discourse of everyday life - by turning residual, untheorized everyday experience into communicable experience… one can reframe ostensibly private and individual experiences in terms of a collective struggle." To help our investigation we will read and analyze closely memoirs, many of them from the African American experience. We function partly as a writers' workshop and partly as a critical review. The final goal of the seminar is a polished 20-25 page autobiographical essay. Area: Humanities Writing Intensive
AS.060.633. Biography and African American Subjects from the 19th and 20th Centuries.
This course will read through contemporary biographical treatments of prominent 19th and 20th century African American writers to explore the prominent ideological predispositions as well as the structure of archival sourcing in the creation of life-writing on black subjects. Students will make research trips to the Library of Congress, the University of Delaware, Morgan State University and other local archives for instruction in research methodology and the collection of primary source materials. Student final projects will use primary archival sources to intervene in debates about the interpretation of historical subjects and historical events. Area: Humanities Writing Intensive
AS.060.644. Oceanic Studies & the Black Diaspora.
In this course, we take up Hester Blum's blunt observation that "the sea is not a metaphor" in order to consider the visions and hopes black writers have associated with the sea, as well as the despair and trauma transatlantic slavery has left "in the wake," to quote Christina Sharpe. Area: Humanities Writing Intensive
SA.710.707. Politics of Protest in Europe and Eurasia. 4 Credits.
This class provides students with an in-depth exploration of the motivations behind, strategies of, and societal changes produced by various instances of collective mobilization across Europe. Some of the main questions we seek to answer throughout this course are: Along what lines of grievance do social movements form? Why do people choose to protest collectively given threats of reprisal? What explains the rise in support for populist outreaches by far-right parties in Europe's most democratic countries? By examining a wide variety of movements, from labor mobilizations such as Poland's Solidarity to ethnic nationalist campaigns by groups such as the Basques and the Kurds, we use comparative analysis to identify points of convergence and divergence across cases. We explore how mobilization strategies spill across borders in "waves" of protest, such as those prefacing the collapse of the Soviet Union. We also investigate how developments in media and technology affect protest outcomes -and when they don't, such as the "Twitter Revolution" that failed in Moldova. Students will gain both empirical insights into particular cases across Europe as well as the conceptual tools used by scholars of comparative politics to analyze the puzzling but highly topical questions above.
SA.710.737. Writing for Policy: A workshop on the journal, Survival: Global Politics and Strategy'. 4 Credits.
This seminar/workshop might also be titled "Writing and Editing for Policy Debate." Following short lectures and class discussion of fiction and non-fiction models for good writing, students will participate, in real time, in a 'shadow editorial process' putting together two issues of the bi-monthly journal, Survival: Global Politics and Strategy. More than half of class time will therefore be organized as editorial meetings where students, under the direction of the instructor (the Editor of Survival), will participate in all aspects of the process: commissioning articles, evaluating submissions, editing accepted copy, writing essays with an eye to publication, and laying out the issue. In addition, each student will meet with the instructor in 4 or 5 half-hour tutorial sessions to go over the student's written work.
SA.710.763. Movement Towards European Unity. 4 Credits.
This course represents an introduction to the historical development of the European Community and the European Union. That said, the perspective I adopt is grounded more solidly in political science than in history. My argument is that European integration can be explained as a function of three types of variables: ideas, events, and 'unintended consequences'. The analytic claim is that European integration started and is perpetuated to shore up the weaknesses of individual nation-states and of the national state system. In other words, the course is grounded on a set of very specific (and very controversial) arguments and interpretations. These must be examined carefully, critically, comprehensively. They must be challenged. And, if necessary, they must be refuted. The material surveyed in the course should help you do all those things and more.

Johns Hopkins invented the modern hospital along with modern medical education. This seminar will explore the history of the hospital from its monastic origins to its current form, with particular attention to how hospital design has reflected and reinforced ways of thinking about health, disease and medical treatment. We will also consider specialized hospitals and clinics, for the mentally ill, for particular diseases, for women and children, among other topics. Area: Humanities, Social and Behavioral Sciences
AS.001.102. FYS: Japanese Robots. 3 Credits.
Japan is a world leader in biomimetic robotics. Japanese society enthusiastically embraces robotic nurses, robotic guides, robotic waiters, robotic pets, and even robotic girlfriends. What are the origins of the Japanese love of robots? What role did robotics engineers play in creating the image of loveable robots? What societal fears do Japanese robots assuage and what hopes do they foster? In the course of the semester, students will learn about the evolution of Japanese robotics, and explore the implications of this evolution for humans' relationship with robots. While learning about Japanese robots, students will acquire skills necessary for college-level education, including how to write an email to a professor, how to organize and manage digital tools, how to navigate information resources, and how to develop, complete, and present research projects. This course will equip students with skills essential to their success in college and beyond. Area: Humanities, Social and Behavioral Sciences
AS.001.103. FYS: When Worlds Collide -Science Goes Global. 3 Credits.
In this First-Year Seminar, we will explore instances of contact between different world cultures and pre-modern and modern science (16th-20th c.). The premise of the course is the understanding that in addition to the cultural, religious and political negotiations that took place during cross-cultural encounters, science also underwent a similar process. We understand science expansively, as the study of nature and the production of knowledge about it embedded in a particular cultural context. The historical episodes we will discuss are selections of instances where agents of the West-missionaries, explorers, businessmen, colonists, scientists-established prolonged contact with non-western cultures and engaged in conversations about their worldviews. Some cases considered include Jesuits in the Chinese imperial court, Spanish missionaries among the Maya, and English explorers in the Pacific islands. Area: Humanities, Natural Sciences
SA.630.740. Risk in International Politics and Economics. 4 Credits.
This is a course on social science research methods as they apply to decision-making under conditions of uncertainty. In other words, it looks at how the skills of a social scientist can be put to use in the 'real world'. The course begins by looking at how decision makers anticipate future events, it explores what evidence they consider and what they ignore, and it looks at the standard models they apply in projecting the future based on the present. The case studies applied in this early part of the course focus on seemingly straightforward economic and financial questions. The problem is that most of the predictions that were made in these areas ended in disaster. Hence the course turns to explore the bias that is built into estimates of the future to understand whether the problem lies in the way the world works or in how we try to understand it. It introduces students to a conceptual vocabulary based on systems theory to make it easier to build more complex relationships into the analysis. And it explores the unintended consequences of policy decisions. Here the case studies move from economics to politics and from crisis to stagnation. This does not offer much of an improvement. Therefore the course makes a third analytic turn to bring the dynamics of human interaction more firmly into focus. It looks at negotiation, communication, and culture as possible sources of error or misunderstanding. The case studies focus on conflict, terrorism, and popular protest. By the end of the course students have a better grasp of where their predictions are likely to falter. They will also understand why such predictions must nevertheless be made. Risk in the international political economy derives from decision-making under conditions of uncertainty. The problem is that uncertainty is inevitable, but decisions must be made regardless of this.
AS.010.208. Leonardo da Vinci: The Renaissance Workshop in the Formation of Scientific Knowledge. 3 Credits.
How does a notary's son trained as a painter come to claim expertise in the construction of machines and acquire knowledge of the principles of optics, human anatomy, the flight of birds, the dynamics of air and water?
The course will focus critically on the myth of Leonardo's singularity and explore his achievements with regard to the artisanal culture of his time, as well as the problems of authority in the recognition of artisanal knowledge as scientific discovery. Area: Humanities
AS.010.212. Mirror Mirror: Reflections in Art from Van Eyck to Velázquez. 3 Credits.
Explores the different ways Early Modern painters and printmakers incorporated mirrors and optical reflections into their works for the sake of illusion and metaphor, deception and desire, reflexivity and truth-telling. Connecting sense perception and ethical knowledge, embedded mirror images often made claims about the nature of the self, the powers of art, and the superiority of painting in particular.
AS.010.252. Sculpture and Ideology in the Middle Ages. 3 Credits.
This lecture course will offer a selective, thematic exploration of the art of sculpture as practiced in the Middle Ages, from the fall of the Roman empire in the 4th century CE to height of the Gothic era. The primary concern will be to analyze sculpture in all of its forms -monumental free-standing, architectural, liturgical, and commemorative -as the primary medium utilized by patrons, both private and corporate, to display political messages to an ever growing public. Area: Humanities
AS.010.325. Blood, Gold, and Souls: The Arts of the Spanish Empire. 3 Credits.
From the sixteenth through the eighteenth centuries, visual forms and practices linked such far-flung places as Mexico City and Naples, Manila and Lima, Cuzco and Antwerp, Quito and Madrid: all cities in the Spanish Empire. This course is conceived as a voyage, moving city by city to explore objects that connected Spain's vast holdings. We will investigate how the Spanish Crown and the Catholic Church used visual strategies to consolidate political power and instill religious faith across the world; and, alternatively, we will consider how local conditions, concerns, and resistance reshaped those efforts. This course surveys a diverse range of artistic production: religious paintings and sculptures; maps used for imperial surveillance; luxury goods crafted from shimmering feathers, ceramics, ivory, and precious metals; urban design and architecture from the ports of Europe to the highland outposts of the Andes; ephemeral cityscapes for civic performances. In examining such materials, students will be introduced to the art historical methods and theoretical concerns used to study a wide diversity of objects within an imperial frame. Area: Humanities
AS.010.329. Building an Empire: Architecture of the Ottoman Capitals, c. 1300-1600. 3 Credits.
Centered on modern-day Turkey and encompassing vast territories in Asia, Africa, and Europe, the Ottoman Empire (1299-1923) was the longest-lived and among the most powerful Islamic states in history, with an artistic tradition to match. This course explores the functional and symbolic role that architecture played during the empire's formative centuries, when three successive capitals - Bursa, Edirne, and Istanbul - served to visualize the sultans' growing claims to universal authority. With reference to mosques, palaces, tombs, and other categories of architecture, the course will examine the buildings in their artistic, social, and political contexts. Themes to be addressed include patronage and audience, architectural practice and the building trade, ceremonial and ritual, topography and urban planning, and the relationship of Ottoman architecture to other traditions. Area: Humanities
AS.010.330. Art of the Caliphates: Visual Culture and Competition in the Medieval Islamic World. 3 Credits.
Despite its modern-day association with a fringe extremist movement, the term "caliphate" was traditionally used to describe the Muslim world at large, the political and spiritual ruler of which bore the title of caliph. The original Islamic caliphate was established in the seventh century as a vast empire centered on the Middle East and extending deep into Africa, Asia, and Europe. It soon broke apart into a series of competing powers, until in the tenth century, three rival dynasties-the Baghdad-based Abbasids, the Spanish Umayyads, and the Fatimids of North Africa-each claimed to be the rightful caliphate. This course will examine how these fascinating political developments and conflicts played out in the realm of art and architecture between the seventh and thirteenth centuries. As well as palaces, mosques, and commemorative buildings, the course will look at media ranging from ceramics and metalwork to textiles and illustrated manuscripts, with many of the artifacts being viewed firsthand in local museum collections. These works will be considered in relation to such themes as patronage, audience, ceremony, and meaning. Particular attention will be paid to how the various caliphates-both in emulation of and competition with one another-used visual culture as a powerful tool to assert their legitimacy. Area: Humanities
AS.010.338. Art and the Harem: Women's Spaces, Patronage, and (Self-)Representation in Islamic Empires. 3 Credits.
Long characterized in the Western imagination as exotic realms of fantasy, harems in Islamic tradition served as private domestic quarters for the women of elite households. This course explores the harem - as an institution, a physical space, and a community of women - from various art-historical perspectives, considering such topics as the harem's architecture, the agency of its inhabitants as patrons and collectors, the mediating role of eunuchs in the harem's visual and material culture, and the ability of harem women to make their mark through public artistic commissions. Our case studies will address a range of Islamic geographical and chronological contexts, though we will focus on the empires of the early modern period and, above all, the famous harem of the Ottoman sultans at the Topkapi Palace in Istanbul.
In challenging popular misconceptions, the course will also look at the wealth of exoticizing imagery that the harem inspired in Western art, which we will consider through Orientalist paintings at the Walters Art Museum and illustrated rare books at Hopkins itself. Area: Humanities
AS.010.403. Art and Science in the Middle Ages. 3 Credits.
This course investigates the intersections of art and science from the Carolingian period through the fourteenth century and the historical role images played in the pursuit of epistemic truths. Science - from the Latin scientia, or knowledge - in the Middle Ages included a broad range of intellectual pursuits into both the supernatural and natural worlds, and scholars have classified these pursuits in various ways (e.g. experimental or theoretical science, practical science, magic, and natural philosophy).
A particular focus of this seminar will be placed on the assimilation of Greek and Islamic scientific advances in cartography, cosmology, and optical theory into the Latin theological tradition.

This seminar investigates the complex relationships between image and relic in the later Middle Ages. While the relic was usually hidden from view in lavishly decorated containers made before 1200, visual access to the relic was key for the conception of later medieval and early modern reliquaries. We will address aesthetic and material aspects of reliquaries, with a focus on the translucent qualities of enamel, rock crystal, and reversed glass. Another emphasis is set on late medieval paintings with relic depositories, either in the frame or hidden in the wooden panel itself. We will discuss formal qualities of reliquaries, techniques of their making, iconography, and questions about their authenticity. These issues will also be investigated by raising larger theoretical and historiographic questions.
SA.400.746. Health Systems and Policy in Developing Countries. 4 Credits.
A good health system delivers quality services to all people, when and where they need them. Components of a strong health system include a robust financing mechanism; a well-trained and adequately paid workforce; reliable information on which to base decisions and policies; and well-maintained facilities and logistics to deliver quality medicines and technologies. However, many countries in the developing world have weak health systems, badly in need of strengthening and reform. This course offers a practical introduction to major issues, policies and practices related to health systems and health policy in a developing country context. The course combines two perspectives. First, students will apply principles related to health systems strengthening and reform to develop a framework to strengthen and rebuild health systems in fragile states. Second, students will learn about and apply key insights from economics to understand health behaviors and health care markets, and to inform the design of health policy in low and middle-income countries. Students are expected to be comfortable reading articles that evaluate health system interventions as well as applied economics papers, and to think through the logic and implications of economic theory (without complicated statistics or math). Substantive preparation and class participation are expected.
SA.400.807. Introduction to Public Health for Development Practitioners. 4 Credits.
This course offers a practical introduction to major issues, policies and practices of public health, and examines the role of health in development. The course teaches critical public health skills such as epidemiology, burden of disease studies, rapid assessments and outbreak investigations, enabling students to understand the basic tools of public health and to analyze strengths and weaknesses in public health studies. Furthermore, this course examines major public health topics of concern to development, including HIV/AIDS, malaria, neglected tropical diseases, maternal and child health, water and sanitation, and emerging diseases. This training will enable development practitioners to act on the ground and in development institutions to improve global health. This course is designed as both a stand-alone primer on public health for those working in development, and as a foundation course for more advanced study of global health issues.
SA.610.700. International Political Economy of Emerging Markets. 4 Credits.
This course examines the relationship between politics and international economics in developing countries, with a focus on the emerging market economies. Throughout the course, we critically evaluate different political science theories of foreign economic policymaking in emerging markets. The course begins with an overview of theories of international political economy. The second section of the course focuses on developing countries' embrace of economic globalization over the past thirty years. We examine different political reasons for why emerging market and developing countries have liberalized foreign trade, removed barriers to foreign investment, and reduced the state's role in the domestic economy since the 1980s. The final section of the course explores how globalization has impacted emerging market economies, and considers how governments in these countries have dealt with the new challenges that have emerged in this era of economic globalization.
SA.610.702. Political Economy in the Shadow of Conflict. 4 Credits.
This is a research seminar organized around key ongoing debates in international relations, such as the role of institutions, audience costs, leaders, bargaining, reputation, interdependence, and ideas. The course will emphasize critical engagement of the empirical evidence presented in favor of theoretical arguments, encouraging students to devise rigorous new ways to test their observable implications. Can bargaining theory help us understand the outbreak, as well as the termination of, international conflict? Has growing economic integration among states changed the nature of military conflict? Are certain economic interest groups more prone to support military expansion than others? Do democratic institutions enable states to better signal their resolve to adversaries? By the end of the course, students will be able to recognize, engage, and develop their own taste for theoretical arguments, as well as present the most compelling empirical evidence for or against them.
SA.610.735. Risk in International Politics and Economics. 4 Credits.
The purpose of this course is to help students work through the challenge of understanding risk in international political and economic relations. That challenge is both methodological and substantive. Students will have to tackle 'how' we understand and 'what' we understand at the same time. Along the way, they will have to consider those things we cannot understand or anticipate with any meaningful degree of precision. They will have to deal with the 'uncertainty' that lies beyond the boundaries of 'risk'. The subject matter is open-ended. Virtually every aspect of politics or economics can be cast in terms of risk and uncertainty, no matter whether we look to the future or reflect upon the past. Therefore, the course builds on a thematically structured, case study approach. Each week introduces a new principle that is useful in understanding risk; each week provides cases that illustrate the usefulness of that new principle. Moreover, as our understanding of risk becomes more sophisticated, the cases become more complex. The ultimate goal is to be able to analyze matters of risk and uncertainty as they manifest around decisions taken by leaders in government or business in the real world. Prerequisite(s): Students may not register for this class if they have already received credit for SA.600.735.
SA.610.770. Comparative Political Economy. 4 Credits.
This course is intended to bridge the gap between economics and politics as taught at SAIS. First examines some of the main "currents" in the literature and familiarizes the student with different variants of political economy. Presents an overview of the classical liberal, Marxian/Polanyian and Keynesian understandings of the economy, each of which serves as both a primer to political economy and as an introduction to the main contemporary approaches. Then engages with what many scholars argue is the major approach in comparative political economy: rational choice theory. By contrast, the next section looks beyond the rationalist tradition to the nowadays somewhat neglected historical tradition. Building on the historical tradition, next examines institutionalist approaches, explaining institutional change and stability over time through path dependence and earlier arrangements. Concludes with more social constructivist understandings of political economy, emphasizing the powerful role of economic ideas in the evolution of economic policymaking over time.
AS.194.201. Jews, Muslims, and Christians in the Medieval World. 3 Credits.
The three most widespread monotheisms have much more in common than is generally portrayed: a common founding figure, a partly shared succession of prophets, closely comparable ethical concerns and religious practices, a history of coexistence and of cultural, religious, social and economic interaction. This course will focus on a number of key texts and historical events that have shaped the relationships between Jews, Muslims, and Christians during the Middle Ages and contributed to their reciprocal construction of the image of the "other." The geographical center of the course will be the Mediterranean and the Near and Middle East, a true cradle of civilizations, religions, and exchange. Area: Humanities, Social and Behavioral Sciences
AS.194.202. Never Forget: Muslims, Islamophobia, and Dissent after 9/11. 3 Credits.
In partnership with the social justice organization Justice for Muslims Collective, this community-engaged course and oral history project will explore how diverse Muslim communities navigated and contested belonging and political and cultural agency amidst state-sponsored violence and national debates on race, gender, citizenship and national security after 9/11 and during the ongoing War on Terror. Through history, ethnography, first-person narratives, film, fiction, and online resources, students will learn about the impact of 9/11 on American Muslim communities. This includes cultural and political resistance to imperialism, racism, and Islamophobia as well as to intersectional inequities within Muslim communities that were intensified in the context of Islamophobia. Students will learn about community activism and organizing from JMC, and complete a participatory action research project with the organization. This project is an oral history archive that will address gaps in the documentation of movement histories when it comes to early organizing against War on Terror policies by Muslim communities and communities racialized or perceived as Muslim.
Students will be trained to record stories of resistance among leaders who organized and responded at the local and national levels in the Greater Washington region, to support the building of an archive that will shape a wide variety of future organizing and advocacy efforts.
AS.194.230. African-Americans and the Development of Islam in America. 3 Credits.
Muslims have been a part of the American fabric since its inception. A key thread in that fabric has been the experiences of enslaved Africans and their descendants, some of whom were Muslims, and who not only added to the dynamism of the American environment, but eventually helped shape American culture, religion, and politics. The history of Islam in America is intertwined with the creation and evolution of African American identity. Contemporary Islam in America cannot be understood without this framing. This course will provide a historical lens for understanding Islam, not as an external faith to the country, but as an internal development of American religion. This course will explicate the history of early Islamic movements in the United States and the subsequent experiences of African-Americans who converted to Islam during the first half of the twentieth century. We will cover the spiritual growth of African American Muslims, their institutional presence, and their enduring impact on American culture writ large and African-American religion and culture more specifically. Area: Humanities, Social and Behavioral Sciences
SA.810.705. Public Opinion as a Driver for Policymakers: Analytical Tools and Illustrative Case Studies. 4 Credits.
A key driver in any democracy, public opinion determines who will govern and which policies will be likely to succeed. Contrary to general beliefs that public opinion is highly ephemeral, both practice and scientific evidence show that public opinion is a stable, measurable, and ultimately predictable phenomenon. To explore the issue both conceptually and in practice, the course will first offer a review and discussion of relevant literature on the subject and then analyze concrete case studies exploring the uses and misuses of public opinion and polling by political and policy stakeholders. Likely case studies will include primarily Latin American examples, such as the 2002 Lula election, but also extra-regional cases, such as the 2008 Obama election and the Arab Spring, among others. The final objective is to develop a critical eye when analyzing public policy and political problems.
SA.860.781. States, Revolutionaries & Terrorism. 4 Credits.
Looks at the evolution of terrorism as a tool of political expression and conquest of power. Surveys doctrines and actions of anarchists, Russian Nihilists, Social Revolutionaries, as well as nationalists and fascist movements. Reviews Leninist and Maoist models of political subversion and their avatars in the national liberation movements and urban guerillas of the 1960s and 1970s. Draws on cases from the Middle East and North Africa, including Irgun, Lehi, EOKA, FLN, Fatah, PFLP, ANO and ASALA.
SA.860.784. Behavioral Sociology of Conflict. 4 Credits.
This course combines approaches from social psychology and social history to examine stratification and conflict within and between groups.
Challenging the assumption of rationality in human behavior, it explores the role of drives, cognitive biases, culture, religion, beliefs and identity systems in social phenomena. After a theoretical overview, it looks specifically at the evolution of identity systems and the manifestation of identity-based conflict during the period of modernization and globalization, and explains xenophobic responses to the emergence of a global, modern identity.
AS.211.202. Freshman Seminar: A Thousand Years of Jewish Culture. 3 Credits.
This course will introduce students to the history and culture of Ashkenazi Jews through their vernacular, Yiddish, from the settlement of Jews in German-speaking lands in medieval times to the present day. Particular emphasis will be placed on the responses of Yiddish-speaking Jews to the challenges posed by modernity to a traditional society. In addition to studying a wide range of texts - including fiction, poetry, memoir, song, and film - students will learn how to read the Yiddish alphabet, and will prepare a meal of traditional Ashkenazi dishes. No prior knowledge of Yiddish is necessary for this course. Area: Humanities
AS.211.217. Freshman Seminar: From Rabbis to Revolutionaries: Modern Jewish Identities. 3 Credits.
Many Jews in the modern period abandoned the traditional religious way of life, but continued to identify strongly as Jews, and even those who remained committed to tradition had to adapt. Through the prism of the Yiddish language, the vernacular of Eastern European Jewry, this course will explore different ways in which Jews reacted to historical developments and embraced political and cultural movements of their time, from the founding of modern Yiddish theater in Romania, to the creation of a Jewish autonomous region in the far east of the Soviet Union, to the development of avant-garde poetry in New York. In addition to studying a wide range of texts - including fiction, poetry, memoir, song, and film - students will learn how to read the Yiddish alphabet, and will explore food culture by preparing a meal of Eastern European Jewish dishes. No prior knowledge of Yiddish is necessary for this course. Area: Humanities
AS.211.265. Panorama of German Thought. 3 Credits.
This course introduces students to major figures and trends in German literature and thought from the sixteenth to the twentieth century. We will pay particular attention to the evolution of German political thought from the Protestant Reformation to the foundation of the German Federal Republic after WWII. How did the Protestant Reformation affect the understanding of the state, rights, civic institutions, and temporal authority in Germany? How did German Enlightenment thinkers conceive of ethics and politics or morality and rights? How do German writers define the nation, community, and the people or das Volk? What is the link between romanticism and nationalism? To what degree is political economy, as developed by Marx, a critical response to romanticism? How did German thinkers conceive of power and force in the wake of World Wars I and II? What are the ties that bind as well as divide a community in this tradition? We will consider these and related questions in this course through careful readings of selected works. Area: Humanities Writing Intensive
AS.211.328. Berlin Between the Wars: Literature, Art, Music, Film. 3 Credits.
Explore the diverse culture of Berlin during the heyday of modernism. During the Weimar Republic, Berlin became a center for theater, visual arts, film, music, and literature that would have an outsize impact on culture throughout the world and the twentieth century. The thinkers, artists, and writers drawn to interwar Berlin produced a body of work that encapsulates many of the issues of the period: the effect of the modern city on society; "the New Woman"; socialist revolutionary politics; the rise of the Nazis; and economic turmoil. While learning about interwar Berlin's cultural diversity, we will take a special look at works by Jewish writers and artists that engage with the question of ethnic, religious, and national identity in the modern world, specifically in the context of Berlin's rich Jewish history and the rise of anti-Semitism in the interwar period. All readings will be in translation. Area: Humanities
AS.211.329. Museums and Identity. 3 Credits.
The museum boom of the last half-century has centered largely around museums dedicated to the culture and history of identity groups, including national, ethnic, religious, and minority groups. In this course we will examine such museums and consider their long history through a comparison of the theory and practice of Jewish museums with other identity museums. We will study the various museological traditions that engage identity, including the collection of art and antiquities, ethnographic exhibitions, history museums, heritage museums, art museums, and other museums of culture. Some of the questions we will ask include: what are museums for and who are they for? how do museums shape identity? and how do the various types of museums relate to one another? Our primary work will be to examine a

Who were the witches? Why were they persecuted for hundreds of years? Why were women identified as the witches par excellence? How many witches were put to death between 1400 and 1800? What traits did European witch-mythologies share with other societies? After the witch-hunts ended, how did "The Witch" go from being "monstrous" to being "admirable" and even "sexy"? Answers are found in history and anthropology, but also in medicine, theology, literature, folklore, music, and the visual arts, including cinema.

What is personal memory? This course offers both an in-depth journey through Proust's Recherche and a way of tracing major scientific questions about the formation of memory in connection with autobiography and medical history. The process of human remembering - with its counterpart, forgetting - has emerged over the last thirty years as an extraordinarily rich field of investigation as well as of creative endeavors in the arts. Poised between literature and science, this course offers both an in-depth introduction to Proust's groundbreaking modern work on human time, A la recherche du temps perdu, and an investigation into a modern history of memory (a history that unfolds in the nineteenth and early twentieth century, and has made a surprising return in our contemporary understanding of remembrance). That Proust's petite madeleine should have turned, in recent years, into the magical token of autobiographical recollection and provided, at the same time, an immensely productive clinical and neuro-scientific model of how memory works serves as our point of departure. That human memory is an experience and not merely a biological function - its existence depending on language - will be our running thread. Proust's book, filled with immensely learned and complex descriptions of mnemonic processes, serves as our case study. Proust's investigations into remembering reveal fascinating aspects of the 19th century advances into the psychology and nosography of memory. These will in turn prompt us to read his work in light of present controversies in scientific research, as for example on the construction of memory, on "body-memory," the interface between cognition and emotion, and the mind/brain debate. As it prompts many questions on the relation between fiction and experience, this journey through major themes of Proust's quest for memory will invite a broader reflection on the relation between literary and philosophical investigations. Requirements: Short oral presentation and final research paper. Taught

Dante's Divina commedia is the greatest long poem of the Middle Ages; some say the greatest poem of all time.
We will study the Commedia critically to find: (1) What it reveals about the worldview of late-medieval Europe; (2) how it works as poetry; (3) its relation to the intellectual cultures of pagan antiquity and Latin (Catholic) Christianity; (4) its presentation of political and social issues; (5) its influence on intellectual history, in Italy and elsewhere; (6) the challenges it presents to modern readers and translators; (7) what it reveals about Dante's understanding of cosmology, world history and culture. We will read and discuss the Commedia in English, but students will be expected to familiarize themselves with key Italian terms and concepts. Students taking section 02 (for 4 credits) will spend an additional hour working in Italian at a time to be mutually decided upon by students and professor. Area: Humanities Writing Intensive
AS.215.290. Latin American Critical Perspectives on Colonialism: From the 'World Upside Down' to the 'Coloniality of Power'. 3 Credits.
This course, taught in English, examines how indigenous and local (postcolonial) intellectuals in Latin America responded to the ideology and practices of Spanish Colonialism in the earliest post-conquest years (1532), continued to battle colonialism during the period of the wars of independence, and finally arrived at the production of an analysis that shows how modernity is but the other face of colonialism. Among key works to be discussed are Guaman Poma's illustrated sixteenth-century chronicles, D.F. Sarmiento's _Civilization and Barbarism_ (1845)

The consumption of alcohol is one of the oldest known human practices. Almost every culture has some type of mind-altering beverage that influences and shapes many facets of society. This course is a cross-cultural examination of the power and significance of alcohol in the ancient world. From the Neolithic to the Classical symposium to the Egyptian festival, the importance of communal drinking - alcohol or otherwise - is a uniting factor across the ancient world. This class will unpack the impact and significance of alcohol across a wide range of ancient cultures, and examine what the study of alcohol might reveal about ancient societies. This includes alcohol as medicine, its religious and ritual functions, alcohol as a community unifier (and divider) and identity builder, and its practical and economic uses.

Stories of conflict over religion and law proliferate in contemporary American news media. Perhaps even more frequent in recent years are the stories from the Middle East concerning attempts at using law to advance a particular religious agenda. Such patterns are ubiquitous throughout human history. While the circumstances and details vary, law and ritual always shape human societies in remarkable ways. In this course, we will examine the ways in which societies utilize law and ritual to shape social values, customs, and perspectives. We will study law and ritual not simply as cultural artifacts, but as ideological tools used by individuals and groups to advance agendas, compel behaviors, and otherwise influence such social forces as power, status, gender, and resources. We will use ancient Israel as our test case. The texts of the Hebrew Bible offer us a view into a long history of focus on both law and ritual within one society. These texts were preserved because they were socially useful in a variety of contexts. Yet, the long history of legal and ritual texts in the Hebrew Bible also gives us insight into how such traditions evolve and change in different social conditions. While law and ritual may shape society, they are likewise often shaped by it. Students should be able to take these broad considerations from ancient Israel and apply them to other social settings in both discussion and writing by the end of this course.

For over 5,200 years humans have used writing as a record for political, administrative, social, religious, and scholarly pursuits. Over millennia diverse scripts have been written, inscribed, carved, impressed, and painted on a variety of objects such as papyrus, stone, ivory, clay, leather, wax, rope, paper, metal, bone, wood, and other mediums. Today, the practice of writing has primarily shifted to the digital world. Computers are often the preferred way for people to "write." In this course students will be invited to critically examine relationships between scribes, craftsmen, writing, and materials.
The goal of the course is for students to recognize how writing has shaped religious and political movements, and aided bureaucratic endeavors from the invention of writing around 3200 B.C. to the present day. In the first part of the semester we will explore the emergence of writing in Egypt, Mesopotamia, China, and Mesoamerica. In the second half of the course students will explore how the act of writing transitioned from handwritten manuscripts, to printed books, and now digitized texts. We will explore the way that computers and social media have changed the way that people interact with writing. The seminar will include lecture, discussion, museum field trips, and experimental archaeology labs to investigate and engage with the materiality of clay cuneiform tablets, Egyptian papyrus, Roman wax writing boards, and more! Area: Humanities

Students are invited to examine critically the history of Black artists exhibiting within American museums. With the help of BMA staff, the class will develop interpretation for an installation to accompany a major retrospective of artist Jack Whitten that considers the "canon" of art history as a site of ongoing negotiation between taste-makers, artists, dealers, and critics, as well as art institutions that include the market and the museum. Students will take advantage of archives at the BMA, the Library of Congress, and Howard University. Students will help select the artworks and themes for the show; research individual participants in the social networks that facilitated the success of some artists over others; and research the biographies of individual artworks - some that have entered the canon and some that should. M&S Practicum. CBL Course.

While developments in biomedicine and health care have led to the eradication, cure, and management of many human health problems, disease, illness, and health have also been the focus for aggressive social controls and population management. The technologies and practices of disease control and health management have been foundational to some of the most aggressive structures of oppression in recent history, such as the Jewish Ghetto, the Concentration Camp, the South African Township, and techniques of segregation. This course seeks to explore how epidemics and disease control are linked to larger questions of power, statecraft, and international dynamics. This course asks: how have outbreaks of infectious disease shaped social and political action? How do societies respond to outbreaks and why? What do epidemic moments tell us about global structures of power and the dynamics of control? Drawing on historical cases including plague during the European Renaissance and before, the HIV/AIDS pandemic, and the West African Ebola outbreak of 2013-2016, this course will introduce students to the history and practices of disease control as well as important theoretical perspectives by which to understand the sociological and historical effects of disease and the responses to them. Students will engage sociological concepts such as biopolitics, the social construction of disease and illness, and biosecurity, and produce a final research paper examining the outcomes and responses to an epidemic event to show mastery of the topics covered in the course. Area: Social and Behavioral Sciences Writing Intensive
SA.790.716. Politics, Religion and Violence in South Asia. 4 Credits.
Whether manifested by the vexed Babri Masjid issue in India, the rise of Islamist parties in Pakistan and Bangladesh, or the influence of Buddhist monks on the civil war in Sri Lanka, religion dominates many political debates throughout South Asia. This course analyzes the impact of religion (especially Hinduism, Islam, Buddhism and Sikhism) on policy - and the impact of politics on the transformations of the faiths themselves. Views sectarian conflict (whether based on religion or caste) through the lenses of anthropology and political science.

"This, therefore, is the praise of Shakespeare, that his drama is the mirror of life." Samuel Johnson's judgment applies particularly well to Shakespeare's account of politics. This course will explore how Shakespeare depicts the acquisition of power, its exercise, and its voluntary or forcible relinquishment. Through a close reading of whole plays and selected scenes and speeches it will examine political education, intrigue, conspiracy, coups, demagoguery, politically motivated assassination, the theater of violence, rhetoric, insurrection, the launching of war, civil-military relations, and ghosts, among other topics. Combines asynchronous lectures and discussion with close reading of texts, analytic memos, and assignments such as the composing of a contemporary soliloquy. Prize Teaching Fellowship seminar.

Triangulating feminist psychoanalysis and theories of embodiment and subjectivity with art criticism and case studies of artistic practice (primarily painting), this course comparatively investigates the routes modernism takes after the Second World War and decolonization (1945/1947). We will be interested in specific postcolonial and postwar contexts where modernism in the domain of the visual arts was mounted as a feminist project. Each week will pair readings that establish conceptual frameworks with close analyses of works by specific artists, including those represented by the Library's Special Collections and the Baltimore Museum of Art. Texts include Freud, Spivak, Butler, Irigaray, Kristeva, and Mahmood. Area: Humanities Writing Intensive
Study of Women, Gender, & Sexuality
For current faculty and contact information go to http://history.jhu.edu/people/ | 2019-09-14T03:05:38.692Z | 2019-05-08T00:00:00.000 | {
"year": 2019,
"sha1": "81784861b3a5d2bb83dc22f1b4bb99df247e95fe",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1163/9789004400467_016",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "af41475b8281610dd7f02a254981435991c84bc5",
"s2fieldsofstudy": [
"Sociology",
"Education"
],
"extfieldsofstudy": []
} |
267739997 | pes2o/s2orc | v3-fos-license | Overexpression of thioredoxin-like protein ACHT2 leads to negative feedback control of photosynthesis in Arabidopsis thaliana
Thioredoxin (Trx) is a small redox mediator protein involved in the regulation of various chloroplast functions by modulating the redox state of Trx target proteins in ever-changing light environments. Using reducing equivalents produced by the photosynthetic electron transport chain, Trx reduces the disulfide bonds on target proteins and generally turns on their activities. While the details of the protein-reduction mechanism by Trx have been well investigated, the oxidation mechanism that counteracts it has long been unclear. We have recently demonstrated that Trx-like proteins such as Trx-like2 and atypical Cys His-rich Trx (ACHT) can function as protein oxidation factors in chloroplasts. Our latest study on transgenic Arabidopsis plants indicated that the ACHT isoform ACHT2 is involved in regulating the thermal dissipation of light energy. To understand the role of ACHT2 in vivo, we characterized phenotypic changes specifically caused by ACHT2 overexpression in Arabidopsis. ACHT2-overexpressing plants showed growth defects, especially under high light conditions. This growth phenotype was accompanied by the impaired reductive activation of Calvin–Benson cycle enzymes, enhanced thermal dissipation of light energy, and decreased photosystem II activity. Overall, ACHT2 overexpression promoted protein oxidation that led to the inadequate activation of Calvin–Benson cycle enzymes in light and consequently induced negative feedback control of the photosynthetic electron transport chain. This study highlights the importance of the balance between protein reduction and oxidation in chloroplasts for optimal photosynthetic performance and plant growth.
Introduction
Plants have various mechanisms for surviving in ever-changing environments. One such mechanism is redox regulation in chloroplasts, which reversibly modulates the reduction and oxidation states of target proteins and thus their enzymatic activities. The key protein involved in redox regulation is thioredoxin (Trx). In plant chloroplasts, Trx reduces its target proteins through the reducing power generated by the light-driven photosynthetic electron transport chain and transmitted from ferredoxin via ferredoxin/thioredoxin reductase (Buchanan 1980; Buchanan et al. 2002). Trx transfers reducing power to target proteins through a dithiol-disulfide exchange reaction using a pair of cysteine residues in the active site of WCGPC. Most Trx target proteins are inactivated in their oxidized form and activated in their reduced form. Trx target proteins are involved in several chloroplast functions, including the Calvin-Benson cycle, ATP synthesis, the antioxidant system, and chloroplast biogenesis (Yoshida and Hisabori 2023). Therefore, the redox regulation system may regulate various functions in chloroplasts in response to changes in the light environment by linking photosynthetic electron transfer reactions and metabolic reactions.
Trx in chloroplasts is classified into five subtypes, Trx-f, -m, -x, -y, and -z (Lemaire et al. 2007; Serrato et al. 2013). The subtypes differ in molecular properties, such as redox potential, protein surface charge, and target-recognition residues, which determine their target protein selectivity (Collin et al. 2003; Le Moigne et al. 2021; Toivola et al. 2013; Yokochi et al. 2019; Yoshida et al. 2015). For example, Calvin-Benson cycle enzymes are mainly reduced by Trx-f and Trx-m (Michelet et al. 2013; Yoshida et al. 2015). Trx-m may also be involved in the control of cyclic electron transport around photosystem (PS) I (Okegawa and Motohashi 2020). Our findings in a recent study also indicate that redox regulation is physiologically essential for plants. Arabidopsis expressing the chloroplast NADP-malate dehydrogenase variant, which was modified into a stable active form by deleting the redox-regulated cysteine, showed growth inhibition under fluctuating light conditions (Yokochi et al. 2021a). Thus, redox regulation is significant for its ability to appropriately oxidize the enzyme and switch its function off under dark or limited light conditions, which supports optimal plant growth. Despite this important function, the mechanisms of the oxidation side of the redox regulation system have remained unclear.
Recently, we found that Trx-like2 (TrxL2) and atypical Cys His-rich Trx (ACHT) are responsible for oxidizing Trx target proteins (Yokochi et al. 2019, 2021b; Yoshida et al. 2018, 2019a, b). TrxL2 and ACHT are classified as Trx-like proteins and have active site sequences similar to Trx, which are WCRKC and WCG/ASC, respectively. They are characterized by a higher redox potential and a higher efficiency in reducing 2-Cys peroxiredoxins (2-Cys Prx) than typical Trx (Dangoor et al. 2009, 2012; Yokochi et al. 2019; Yoshida et al. 2018). The 2-Cys Prx uses reducing equivalents to reductively detoxify hydrogen peroxide (H2O2). Therefore, TrxL2 and ACHT are expected to continuously oxidize proteins under light conditions where H2O2 is produced as a byproduct of photosynthesis (Asada 2006).
In this study, we therefore aimed to clarify the physiological role of ACHT2 by characterizing the detailed phenotypes of ACHT2-overexpressing plants. Our data provide important insights into the physiological consequences of the imbalance in the protein redox states during photosynthesis.
Measurement of fresh weight and chlorophyll content
Fresh weight was measured using the aboveground portion of the plants. The chlorophyll content in rosette leaves was determined as the sum of the contents of chlorophyll a and b after extraction with 80% (v/v) acetone, as described in a previously published method (Porra et al. 1989).
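For orientation, the arithmetic behind such a determination can be sketched as follows. The coefficients below are the commonly quoted Porra et al. (1989) values for 80% (v/v) acetone extracts and should be verified against the original reference before quantitative use; the absorbance readings in the example are hypothetical.

```python
# Chlorophyll a and b (ug per ml of extract) from the absorbances of an
# 80% (v/v) acetone extract. Coefficients are the commonly quoted
# Porra et al. (1989) values; verify against the original reference.

def chlorophyll_content(a663_6, a646_6):
    """Return chlorophyll a, b, and total (ug/ml) from absorbances
    measured at 663.6 nm and 646.6 nm (1 cm path length)."""
    chl_a = 12.25 * a663_6 - 2.79 * a646_6
    chl_b = 21.50 * a646_6 - 5.10 * a663_6
    return {"chl_a": chl_a, "chl_b": chl_b, "total": chl_a + chl_b}

# Hypothetical absorbance readings:
print(chlorophyll_content(a663_6=0.65, a646_6=0.28))
```

To express the result per fresh weight, the extract concentration is simply multiplied by the extract volume and divided by the leaf mass.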
Determination of the light-dependent protein redox state in vivo
Plants grown at a light intensity of 60 µmol photons m−2 s−1 were dark-adapted for 8 h and then irradiated. The plant leaves were harvested at the indicated times and frozen in liquid nitrogen. The redox states of the proteins in plant leaves were determined as described previously (Yoshida et al. 2014). The anti-FBPase and anti-CF1-γ antibodies were prepared as described previously (Konno et al. 2012; Yoshida et al. 2014). The anti-RCA antibody was commercially procured (catalog no. AS10-700, Agrisera, Vännäs, Sweden).
Measurement of photosynthetic parameters
Plants grown at a light intensity of 60 µmol photons m−2 s−1 were dark-adapted for 8 h, after which Fv/Fm, Y(II), and NPQ were measured using a Dual-PAM-100 spectrometer (Heinz Walz, Germany). The time courses of Y(II) and NPQ were measured with actinic red light at 60 µmol photons m−2 s−1 for 8 min, while recovery in darkness was recorded for 8 min. Saturating pulses of red light were applied at 6000 µmol photons m−2 s−1 with 0.4-s durations. Y(II) and NPQ were calculated using the Dual-PAM-100 software with previously applied equations (Kramer et al. 2004).
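The quantities named here follow from the standard saturation-pulse definitions, which the instrument software applies internally: Fv/Fm = (Fm − Fo)/Fm, Y(II) = (Fm′ − F)/Fm′, and NPQ = (Fm − Fm′)/Fm′, consistent with Kramer et al. (2004). A minimal sketch of the arithmetic, with hypothetical fluorescence values:

```python
def fv_fm(fo, fm):
    """Maximal PSII quantum yield of a dark-adapted leaf."""
    return (fm - fo) / fm

def y_ii(f, fm_prime):
    """Effective PSII quantum yield under actinic light."""
    return (fm_prime - f) / fm_prime

def npq(fm, fm_prime):
    """Stern-Volmer non-photochemical quenching."""
    return (fm - fm_prime) / fm_prime

# Hypothetical fluorescence readings (arbitrary units):
fo_, fm_ = 0.20, 1.00    # dark-adapted minimal and maximal fluorescence
f_, fmp_ = 0.45, 0.70    # steady-state and light-adapted maximal fluorescence
print(f"Fv/Fm = {fv_fm(fo_, fm_):.2f}, "
      f"Y(II) = {y_ii(f_, fmp_):.2f}, "
      f"NPQ = {npq(fm_, fmp_):.2f}")
```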
Extraction and quantitative analysis of xanthophyll cycle pigments
Plants grown at a light intensity of 60 µmol photons m−2 s−1 were dark-adapted for 8 h, irradiated at 60 µmol photons m−2 s−1 for 30 min, and returned to dark conditions. Leaves were detached at the indicated times and frozen in liquid nitrogen. Pigments were extracted by grinding 20-30 mg of leaves in liquid nitrogen, and the resulting leaf powder was suspended in 600 µL of 100% acetone. Quantitative analysis of the xanthophyll cycle pigments (violaxanthin, antheraxanthin, and zeaxanthin) was performed using HPLC as previously described (Muller-Moule et al. 2002).
Overexpression of ACHT2 induced growth defects
In our previous study, we obtained four transgenic Arabidopsis plants (ACHT2-TF1 to ACHT2-TF4; "TF" denoting "transformed") with various levels of ACHT2 overexpression (Yokochi et al. 2021b). Of these, ACHT2-TF1 and ACHT2-TF4 showed high expression of ACHT2 (ACHT2-TF1, 25-fold of WT; ACHT2-TF4, 16-fold of WT). In the present study, we renamed ACHT2-TF to ACHT2-OE; ACHT2-TF1 to ACHT2-OE1; and ACHT2-TF4 to ACHT2-OE2. We then analyzed the fresh weight, chlorophyll content, and maximal quantum yield of PSII (Fv/Fm) in ACHT2-OE plants under various light intensities (20, 60, and 650 µmol photons m−2 s−1) (Fig. 1). The fresh weights and Fv/Fm of ACHT2-OE plants were lower than those of WT under all light conditions. Notably, the phenotypic changes in ACHT2-OE plants were more significant under higher light conditions; for example, the fresh weight of ACHT2-OE plants was less than 5% of that of WT. These results show that the effects of ACHT2 overexpression on the growth phenotype may be correlated with light intensity. Plants grown at a light intensity of 60 µmol photons m−2 s−1 were used for subsequent experiments.
Overexpression of ACHT2 altered reduction levels of Trx target proteins
Next, we examined the redox status of certain Trx target proteins in ACHT2-OE plants under light conditions. Changes in the redox states of the Trx target proteins CF1-γ, FBPase, and Rubisco activase (RCA) were determined by thiol modification using 4-acetamido-4'-maleimidyl-stilbene-2,2'-disulfonate.
Figure 2a and b show the reduction levels of Trx target proteins at steady-state photosynthesis, measured after 30 min of irradiation with moderate or high light (60 or 650 µmol photons m−2 s−1). In ACHT2-OE plants, the reduction levels of CF1-γ, FBPase, and RCA were lower than those in WT under both light conditions. In particular, the reduction levels of FBPase and RCA were markedly lowered under high light conditions (about 15% and 40% of those of WT, respectively). Figure 2c and d show the reduction patterns of Trx target proteins during the dark-to-light (60 µmol photons m−2 s−1) transitions. In WT, CF1-γ, FBPase, and RCA were reduced to saturated reduction levels after 1-2 min of light irradiation. In ACHT2-OE plants, the reduction levels of FBPase and RCA saturated at lower levels. Their reduction kinetics after irradiation were apparently comparable with those of WT, although more detailed analyses are needed to characterize the kinetics. Taken together, ACHT2 overexpression lowers the steady-state reduction level of Trx target proteins under light conditions.
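As defined in the Fig. 2 legend, the reduction level is the ratio of the reduced form to the total of reduced and oxidized forms. A minimal densitometry sketch, with hypothetical band intensities chosen to mirror the FBPase result above:

```python
def reduction_level(reduced, oxidized):
    """Fraction of a Trx target in the reduced form, from densitometric
    band intensities of the two thiol-labeled forms on a western blot."""
    return reduced / (reduced + oxidized)

# Hypothetical band intensities for FBPase after 30 min of high light:
wt = reduction_level(850.0, 150.0)   # WT, ~0.85
oe = reduction_level(130.0, 870.0)   # ACHT2-OE, ~0.13
print(f"WT: {wt:.2f}, ACHT2-OE: {oe:.2f} ({oe / wt:.0%} of WT)")
```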
We previously found that the knockout of ACHT2 leads to the delayed oxidation of FBPase during the light-to-dark transitions (Yokochi et al. 2021b). This result is in agreement with the present finding of the impaired reduction of FBPase in ACHT2-OE plants (Fig. 2), which strongly suggests that ACHT2 acts as the major oxidation factor for FBPase in vivo. By contrast, the contribution of ACHT2 to RCA oxidation is still unclear. ACHT2 knockout did not affect the oxidation process of RCA significantly (Yokochi et al. 2021b), while ACHT2 overexpression caused the impaired reduction of RCA (Fig. 2). These results indicate that ACHT2 has the ability to oxidize RCA, but its role can be complemented by other Trx and Trx-like proteins. In line with this idea, our previous study suggested that Trx-f is the most dominant factor for RCA oxidation (Yokochi et al. 2021b).
The reduced forms of FBPase and RCA are enzymatically active (Buchanan 1980; Michelet et al. 2013). It is thus possible that the lower reduction levels of FBPase and RCA in ACHT2-OE plants (Fig. 2) result in suppression of the Calvin-Benson cycle. Some growth parameters, including the fresh weight and chlorophyll content, were also negatively affected in ACHT2-OE plants (Fig. 1). These changes were more pronounced under high light conditions. Therefore, the inadequate functioning of the Calvin-Benson cycle caused by ACHT2 overexpression may at least partly account for the growth impairment in ACHT2-OE plants.
Overexpression of ACHT2 induced high NPQ
To further assess the effects of ACHT2 overexpression, we measured the effective quantum yield of PSII [Y(II)] and NPQ in ACHT2-OE plants (Fig. 3). NPQ reflects the extent of the thermal dissipation of excess energy around PSII. In WT, Y(II) immediately decreased after light irradiation; however, it then increased and reached a steady state within 3-4 min. In contrast, ACHT2-OE plants showed lower Y(II) levels during light irradiation than WT (Fig. 3a). The NPQ value in WT transiently increased after 1 min of light irradiation but then decreased to a low level. In contrast, ACHT2-OE plants maintained a much higher NPQ than WT (Fig. 3b). Thus, ACHT2 overexpression lowered steady-state Y(II) and enhanced NPQ induction. NPQ consists of several components, the main one being qE, which is characterized by fast induction and relaxation kinetics within a few minutes after light irradiation and returning to dark conditions, respectively (Nilkens et al. 2010; Ruban 2016). qE is induced by the protonation of the PSII subunit PsbS and the conversion of pigments in the xanthophyll cycle (Li et al. 2000; Niyogi et al. 1998). In the xanthophyll cycle, violaxanthin is converted to zeaxanthin via antheraxanthin by violaxanthin de-epoxidase, which is activated by lumen acidification (Szabo et al. 2005). Under dark conditions, zeaxanthin is reconverted to violaxanthin by zeaxanthin epoxidase. To uncover why ACHT2-overexpressing plants exhibited high NPQ, we determined the composition of the xanthophyll cycle pigments (Fig. 4). Plants were collected for the HPLC analysis after 8 h of dark, after 10 and 30 min of light irradiation at 60 µmol photons m−2 s−1, and after 90 min of returning to dark conditions. ACHT2-OE plants showed a 3- to 4-fold higher ratio of antheraxanthin and zeaxanthin to total xanthophyll pigment contents during light irradiation than WT. Thus, the high NPQ phenotype of ACHT2-OE plants must be attributed to the high de-epoxidation state of the xanthophyll cycle.
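The pigment ratio used here is (antheraxanthin + zeaxanthin) over the total xanthophyll-cycle pool. A short sketch of that arithmetic follows; the peak areas are hypothetical, and a common alternative convention that weights antheraxanthin by one half is noted in the docstring:

```python
def deepoxidation_ratio(v, a, z):
    """(A + Z) / (V + A + Z), the pigment ratio reported in the text.
    A common alternative convention weights antheraxanthin by one half:
    (Z + 0.5 * A) / (V + A + Z)."""
    return (a + z) / (v + a + z)

# Hypothetical HPLC peak areas (arbitrary units) after 30 min of light:
print(deepoxidation_ratio(v=90.0, a=4.0, z=6.0))    # WT-like, ~0.10
print(deepoxidation_ratio(v=65.0, a=12.0, z=23.0))  # ACHT2-OE-like, ~0.35
```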
Possible consequences of ACHT2 overexpression in vivo
We investigated the physiological impacts of overexpressing ACHT2, which is suggested to be a protein oxidation factor in chloroplasts. ACHT2 overexpression resulted in impaired plant growth, impaired protein reduction, lowered PSII activity, and elevated NPQ (Figs. 1, 2, 3).
When the activity of the Calvin-Benson cycle is low, the supply of energy (ATP or NADPH) exceeds its requirement in photosynthesis. The photoinhibition of PSII will likely be accelerated under such conditions (Takahashi and Murata 2005). In this case, NPQ (qE) is activated by lumen acidification induced by cyclic electron transport around PSI to protect PSII from excess light energy (Ruban et al. 2012; Szabo et al. 2005). For instance, the chemical inhibition of Calvin-Benson cycle enzymes results in high NPQ induction and slow linear electron transport (Joliot and Alric 2013). The high NPQ in ACHT2-OE plants was accompanied by a high de-epoxidation state of the xanthophyll cycle (Fig. 4), indicating increased qE levels. Hence, the lower Y(II) and higher NPQ observed in ACHT2-OE plants are caused by negative feedback regulation resulting from the decreased function of Calvin-Benson cycle enzymes due to ACHT2 overexpression.
What is the main cause of the growth defect in ACHT2-OE plants? It may be the suppression of Calvin-Benson cycle activity and the resulting decrease in photosynthetic carbon fixation. The induction of excessive NPQ (Fig. 3) and the photoinhibition of PSII (Fig. 1b) can be considered as other possible causes. Furthermore, it is also conceivable that these factors caused the growth defect in a combined manner. Further studies are needed to clarify the mechanisms underlying the growth defect in ACHT2-OE plants. Notably, Naranjo et al. (2016) used the PsbS-deficient npq4 mutant to test the involvement of excessive NPQ induction in the growth defect observed in the ntrc mutant. Accordingly, it is worth trying to cross the npq4 mutant with ACHT2-OE plants and characterize the growth and NPQ phenotypes of the resulting plants. In conclusion, this study showed that the redox imbalance in Trx target proteins in chloroplasts was caused by the enhancement of protein thiol oxidation, which decreased photosynthetic activity, ultimately leading to growth defects in plants. The protein-oxidizing pathway always functions during photosynthesis; thus, under such conditions, the redox state of Trx target proteins should be suitably balanced by the cooperative interaction between protein reduction and oxidation pathways for optimal plant growth.
Fig. 2 In vivo redox responses of Trx target proteins in ACHT2-OE plants. Dark-adapted plants were placed under the indicated light conditions. a Western blotting image of the detection of the redox state of CF1-γ, FBPase, and RCA after 8 h of dark adaptation (DA) or 30 min of light irradiation at 60 µmol photons m−2 s−1 (ML) or 650 µmol photons m−2 s−1 (HL). b Reduction levels of CF1-γ, FBPase, and RCA based on the signal intensities shown in (a). c Western blotting image of the detection of the redox state of CF1-γ, FBPase, and RCA at 0-10 min of light irradiation at 60 µmol photons m−2 s−1. d Reduction levels of CF1-γ, FBPase, and RCA based on the signal intensities shown in (c). b, d The reduction level was determined as the ratio of the reduced form to the total amount of reduced and oxidized forms. Each value represents the mean ± SD (n = 3). Red reduced form, Ox oxidized form, RI redox-insensitive splicing variant. Different letters indicate significant differences among plants (P < 0.05; one-way ANOVA and Tukey's HSD) | 2024-02-19T06:17:17.421Z | 2024-02-17T00:00:00.000 | {
"year": 2024,
"sha1": "4d1641b05e9596d6818334b699d06a60b4ff817c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10265-024-01519-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b72490510765fc1071d6ea754d13a1c4a4d9c1a",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218803029 | pes2o/s2orc | v3-fos-license | Thermochemical method of synthesizing stemmed nanoflower TiO2/eC3N4 heterojunction structures with enhanced solar water splitting
A TiO2 nanoflower array, each flower linked to a stem on a Ti foil, is synthesized by thermochemical digestion of titanium at 80 °C in a hydrogen peroxide and hydrofluoric acid solution. The TiO2 nanoflowers comprise anatase TiO2 encasing a Ti metal core, as seen by transmission electron microscopy (TEM), x-ray photoelectron spectroscopy based depth profiling, x-ray diffraction (XRD) analysis, and energy dispersive x-ray based elemental mapping. The TEM, selected area electron diffraction, and XRD analyses of the air-annealed TiO2 nanoflowers show the presence of anatase (101) and anatase (200) crystals of about 35 nm size. The photoelectrochemical (PEC) activity in water splitting is assessed for the heterojunction formed by the TiO2 nanoflowers with exfoliated carbon nitride (eC3N4), and the same is compared with the heterojunction of a TiO2 nanotubular array and eC3N4. It was found from linear sweep voltammetry and electrochemical impedance spectroscopy that the synthesized stemmed-nanoflower TiO2 offers superior PEC activity towards water splitting when used in a heterojunction with eC3N4 as compared to that of TiO2 nanotubes with eC3N4.
Introduction
Titania is a versatile material used in solar energy-based applications like solar cells, solar CO2 reduction, and photoelectrochemical water splitting. TiO2 in the anatase phase has a bandgap of 3.2 eV [1] and can be photo-excited using UV solar radiation [2]. Usually, doping of TiO2 or fabricating a heterojunction with TiO2 may be a necessity to lower the band gap and enhance the separation of the photo-generated electrons and holes. The modification of the crystal lattice is not as straightforward as it may appear [3][4][5][6][7][8][9], which is why heterojunction formation via the Z-scheme has come to dominate research efforts over crystal doping. The Z-scheme theorizes that the addition of an appropriate sensitizer, which forms a heterojunction with anatase TiO2, not only sufficiently reduces the heterojunction band gap but also improves the separation of the photo-generated electrons and holes [10,11].
In the past few years, a surge in research has been observed regarding the use of transition metal dichalcogenides such as MoS2 [12][13][14], 2d polymeric organic semiconductors like graphene [15], its allied variants [16][17][18], and graphitic carbon nitride (gC3N4) [19][20][21] as the Z-scheme sensitizers. Of all of these, gC3N4 has gathered attention as a metal-free photocatalytic sensitizer for heterojunctions with anatase TiO2 because of the former's simple and facile preparation, appropriately placed band edges (CB at −1.3 eV and VB at +1.4 eV, vs. NHE, pH 7.0), and a smaller band gap of 2.7 eV. Even then, the use of gC3N4 in its bulk deposited form suffers from weak, non-synergistic charge carrier mobility. While in bulk form a large number of graphitic layers deposited one over another actively hinders the electron-hole separation, the exfoliated form at the juncture of anatase TiO2 is extremely useful in eradicating the difficulty in charge carrier migration [22]. The efficiency of a heterojunction depends on contact area, particle size, and interface length [10]. Therefore, various attempts are being made to increase the heterojunction surface area and decrease the particle size by creating nanotubular [23][24][25], nanocolumnar [26][27][28][29], and other [30][31][32][33] structures of TiO2. While the nanoparticles may be synthesized energy-efficiently and the deposited films can be sensitized well, these structures bring in electronic resistance and the recombination of the photogenerated pair of electron and hole [34]. This is because the semiconductor nanoparticle forms an interface at the conductive substrate itself, and that interface may work as the recombination surface.
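As a quick orientation on these band gaps, the absorption-edge wavelength follows from E = hc/λ with hc ≈ 1239.84 eV nm; the sketch below shows that anatase TiO2 absorbs only in the UV while gC3N4 extends into the visible:

```python
def absorption_edge_nm(band_gap_ev):
    """Absorption-edge wavelength from E = hc/lambda (hc ~ 1239.84 eV nm)."""
    return 1239.84 / band_gap_ev

for name, eg in [("anatase TiO2", 3.2), ("gC3N4", 2.7)]:
    print(f"{name}: Eg = {eg} eV -> edge at ~{absorption_edge_nm(eg):.0f} nm")
# anatase TiO2: ~387 nm (UV); gC3N4: ~459 nm (visible blue)
```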
On the other hand, the nanotubular structures provide a reduction in electronic resistance due to the inherent metallic core within the semiconductor walls. However, the nanotubular structure poses a threat of pore-clogging when sensitized with the 2d sensitizers [35]. Here, a branched and hierarchical stemmed-nanoflower TiO2 not only allows more physical access to the 2d sensitizers as compared to the nanotubular arrays but also facilitates efficient charge separation through the interconnected interfaces of different characteristic lengths [36][37][38]. The stemmed structure provides a larger surface area with a higher number of active sites while retaining the direct electron pathways [39]. However, there are limited studies regarding the synthesis of such a multi-branched, stemmed-nanoflower TiO2 structure, which essentially contains a metallic core fixed at the metallic base inside and a flower-like structure outside [27,39,40]. The prominent drawbacks of the earlier works include non-branching [27], asymmetrical branching [39], and the use of high temperature in TiO2 nanostructure formation [40]. Also, earlier studies did not explore the possibility of low-temperature synthesis or, more significantly, the recovery of the exhausted fluoride used for etching.
In the present work, we report a single-step, HF/H 2 O 2 solution-based thermochemical method to synthesize a branched, stemmed-nanoflower TiO 2 structure. Further, it is shown that the photoelectrochemical (PEC) water splitting activity of the TiO 2 /eC 3 N 4 heterojunction is superior when the stemmed-nanoflower TiO 2 structure is used with eC 3 N 4 instead of the nanotubular TiO 2 structure. Electrochemical impedance spectroscopy (EIS) studies suggest that the improvement in PEC activity is due to the reduced reactance between the TiO 2 nanoflower photoanode and the electrolyte. Finally, we put forth a method by which the spent HF can be re-used.
Synthesis of the titania nanostructures
Titanium metal foils (250 µm thick, 99.5%, Alfa Aesar) were cut to 20 mm × 20 mm and cleaned by ultra-sonication, sequentially in acetone, ethanol, propan-2-ol (IPA), and deionized water (DIW). The cleaned Ti foils were placed vertically in polypropylene vials containing 20 ml of a 100 mM solution of hydrofluoric acid (HF, 40%, Merck) in hydrogen peroxide (H 2 O 2 , 30%, Fisher Scientific). The polypropylene vials were then kept at 80 °C for 60 h. For comparison, the TiO 2 nanotubular structure was prepared by anodization of cleaned Ti foils in an electrolyte containing 100 mM ammonium fluoride (98%, CDH) and 5% w/w DIW in ethylene glycol (C 2 H 6 O 2 , 99.5%, Fisher Scientific). All as-obtained samples were annealed in air for 2 h at 450 °C.
Synthesis of the exfoliated-gC 3 N 4 (eC 3 N 4 )
The sensitizer eC 3 N 4 was obtained by exfoliating bulk-synthesized gC 3 N 4 in IPA. First, melamine (98%, Sigma Aldrich) was heated in air at 550 °C for 2 h to prepare a yellow powder of gC 3 N 4 . The as-obtained gC 3 N 4 powder was ground, 3 mg of it was dispersed into 1 ml of IPA (99.8%, Merck), and the dispersion was ultra-sonicated for 10 h at room temperature. Unexfoliated particles were removed by centrifugation at 3000 rpm for 10 min. The supernatant white suspension containing eC 3 N 4 nanoflakes was used to decorate the TiO 2 nanostructures by a centrifuge-based deposition method [41], which yields an eC 3 N 4 thin film of ∼700 nm thickness over the titania substrates.
Recovery of spent HF
After the thermochemical etching process was over, the foils were taken out of the polypropylene vials. The residual mixture was filtered and washed repeatedly. The milky-white filtrate was collected and centrifuged at 30,000 revolutions min −1 for 10 min to remove the TiO 2 particles. The supernatant liquid was treated by dropwise addition of saturated Ca(OH) 2 solution, allowing the formation of CaF 2 . The synthesized CaF 2 was filtered, washed repeatedly, and dried at 80 °C overnight. Thereafter, the dried CaF 2 powder was vacuum annealed at 500 °C for 2.5 h [42] to allow dehydration of the crystals.
Physical and photoelectrochemical characterization
Microstructural information and selected area electron diffraction (SAED) patterns were obtained using an FEI Tecnai TF20 high-resolution transmission electron microscope (HRTEM). Field emission scanning electron microscope (FESEM) images were taken with an FEI Quanta 200 FESEM instrument fitted with an Oxford-EDX IE 250 X Max 80 system, on which energy-dispersive x-ray (EDX) analysis was carried out. For crystal characterization, x-ray diffraction (XRD) analysis was carried out on a Rigaku Ultima Miniflex 600 x-ray diffractometer using Cu K α (λ = 0.15418 nm) radiation. Identification of molecular bonds was made using a Bruker Vertex 70V Fourier transform infrared (FTIR) spectrophotometer. X-ray photoelectron spectroscopy (XPS) analysis was carried out on a PHI 500 Versaprobe-II, UlVac-PHI (band pass 23.5 eV, energy band 20 eV). All potentials mentioned in the present work are reported against the reference electrode, Ag/AgCl in saturated KCl (+0.197 V vs. SHE at room temperature). A Pt plate electrode with an exposed geometric area of 250 mm 2 was used as the counter electrode. The working electrodes had a geometric area of ∼220 mm 2 . All PEC studies were carried out in 1 M NaOH solution under dark and illuminated conditions (AM 1.5G) using a 300 W xenon lamp (Laser Spectra). EIS studies were done at 0.6 V in the frequency range of 100 mHz to 100 kHz with an AC perturbation amplitude of 10 mV. The faradaic efficiency measurements and chronoamperometry were done at 0.23 V (SI 1.6, 1.7) (available online at stacks.iop.org/JPENERGY/2/035002/mmedia).
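Where potentials on the Ag/AgCl scale need to be placed on the RHE scale (e.g. for comparison with other PEC studies), the standard Nernstian conversion applies. The sketch below is not from the paper; the pH value for 1 M NaOH is an assumption.

```python
def ag_agcl_to_rhe(e_agcl_v, ph=13.9):
    """Convert a potential vs. Ag/AgCl (sat. KCl, +0.197 V vs. SHE) to the RHE scale.

    Standard conversion E(RHE) = E(Ag/AgCl) + 0.197 V + 0.059 V * pH;
    the pH of 13.9 assumed here for 1 M NaOH is illustrative.
    """
    return e_agcl_v + 0.197 + 0.059 * ph

# e.g. the 0.6 V (vs. Ag/AgCl) EIS bias corresponds to roughly 1.62 V vs. RHE in 1 M NaOH
print(round(ag_agcl_to_rhe(0.6), 2))
```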
For convenience, the annealed pristine TiO 2 nanoflowers are named NF, whereas the annealed pristine TiO 2 nanotubular scaffolds are abbreviated NT. The heterojunctions fabricated for PEC studies are assigned the letter H as a suffix to the original sample name. Thus, the TiO 2 nanotube-based heterojunction with eC 3 N 4 is abbreviated NTH, while the TiO 2 nanoflower-based heterojunction with eC 3 N 4 is abbreviated NFH. Characterization details of NT and eC 3 N 4 are given in the supplementary information (SI 1.1-1.4). Figure 1 shows SEM micrographs of the TiO 2 nanostructure samples prepared through the thermochemical digestion process. The vertically grown nanowire-like TiO 2 structure seen in figure 1(a) evolves when H 2 O 2 alone is used for the thermochemical digestion. On the other hand, the TiO 2 nanoflower array-like structure seen in figure 1(b) evolves when 100 mM HF is used along with H 2 O 2 . Additional HRTEM information is given in the supplementary text (SI 1.3).
Structural characterization
HRTEM micrographs of the TiO 2 nanoflower are shown in figure 2(a). Figure 2(b) shows a single petal, in which the oxide layer can be seen encasing the inner core; it reveals the nearly 2d nature of the nanoflower petals. The opaque core indicates the presence of non-oxidized Ti metal. Figure 2(c) zooms into the oxide layer as marked in figure 2(b) and indicates an inter-planar spacing of ∼1.3 Å, which pertains to the (1 0 1) crystal plane of TiO 2 . The SAED diffractogram shown in figure 2(d) indicates the polycrystallinity of the evolved TiO 2 nanoflower structure, which is further corroborated by the XRD diffractogram shown in figure 3. The XRD pattern confirms the inference of the HRTEM analyses that the evolved nanoflower structure comprises a TiO 2 layer encompassing the Ti metallic core.
The XRD patterns of the eC 3 N 4 /TiO 2 heterojunction samples NFH and NTH are shown in figure 3. The SAED pattern (figure 2(d)) suggested the polycrystallinity of the TiO 2 nanoflowers, and this is verified by the XRD data. The XRD pattern reveals the presence of Ti and TiO 2 in their respective polycrystalline forms in sample NFH. The XRD data match well with JCPDS card numbers #21-1272 (TiO 2 ) and #65-3362 (Ti). In the nanoflower sample, the (1 0 1) and (2 0 0) crystal facet planes of TiO 2 and the (0 0 2) plane of Ti are dominant. Using the XRD data and the Scherrer equation, the crystal size of TiO 2 in the nanoflower sample NF is estimated to be within the 34-68 nm range. A peak near a 2θ value of 27°, visible for both NFH and NTH, corresponds to the eC 3 N 4 sensitizer.
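For reference, the Scherrer estimate can be reproduced numerically. A minimal sketch follows; the peak position used is the anatase (1 0 1) reflection, while the FWHM value is illustrative and not taken from the paper's data.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15418, k=0.9):
    """Crystallite size (nm) from XRD peak broadening: D = K*lambda / (beta*cos(theta))."""
    theta = np.radians(two_theta_deg / 2.0)   # Bragg angle
    beta = np.radians(fwhm_deg)               # peak FWHM in radians
    return k * wavelength_nm / (beta * np.cos(theta))

# Anatase (1 0 1) peak near 2-theta = 25.3 deg with an assumed FWHM of 0.25 deg
print(round(scherrer_size(25.3, 0.25), 1))   # ~33 nm, of the same order as the reported 34-68 nm
```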
Compositional characterization
The SEM-EDX elemental mapping of sample NF is shown in figure 4. The elemental mapping indicates the homogeneous presence of titanium and oxygen. The atomic percentage ratio of Ti to O is 1:2, which indicates the formation of TiO 2 and the consistent presence of Ti and O in sample NF.
It is worth mentioning here that EDX is sometimes not deemed a fully reliable technique for determining the oxygen content of compounds. However, it is still a very accessible method for swiftly determining the presence of the different constituent elements in a compound. Therefore, to confirm the inferences drawn from the HRTEM, XRD, and EDX results regarding the TiO 2 layer coverage on the Ti metallic core, XPS studies with depth profiling were carried out for sample NF. Core-level elemental scans of Ti 2p and O 1s were done at the as-annealed surface (0 nm) and at Ar ion-etched depths of 6 nm, 12 nm, and 18 nm from the surface. Figures 5(a) and (b) show the Ti 2p and O 1s spectra, respectively. From HRTEM (figure 2(b)), the estimated thickness of the TiO 2 layer is ∼6 nm; hence an ion-etching step size of ∼6 nm was used. The XPS system was calibrated using Au (4f 3/2 , 84.0 eV), Ag (3d 3/2 , 368.2 eV), and Cu (2p 3/2 , 932.6 eV) standard samples.
From the XPS spectra in figure 5(a), it is seen that at the surface and at ∼6 nm depth, only Ti 4+ (fully oxidized Ti; main peak ∼458.5 eV and satellite peak ∼464.2 eV) [43][44][45] is present. However, at a depth of ∼12 nm, Ti 0 (non-oxidized Ti; main peak ∼454.8 eV) [43,46] is also seen along with Ti 4+ . Since the Ti 0 peak can be taken to originate from the exposure of Ti metal, the thickness of the oxide layer can be estimated to lie between 6 nm and 12 nm, in agreement with the earlier HRTEM-based thickness estimate. However, neither HRTEM nor XPS can measure the oxide layer thickness precisely.
Further etching and the subsequent XPS elemental scan of Ti 2p at a depth of ∼18 nm find stronger signals of Ti 0 and a weaker signal of Ti 4+ . Such a change in signal intensities indicates that, going deeper, the Ti 0 availability increases while the Ti 4+ availability decreases. Hence, further etching and XPS elemental scans of Ti 2p were carried out at approximate depths of 24 nm, 30 nm, 60 nm, 90 nm, 120 nm, and 150 nm to confirm this assumption. A concurrent comparison of the peak-fitted XPS spectra of Ti 2p for the NF sample at the different etching depths shows that at and near the surface, TiO 2 is dominantly present. The comparison also shows that, moving deeper from the surface, the availability of the non-Ti 4+ species Ti 0 and Ti 3+ increases; these can be deemed to originate from the exposure of non-oxidized and inadequately oxidized core metal. Further investigation of this matter is reserved for future work. The increasing intensities of the non-lattice and defect oxygen in the O 1s spectra [43,46] with increasing depth (figure 5(b)) also confirm the assumption mentioned above. Thus, based on SEM, EDX, XRD, HRTEM, SAED, and XPS, it can be safely concluded that the nanoflower structure consists of a titania shell on a non-oxidized Ti core.
Photoelectrochemical characterization
The fabricated TiO 2 /eC 3 N 4 heterojunctions, NFH and NTH, were studied by linear sweep voltammetry (LSV) and electrochemical impedance spectroscopy. Figure 6 shows the LSV plots with the current densities and the estimated photocurrent densities of NFH and NTH under dark and illuminated (AM 1.5G) conditions. For both heterojunctions, the current density remains nearly zero under dark conditions and rises sharply under illumination. The photocurrent density of NFH (∼1.1 mA cm −2 at 0.6 V) is nearly double that of NTH (∼0.6 mA cm −2 at 0.6 V). The improvement in the photocurrent density of NFH surpasses some recently reported works [47][48][49] (SI 1.5).
Nyquist and Bode plots of the EIS data are shown in figures 7(a) and (b), respectively. Taken together, the Nyquist and Bode plots reveal that at high to moderate frequencies the impedance is essentially unchanged under dark and illuminated conditions, whereas at lower frequencies the impedance of NFH is reduced by nearly an order of magnitude relative to NTH. Near-equal impedance values for NFH and NTH at high and moderate frequencies indicate near-equal resistances, which shows that the PEC reactions in the studied system are not kinetically limited. The low-frequency EIS data in the Bode plot reveal a significant decrease of the total impedance in NFH compared to NTH, which means that ion transport is easier in NFH. The Bode plot also shows that at low frequencies, for both NFH and NTH, the reactance not only decreases on switching from dark to illuminated conditions but also remains lower for NFH than for NTH under both conditions. The reduction in phase lag and the shift of the phase angle from a more negative to a less negative value for NFH are specific indications of the reduced reactance. It may be inferred that ion transport becomes easier under illumination owing to the decreased reactance for NFH relative to NTH, and that the interfacial behavior of the heterojunction in the electrolyte becomes 'accepting' rather than 'reflecting' towards the reactant species for NFH [50]. Thus, it is fair to conclude that replacing the nanotubular array with the nanoflower as the TiO 2 -eC 3 N 4 heterojunction substrate reduces the ion transport resistance at the electrode-electrolyte interface. The calculated faradaic efficiencies for O 2 evolution are 96.4% and 94.2% for NFH and NTH, respectively (SI 1.6).
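To illustrate how such EIS data are typically interpreted, the sketch below simulates the Nyquist response of a simplified Randles-type cell; the circuit topology and element values are illustrative assumptions, not a fit to the NFH/NTH data.

```python
import numpy as np

# Minimal sketch: impedance of a simplified Randles cell, Rs + (Rct || Cdl).
def randles_z(freq_hz, r_s=10.0, r_ct=100.0, c_dl=1e-4):
    omega = 2 * np.pi * np.asarray(freq_hz)
    z_c = 1.0 / (1j * omega * c_dl)            # double-layer capacitance branch
    return r_s + (r_ct * z_c) / (r_ct + z_c)   # series resistance + parallel RC

f = np.logspace(-1, 5, 200)                    # 100 mHz - 100 kHz, as in the EIS study
z = randles_z(f)
# Nyquist convention: plot Re(Z) against -Im(Z); a smaller low-frequency arc
# corresponds to the reduced interfacial impedance reported for NFH.
```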
Recovery of spent HF
The presented thermochemical digestion method uses HF, a highly corrosive etchant. To recover the spent fluorides and render the whole process eco-friendly, an HF recovery process has been proposed. It is based on neutralizing the spent acidic solution with Ca(OH) 2 and then recovering HF by treating the dried CaF 2 crystals with sulfuric acid. The study of the proposed recovery process was truncated at the recovery of CaF 2 crystals, to keep within the scope of the present thermochemical process; the recovery of HF from CaF 2 (a constituent of fluorspar) using H 2 SO 4 is a well-known process. Figure 8(a) shows the SEM micrograph of the recovered CaF 2 in powdered form, whereas figure 8(b) shows the SEM-EDX elemental maps and the x-ray diffractogram for the same. The elemental mapping shows that the recovered material contains Ca and F in stoichiometric proportion, confirming CaF 2 formation. The XRD suggests the formation of pure and highly crystalline CaF 2 , as the diffractogram matches JCPDS #77-2245. The presented work not only uses a low concentration of fluoride (0.1 M) but also allows the spent fluorides to be recovered.
The residual solution was treated through different stages of filtration and separation, as reported in section 2.3. The supernatant liquid obtained after centrifugation was clear, indicating the removal of the white TiO 2 particles. The TiO 2 formation mechanism elaborated later in section 3.3 indicates that the supernatant liquid is mainly water, which was checked by adding it dropwise to anhydrous CuSO 4 powder and dried silica gel granules. Upon addition of the liquid drops, the white anhydrous CuSO 4 powder turned blue, whereas the clear silica gel turned sapphire-blue. H 2 O 2 might also have been present, but below the detection limit.
The growth of the TiO 2 nanostructure
The use of H 2 O 2 to grow clusters of nanorod-like TiO 2 morphology on Ti metal surfaces and the use of HF to grow TiO 2 nanotubular assemblies on Ti metal foils are well known [27,51]. When used together, fluoride ions act as the etchant and peroxide species act as the oxidant. Two processes are primarily involved in the formation of the TiO 2 nanoflower: the oxidation of the exposed surface and the dissolution of the oxides by fluoride species. On the clean, fresh surface of the Ti foil, a thin passive layer initially forms under aqueous surroundings [52]. HF etches the oxide through the breaking of the passive layer [24], leading to the formation of H 2 TiF 6 , which dissolves in hot water. As H 2 TiF 6 dissolves, the Ti surface beneath is oxidized to form a new passive layer, and the process continues until the oxidant or the etchant is exhausted. The reaction mechanism of the process is shown below.
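The following is a hedged reconstruction of that mechanism, consistent with the description above (peroxide as oxidant, fluoride as etchant forming soluble H 2 TiF 6 ), not a verbatim reproduction of the paper's equations:

Ti + 2H 2 O 2 → TiO 2 + 2H 2 O (oxidation of the exposed Ti surface)
TiO 2 + 6HF → H 2 TiF 6 + 2H 2 O (dissolution of the passive oxide layer)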
The rate-determining step in a photoelectrochemical reaction is the transfer of ions from the electrolyte/electrode interface to the electrode bulk. The semiconductor of a photoelectrode heterojunction becomes conductive if it is photoactive and allows transport of the photon-generated charge carriers, requiring a smaller biasing overpotential. If the heterojunction is not very photoactive, a larger bias is required for electron transfer through the semiconducting electrode surface. Also, if the semiconductors used to fabricate the heterojunction are not in good contact with each other, the number of sites allowing separation of the photon-generated electrons and holes decreases, which again results in a low photocurrent density. The primary requirement for a well-fabricated heterojunction is maximization of the contact surface shared between the two constituent semiconductors.
Usually, flat sensitizers, e.g. graphene and eC 3 N 4 , are dip-coated, spin-coated, or drop-cast to create heterojunctions. All these methods use a dispersion of the sensitizer in an appropriate liquid dispersant. The interfacial tension between the sensitizer particles and the dispersant, together with the substrate surface morphology, plays an important role in fabricating a uniform, thin-layered interface. In the case of NTH, the sensitizer particles clog the mouths of the nanotubular structure [35] and further restrict sensitizer particles from reaching inside the nanotubes. This results in a reduced interfacial surface and a decreased PEC activity of the heterojunction NTH. On the other hand, the open structure of NF allows easy access for the sensitizer particles, which produces a larger interfacial area between the substrate and the sensitizer; this is why NFH performs better than NTH. This is further illustrated by FESEM images and schematics of the nanotubular and nanoflower TiO 2 structures coated with eC 3 N 4 sensitizers (figure 9). In NTH, the tube openings are coated with eC 3 N 4 , but the sensitizer particles block the mouths, which leads to a lower interfacial area and a high interfacial reactance. On the contrary, NFH offers a larger interfacial area with the sensitizer, resulting in decreased interfacial reactance. The decrease in charge transport resistance and interfacial reactance contributes to decreasing the impedance and increasing the photocurrent density.
Conclusion
Stemmed TiO 2 nanoflowers were formed thermochemically in a solution of hydrogen peroxide and hydrofluoric acid. The open-branched nanoflower structure of anatase TiO 2 , prepared at low temperature and sensitized with eC 3 N 4 , is very effective for PEC water splitting. The nanoflower shape of TiO 2 was found to evolve only under the optimum condition, in which the Ti foil is digested for 60 h at 80 °C in a solution of 100 mM HF in 30% H 2 O 2 . FESEM shows the formation of the stemmed nanoflower structure coated with the eC 3 N 4 sensitizer. SAED, HRTEM, EDX, XRD, and XPS show the presence of polycrystalline anatase TiO 2 along with the Ti metal core. Linear sweep voltammetry and electrochemical impedance spectroscopy studies show that the stemmed TiO 2 nanoflower-based heterojunction offers a substantial reduction in reactance, allowing easier ion transport and thus enhanced PEC water splitting. The stemmed nanoflower array-based TiO 2 /eC 3 N 4 heterojunction (NFH) gives improved PEC water splitting activity by providing better accessibility than the nanotubular array-based TiO 2 /eC 3 N 4 heterojunction (NTH). The photocurrent density of the heterojunction is nearly doubled, from ∼0.6 mA cm −2 for NTH to ∼1.1 mA cm −2 for NFH. The impedance of the heterojunctions in the lower frequency domain is decreased from ∼3.5 kΩ for NTH to ∼0.1 kΩ for NFH; this significant reduction arises from a decrease in reactance from ∼1.2 kΩ for NTH to ∼0.1 kΩ for NFH. Furthermore, the present method shows the possibility of recovering the spent fluoride from the exhausted thermochemical solution, making the process eco-friendly.
"year": 2020,
"sha1": "c5c8c5ce3841d684643644a19408a4f3f42c46ad",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2515-7655/ab8912",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3f48efcfe7da9b77983720e4df94f24393c53bb4",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
Structural instability of friction-induced vibration by characteristic polynomial plane applied to brake squeal
Introduction
Brake squeal generated by friction-induced vibration is one of the most important issues in brake development because it significantly degrades the comfort of cars. Brake squeal is a noise generated by the disk brake or drum brake, and it usually occurs at 1 to 16 kHz (Papinniemi et al., 2001). The disk brake generates a braking torque through the frictional force between the pad and the disk rotor during braking. Self-excited vibration may occur between the pad and rotor as a result of various factors, such as fluctuations in the frictional force. When the disk rotor vibrates in the normal direction, the surrounding air is excited and radiates noise at a frequency close to the natural frequency of the disk rotor. The occurrence of brake squeal depends on the surrounding environment and conditions of use and is influenced in a complex way by tribological and structural aspects (Eriksson et al., 1999).
In recent years, the number of hybrid electric vehicles and electric vehicles has increased because of environmental regulations, and vehicles equipped with internal combustion engines are being modified to reduce weight, which affects fuel efficiency. Vehicle quietness and light weight run counter to the noise and vibration of disk brakes; therefore, the suppression of brake squeal has become even more difficult. A considerable amount of work has been conducted on the analysis of the brake squeal mechanism. Even so, brake squeal has not disappeared, and studies on friction materials and braking devices continue. Detailed reviews have also been published (Kinkaid et al., 2003).
Recently, transient brake squeal analysis for prediction has been evaluated (Oberst and Lai, 2015). Nevertheless, at this time, complex eigenvalue analysis is widely used to estimate brake squeal (Milner, 1978). This is a method for solving structural instability problems caused by coupled vibrations of two-degree-of-freedom systems including frictional forces (Milner, 1978). Complex eigenvalue analysis is widely practiced, estimating several dozen natural frequencies of the brake assembly at once across the audible frequency range by the finite-element method (Liu et al., 2007). Meanwhile, the transition of the eigenvalues leading to unstable vibrations is unclear, and the influence of the equation-of-motion parameters on the eigenvalues has been studied.
Complex eigenvalue analysis is a useful tool that can predict several types of self-excited oscillation. However, unless a parameter study is performed, it is impossible to predict how the eigenvalues change and branch, and no guideline for suppressing noise can be established in advance. In other words, the degree of eigenvalue transition due to the friction coefficient, mass, and stiffness is not known unless it is calculated. In the initial design phase of brake calipers, trial and error are therefore repeated, which is a major development issue.
In the structural instability problem, which is a typical cause of brake squeal, the two modes of a two-degree-of-freedom system couple when the friction coefficient increases (Huang et al., 2006). The eigenvalues first approach each other in the direction of the imaginary part and then repel in the direction of the real part. Even in stable two-degree-of-freedom vibrations, a phenomenon called "curve veering" occurs: the eigenvalues approach each other in the imaginary-part direction and then repel, again in the imaginary-part direction (Du Bois, 2009). Mode coupling and curve veering are known to show similar behavior, but the correlation between the two is not clear.
The purpose of this study is to explain the mechanism of such eigenvalue transitions at coupling. Focusing on the characteristic polynomial, which is the left side of the eigenequation, this study considers that the eigenvalues exist at the intersection of the curved surface representing the real part of the characteristic polynomial, drawn over the complex plane, and the zero plane. Based on this idea, instead of directly calculating the eigenvalues, the reason the eigenvalues take their values is explained geometrically. The target of this investigation is the transition of the eigenvalues of a model with low degrees of freedom. The complex solutions of the characteristic equation are plotted in a three-dimensional space to obtain a response surface that geometrically visualizes the location of the solutions. Parameter studies explain how the complex solutions transition, based on the changes in the response surfaces. This approach can be applied before and after eigenvalue analysis, supporting the advance selection of robust, highly stable parameters, the evaluation of eigenvalue analysis results, and technical studies for improving brake systems.
Current methods for the stability criterion of brake squeal
In the widely used finite-element method (FEM), the following factors are treated as causes of friction-induced self-excited vibration (Bajer et al., 2003):
1) a negative friction-velocity gradient dμ/dv (one degree of freedom),
2) structural instability (two-degree-of-freedom coupling), and
3) frictional damping.
Structural instability, which is considered the most typical brake noise factor, is targeted in this study. This is a phenomenon called "modal coupling" that occurs when two vibration modes interfere with each other under the influence of the frictional force. This unstable vibration can be solved for by complex eigenvalue analysis (Milner, 1978). Equation (1) is the equation of motion considering the frictional force (Trichês Júnior et al., 2008).
where M, C, and K are the mass, damping, and stiffness matrices, respectively, and u is the generalized displacement vector. The friction function F arises from the variable friction force at the pad-rotor interface; it is shown in Eq. (2). Kf is a friction stiffness matrix proportional to the friction coefficient μ and the contact stiffness between the brake pad and rotor. Because the frictional force is proportional to the contact stiffness, it is collected into the stiffness matrix, as shown in Eq. (3). Consequently, this rearrangement makes the stiffness matrix asymmetric and causes instability.
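In the standard mode-coupling formulation that the text describes, these equations take the following form (a hedged reconstruction, not a verbatim reproduction of Eqs. (1)-(3)):

$$M\ddot{u} + C\dot{u} + Ku = F \quad (1)$$
$$F = K_f\,u \quad (2)$$
$$M\ddot{u} + C\dot{u} + \left(K - K_f\right)u = 0 \quad (3)$$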
This section reviews the current status of stability determination using the widely used FEM. An FEM model is formed from several element types: solid elements for the parts of a brake assembly and spring elements that simulate the contact stiffness between parts (Fig. 1). The transformation of the equation of motion is treated as an eigenvalue problem to obtain the eigenvalue matrix p and the eigenvector matrix in Eq. (4). Here, p is a diagonal matrix having each eigenvalue as a diagonal term, as in Eq. (5), and {Φ} is a matrix in which the eigenvectors are arranged as columns, as in Eq. (6).
Several types of structural damping are often added to the stiffness matrix, as in Eq. (7). In the stiffness matrix Ktotal, the friction component αfKf is considered, in which the stiffness Kf is multiplied by a factor αf that adjusts the initial value of the friction coefficient used in the parameter study. Because the eigenvalues representing convergence or divergence are complex numbers, complex eigenvalue analysis yields root plots such as Fig. 2. If the real part of an eigenvalue is positive, it indicates instability. An increase in the coefficient of friction causes instability, with the real part of a specific eigenvalue becoming positive; at the same time, the mode whose real part is negative is the paired mode that leads to the coupling. Using this analysis to predict the occurrence of brake squeal in actual brake caliper designs makes it easy to decide whether the brake is stable or unstable. However, this kind of analysis does not lead to an understanding of why particular eigenvalues become unstable. Therefore, the mechanism of instability occurrence is investigated in the next section using a low-degree-of-freedom model.
3. Structural instability problem for a model with low degrees of freedom
Figure 3 depicts a low-degree-of-freedom model for investigating the factors behind the transition of the eigenvalues. This model has a caliper and a rotor, simulates a pad represented by distributed stiffness and distributed damping, and generates a friction force f from the pad. The structure has two degrees of freedom, translation in the vertical direction x and rotation θ, and spring-dampers are provided on the rotational leading side and trailing side of the pressing parts. The contact surface has distributed spring-dampers, and the portion without a distributed spring corresponds to a slit. This model is characterized by mode coupling with the minimum degrees of freedom under friction.
Fig. 1 (caption): Finite-element model of a disk brake assembly. The parts are connected by several spring elements, whose coefficients are identified from experimental modal analysis results. Spring elements simulating the contact stiffness are provided between the pad and the rotor, and the friction force fluctuation is calculated in proportion to the displacement of the pressing force fluctuation.
Fig. 2 (caption): Example of an FEM complex eigenvalue analysis. The horizontal axis is Eigenvalue (R), the real part of the eigenvalue; the vertical axis is Eigenvalue (I), the imaginary part. Because of the damping parameters, most eigenvalues are stable and biased to the negative side of Eigenvalue (R). When the friction coefficient is changed, most eigenvalues shift little; only the specific eigenvalue becomes unstable.
The parameters of each part are as listed in Table 1. Experimental modal analysis of an actual brake identifies the parameters. Values that may cause instability are substituted into the parameters using the Routh-Hurwitz stability determination described later. With the center of gravity as the origin, the vertical translation direction as x, the horizontal direction as y, and the rotational angle as θ, the following equations of motion are derived. The translational force due to the leading-side stiffness is shown in Eq. (8), the force due to the trailing-side stiffness in Eq. (9), and the force due to the contact stiffness in Eq. (10).
Table 1 (caption): Parameters of the model with two degrees of freedom. They are based on the actual mass and dimensions of a brake system; some values, such as the springs and dampers, were identified from experimental values. The parameters are arbitrarily fine-tuned to cause instability.
Fig. 3 (caption): The two-degree-of-freedom model with rotational direction θ and translational direction x. The rotor is forcibly displaced in the direction from the leading side to the trailing side. A frictional force, generated in proportion to the axial restoring force produced by the distributed spring simulating the pad stiffness, acts on the caliper.
Assuming θ is small enough, the contact force is obtained by integrating over the section of the contact surface. The leading-side damping force, the trailing-side damping force, and the contact-portion damping force are similarly expressed in Eq. (11) through Eq. (13).
Thus, an equation of motion in the vertical translation direction is given by Eq. (14).
Regarding the degree of freedom in the rotational direction, the torque due to the leading-side stiffness, the torque due to the trailing-side stiffness, and the torque at the contact surface are expressed by Eq. (16) through Eq. (18).
Similarly, the leading-side damping torque, the trailing-side damping torque, and the contact damping torque are expressed in Eq. (19) through Eq. (21).
Thus, an equation of motion in the rotational direction is given by Eq. (22).
Equation (24) is the integrated equation of motion of this model. Here, {u} is a column vector containing the translational displacement x and the rotation angle θ, as expressed by Eq. (25). M, C, and K are 2 × 2 matrices, as in Eq. (26); each element is as shown in Eq. (27) through Eq. (34). Similar to Eq. (3), the effect of friction is included in the damping matrix and the stiffness matrix. The eigenvalue transition is discussed using this model.
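As a concrete numerical sketch, the complex eigenvalues of such a 2 × 2 system can be obtained by linearizing to state-space form. The matrices below are illustrative stand-ins, not the entries of Eqs. (27)-(34); an antisymmetric off-diagonal term proportional to μ plays the role of the friction contribution.

```python
import numpy as np

def complex_eigs(M, C, K):
    """Eigenvalues of M*u'' + C*u' + K*u = 0 via first-order state-space form."""
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    return np.linalg.eigvals(A)  # eigenvalues lambda = R + jI

M = np.eye(2)
C = np.zeros((2, 2))
for mu in (0.0, 0.2, 0.3):
    K = np.array([[3.0, -2.0 * mu],   # friction term makes K asymmetric
                  [2.0 * mu, 2.0]])
    print(mu, np.round(complex_eigs(M, C, K), 3))
```

In this toy system the eigenvalues remain pure imaginary and attract along the imaginary axis for μ below about 0.25; above that value they split into pairs with positive and negative real parts, reproducing the modal coupling behavior discussed below.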
Stability determination by the Routh-Hurwitz criterion
In this section, a stability evaluation is performed on the above equation of motion of the two-degree-of-freedom system. First, the Routh-Hurwitz conditions are applied to determine whether the characteristic roots are stable without obtaining the characteristic roots directly. The displacement vector is given in Eq. (35), where U is the amplitude, s is the Laplace operator, and t is time. The check reduces to sign conditions on the coefficients of the resulting quartic characteristic polynomial, as sketched below.
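Substituting Eq. (35) yields a quartic characteristic polynomial a4 s^4 + a3 s^3 + a2 s^2 + a1 s + a0, and the Routh-Hurwitz test reduces to positivity of all coefficients plus one determinant inequality. The sketch below is illustrative; the coefficients are not those of Eqs. (36)-(43).

```python
def is_hurwitz_stable(a4, a3, a2, a1, a0):
    """Routh-Hurwitz test for a quartic a4*s^4 + a3*s^3 + a2*s^2 + a1*s + a0."""
    if min(a4, a3, a2, a1, a0) <= 0:          # all coefficients must be positive
        return False
    # Remaining Hurwitz determinant condition for a quartic:
    return a3 * a2 * a1 - a4 * a1**2 - a3**2 * a0 > 0

print(is_hurwitz_stable(1.0, 2.0, 5.0, 3.0, 1.0))   # True: all roots in the left half-plane
```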
a4, a3, a2, a1, a0 > 0 (42)
The criterion shows the relationship between stability and the support springs shown in Fig. 4. When the leading-side spring stiffness kL and the trailing-side spring stiffness kT are set as the coordinate axes, instability occurs in a specific region. Moreover, the region is expanded by increasing the friction coefficient μ. In Eq. (43), coefficients α3, α2, α1, and α0 that include the friction coefficient appear, and this is what expands the region. With such a stability discriminant it becomes possible to find unstable parameters, but the result does not reveal the causes of the self-excited vibration, such as dμ/dv.
3.2 Complex eigenvalue analysis for a low-degree-of-freedom system
Stability discrimination clarifies whether instability occurs. Nevertheless, it does not indicate the physical causes of the friction-induced vibration. Therefore, the complex eigenvalue analysis widely practiced today is examined. To facilitate the evaluation, Eq. (44), which excludes the damping that suppresses destabilization, is considered.
A solution form, Eq. (45), is assumed as a general solution of second-order linear ordinary differential equations.
Here, λ is an eigenvalue. As in Eq. (46), λ is a complex number consisting of a real part R indicating increase or decrease of vibration and an imaginary part I indicating the speed of vibration, using imaginary unit j.
λ = R + jI (46)
The equation can be expressed as an eigenvalue problem using the eigenvalue λ and the eigenvector, as in Eq. (47). Figure 5 shows the results of the complex eigenvalue analysis of the two-degree-of-freedom model. Fig. 5(a) depicts the transition of the two natural frequencies when the coefficient of friction is increased in steps of 0.01. The frequencies correspond to the eigenvalue imaginary parts.
Fig. 4 (caption): Stability judgment results by the Routh-Hurwitz equations. The horizontal axis is the trailing-side stiffness kT, and the vertical axis is kL. The model becomes unstable only when kL is larger than kT. Results for several friction coefficients are shown; increasing the friction coefficient enlarges the unstable region (legend: Stable / Unstable Area).
The increase in the coefficient of friction reduces the difference between the natural frequencies and eventually causes the two natural frequencies to coincide. After the coincidence, the real parts of the eigenvalues depart from zero, as shown in Fig. 5(b). In other words, the two complex eigenvalues asymptotically approach each other in the imaginary-axis direction and then repel in the real-axis direction. As shown in Fig. 5(c), the absolute values of the real parts increase symmetrically as the friction coefficient increases. This three-dimensional transition traces a twisted diagram, as in Fig. 5(d). The divergent vibration has a positive eigenvalue real part, and the cause of the instability is considered to be structural instability resulting from the modal coupling of the two-degree-of-freedom system. This result reproduces a similar previous study (Flint and Hultén, 2002). Such a phenomenon has also been confirmed in experiments (Akay et al., 2009). The real-part transitions, which signify divergence or convergence, are discussed in more detail later.
Thus, eigenvalue analysis reveals not only the stability but also the modal coupling as the cause of the instability, and the degree of instability is indicated by the real part of the eigenvalue. However, before the coupling, the real part of the eigenvalue is always zero, which cannot express how stable the vibration system is. When the vibration system includes a damper, the real part of the eigenvalue is affected by the damping; however, it does not indicate how difficult the coupling is to trigger. To estimate how stable the system is, the difference between the two natural frequencies is sometimes used as a substitute; however, many studies have shown that coupling can occur easily despite a large difference. For brake noise prediction, an evaluation value indicating the stability is required, not just the instability. This behavior is presumed to be a phenomenon similar to the curve veering observed in general vibration systems (Du Bois et al., 2009). Figure 6 shows a model of a translational two-degree-of-freedom system with curve veering, and Fig. 7 indicates the evolution of the respective eigenvalues with increasing mass. From a macroscopic point of view, the two modes appear to approach each other, intersect, and then separate; in fact, however, their transition paths interchange without crossing. The connecting spring stiffness s is set sufficiently small, and only the mass m2 is changed; as a result, only one of the eigenvalues changes actively. Meanwhile, in the previous complex eigenvalue analysis, the eigenvalues attract each other in a symmetrical manner. The behavior leading to curve veering in friction-induced vibration has not been sufficiently studied so far, and the factor behind this behavior is explained in the next section.
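A minimal numerical sketch of this veering behavior, for a two-mass chain in the spirit of Fig. 6 (all parameter values illustrative):

```python
import numpy as np

# Two masses on ground springs k1, k2, weakly coupled by a spring s.
k1, k2, s, m1 = 1.0, 1.0, 0.05, 1.0
for m2 in np.linspace(0.5, 2.0, 7):
    M = np.diag([m1, m2])
    K = np.array([[k1 + s, -s],
                  [-s, k2 + s]])
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))   # squared natural frequencies
    print(round(float(m2), 2), np.round(np.sort(np.sqrt(w2.real)), 4))
# Near m2 = m1 the two frequencies approach to within a gap set by s,
# then veer apart: the repulsion occurs along the imaginary (frequency) axis.
```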
Evaluation by characteristic polynomial plane
4.1 Geometrical representation
In a vibration system including friction, the eigenvalues shifting from stable to unstable apparently behave similarly to curve veering. Meanwhile, in a complex eigenvalue analysis by FEM with many degrees of freedom, the eigenvalues transition in various ways and by various amounts, and the factors have not been sufficiently considered. One presumed factor is that the margin against coupling in the stable state is not sufficiently expressed. Therefore, instead of dealing with the characteristic equation directly to find the eigenvalues, the characteristic polynomial, which is the element of the characteristic equation obtained from the equation of motion, is processed. First, the characteristic polynomial is derived for the two-degree-of-freedom model.
For the equation of motion of Eq. (24) above, the translational velocity and rotational velocity are defined as in Eq. (48). The characteristic polynomial p(λ) is formulated as Eq. (55), where λ is the eigenvalue; the expression is expanded to obtain Eq. (56).
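In the standard form for a 2 × 2 second-order system, these equations read (a hedged reconstruction, with the coefficients a_i functions of the matrix entries of Eqs. (27)-(34)):

$$p(\lambda) = \det\!\left(M\lambda^2 + C\lambda + K\right) \quad (55)$$
$$p(\lambda) = a_4\lambda^4 + a_3\lambda^3 + a_2\lambda^2 + a_1\lambda + a_0 \quad (56)$$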
Next, the eigenequation is regarded as requiring both the real and imaginary parts of the characteristic polynomial to be zero. The real part of the characteristic polynomial obtained by substituting an arbitrary complex number R + jI for the eigenvalue λ is shown in Eq. (57), where R is the real part of the eigenvalue and I is the imaginary part. Similarly, the imaginary part is shown in Eq. (58).
In the following, for simplicity, the equations are treated without damping. Equation (57) is a quartic function, which becomes an even function usually having three extreme vertices. Equation (58) becomes zero when R = 0 or I = 0. In the case of a stable vibration phenomenon with no damping, the eigenvalues are always pure imaginary numbers. Consequently, the eigenvalues should be obtained when Eq. (57) becomes zero with R = 0. To find the I at which the real part of the characteristic polynomial becomes zero, the polynomial is illustrated geometrically to facilitate understanding. In the stable state, the response surface formed by the real part of the characteristic polynomial is shown in Fig. 8. Because the characteristic polynomial is a complex quartic function, it has an axisymmetric shape with the characteristics of an even function.
Next, Fig. 9 shows overhead views of the response surfaces when the friction coefficient is changed. In Fig. 9(a), the eigenvalues exist at the points where the response surface intersects the imaginary axis. In the stable two-degree-of-freedom system, four pure imaginary eigenvalues are obtained. Of these four, attention is paid to the two eigenvalues with positive imaginary parts, which represent the vibration. These two eigenvalues appear at the boundary of the valley where the response surface lies below the zero plane. Figure 9(b) shows the result for unstable vibration when the friction coefficient is 0.6. The valley of the response surface rises above the zero plane and is no longer divided in the imaginary-axis direction. The eigenvalues appear in the constriction, changing from pure imaginary numbers to complex numbers.
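The response surfaces of Figs. 8-9 can be reproduced numerically by evaluating the characteristic polynomial over a grid of complex λ = R + jI. The sketch below reuses the illustrative matrices from the earlier eigenvalue sketch, not the paper's identified parameters.

```python
import numpy as np

# Evaluate Re p(lambda), with p(lambda) = det(M*l**2 + C*l + K), over the complex plane.
M = np.diag([1.0, 1.0])
C = np.zeros((2, 2))
mu = 0.3
K = np.array([[3.0, -2.0 * mu],
              [2.0 * mu, 2.0]])

R, I = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-3, 3, 201))
lam = R + 1j * I
p = np.zeros_like(lam)
for i in range(lam.shape[0]):
    for j in range(lam.shape[1]):
        p[i, j] = np.linalg.det(M * lam[i, j]**2 + C * lam[i, j] + K)
# The eigenvalues lie where p.real and p.imag vanish simultaneously; plotting
# p.real as a surface over (R, I) reproduces the quartic surface of Figs. 8-9.
```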
The points where the imaginary axis intersects the response surface, at the boundary of a downwardly convex valley, are the stable eigenvalues. Thus, the shape of the response surface, particularly the height and position of its vertices, directly determines the values of the eigenvalues.
A geometric interpretation of the eigenvalue determination by the response surface is shown in Fig. 11. As the friction coefficient increases, the height of the peaks formed by the surface gradually decreases relative to the horizontal plane on which the real part of the characteristic polynomial is zero. When the saddle of the surface drops below the horizontal plane, the eigenvalues change from pure imaginary numbers to complex numbers. The eigenvalue is determined by the relative height between the response surface and the horizontal plane. Here, assume instead that the relative height of the horizontal plane changes with the response surface as the reference; increasing the friction coefficient can then be regarded as lowering the horizontal plane. The pair of eigenvalues behaves like two balls rolling down from two high peaks, and the eigenvalues are seemingly determined by where they touch the zero plane. In the case of unstable vibration, the balls reach the saddle of the response surface before reaching the zero plane, and the direction of the transition shifts by 90°, from the imaginary axis to the real axis. They move as if repelling in the real-axis direction, and as a result, the eigenvalues become complex numbers. When the damping coefficients are zero, the response surface is symmetric with respect to both the imaginary axis and the real axis, and the eigenvalues are likewise symmetric with respect to the imaginary axis.
4.2 Effects of parameter changes on response surfaces
In the previous section, the positions of the eigenvalues as specified by the response surface were clarified: the form of the response surface determines whether the vibration system is stable or unstable. In this section, the effects of the parameters of the equation of motion, such as mass, damping, stiffness, and friction coefficient, on the shape of the response surface are evaluated. This evaluation helps clarify how the values of the eigenvalues are determined. First, to facilitate the evaluation, the damping coefficients of the vibration system are set to zero. Because the eigenvalues lie on the imaginary axis during stable vibration, cross sections along the imaginary axis indicate the influence of each parameter. Damping is then taken into consideration, and overhead views of the response surfaces are examined to determine its influence.
In Table 1, representative parameters such as the friction coefficient μ, mass M, and stiffness kT are given. For a qualitative evaluation, the response surface needs to be changed boldly; therefore, the range of each parameter is varied widely, ignoring general physical characteristics. Figure 12 shows the transition of the cross sections when the coefficient of friction μ is changed. Increasing μ raises the cross section in the direction of the real axis. The eigenvalues are the intersections of the cross section with the imaginary axis. When the friction coefficient is small, the separation between the eigenvalues is wide; conversely, the separation decreases as the friction coefficient increases. This parameter study shows how a large friction coefficient causes the eigenvalues to leave the imaginary axis, changing from pure imaginary to complex numbers and leading to unstable vibration. From another viewpoint, the system becomes unstable when the vertex moves from the negative to the positive side of the characteristic-polynomial real part. Because the vertex position is unknown from eigenvalue analysis alone, it is not known how stable the initial vibration system is with respect to changes in the friction coefficient. If information such as whether the slopes on either side of the convex part are gentle or steep, and how stably the vertex is positioned, were available, there would be no need for parametric studies of the eigenvalue analysis results. Figure 13 shows the transition of the cross sections when the mass M is changed without friction. As the mass increases, the convex parts of the cross section shift toward the origin of the imaginary axis, indicating that increases in the modal mass decrease the absolute values of the imaginary parts of the eigenvalues. In addition, in this process, the real parts of the vertices first increase and then decrease. Characteristically, the cross sections have a fixed point that does not depend on the size of the mass M. When the apex of the convex portion coincides with the fixed point, the cross section is at its uppermost position; in that case, a slight increase in the friction coefficient moves the apex to the positive side of the real axis. That is, the point at which the eigenvalue separation becomes zero is a fixed point, and unstable vibration occurs easily there. In this geometrical representation, the vertex position is clear in advance. When it is desired to move the vertex efficiently to the stable side, whether the mass should be increased or decreased can be determined from the positional relationship between the vertex and the fixed point. Even in a parameter study using eigenvalue analysis, the interval between eigenvalues can be expanded, but the stability margin associated with that interval is not known, and measures for expanding the interval uniformly are insufficient. If this curved surface is utilized, the tolerance of the eigenvalue separation to destabilization can be obtained.
Fig. 11 (caption): Geometrical interpretation by the response surface. Assuming that the height of the plane on which the real part of the characteristic polynomial is zero is changed relatively, the transition of the eigenvalues, likened to balls, is shown. When the plane is relatively lowered, the balls fall below the saddle portion of the surface and therefore acquire real parts.
The fixed point is the point where curve veering occurs; since no frictional force is generated in this case, it appears at the zero point in the real-part direction of the characteristic polynomial. In other words, as the mass is changed, the eigenvalues approach each other and rebound in the imaginary direction after coming closest at the curve veering point. If a frictional force is applied to this vibration system, the eigenvalues will likely repel in the direction of the real axis, as shown in Fig. 13. Therefore, curve veering in stable vibration is similar to modal coupling in unstable vibration, except that the direction of the rebound is the imaginary axis rather than the real axis. Figure 14 shows the transition of the cross sections when the stiffness kT is changed. Contrary to the case of increasing mass, the convex portion moves away from the origin. In addition, the convex portions turn from rising to falling and have a fixed point at the highest position. Thus, mass and stiffness have similar properties, and for each there is a specific value that makes the vibration system most unstable. Therefore, adjusting the mass and stiffness so that the eigenvalues move away from the fixed point leads to brake squeal countermeasures. As in the case of the mass change, the stability margin can be known by evaluating the shape of the curved surface.
To summarize the above, focusing on the vertices of the cross sections, the values of the real parts of the characteristic polynomial at the vertices can express not only the degree of instability but also the degree of stability. When the friction coefficient increases without damping, the real parts of the characteristic polynomial at the vertices change from negative to positive. The evaluation method using the real part of the characteristic polynomial is compared with eigenvalue analysis in Fig. 15, which shows the transition of the eigenvalues and the transition of the real part of the characteristic polynomial at the vertex as the friction coefficient changes. In Fig. 15(b), the vertex shows a consistent transition before and after the coupling; therefore, stability and instability can be expressed on a common, relative scale.
Fig. 12 (caption): Response cross sections when the friction coefficient μ is increased. Ignoring real physical properties, 11 levels of cross section with −5 ≤ μ ≤ 5 show a relatively monotonic change.
Fig. 14 (caption): Response cross sections when the stiffness kT is increased, drawing nine levels of cross section with 10^5 ≤ kT ≤ 10^9. The increase in stiffness expands the overall unevenness. Yellow points are veering points, which are fixed points.
Fig. 13 (caption): Response cross sections when the mass M is increased, drawing nine levels of cross section with 2.0 ≤ M ≤ 3.0. The increase in mass reduces the overall unevenness. Yellow points are veering points, which are fixed points.
Fig. 15 (caption): Difference between the transition of the eigenvalues and the transition of the vertex of the real part of the characteristic polynomial. The friction coefficient increases from 0 to 0.2 in 0.05 steps. (a) Transition of the eigenvalues; the direction of the transition changes before and after the coupling. (b) The vertex rises steadily. Thus, the vertex can be regarded as a surrogate for the difficulty of the coupling.
The next study concerns the change of the response surface caused by an increase in the damping coefficient. A previous study reported that the transition of eigenvalues including damping is complicated (Fritz et al., 2007). When the system is unstable with small damping, the paired eigenvalues lie in the constriction of the response surface, straddling the imaginary axis; this is equivalent to the earlier undamped case in Fig. 9. Figure 16 shows an overhead view of the response surface with large damping. As the damping increases, the symmetry of the response surface with respect to the imaginary axis is lost, the surface on the negative side of the real axis becomes lower, and the area above the zero plane decreases. Furthermore, the center of the constricted part shifts to the negative side of the real axis. Thus, the overhead view reveals geometrically why the real parts of the eigenvalues become negative and the system stabilizes. In general, when damping is applied to a vibration system, the real parts of the eigenvalues shift to the negative side. In coupled vibration with damping, the imaginary parts of the eigenvalues take different values, and the system can become unstable under inflowing friction force even if the eigenvalue imaginary parts do not match. Evaluating this curved surface explains how a vibration system with damping becomes unstable even when the eigenvalue imaginary parts do not coincide; such an interpretation is difficult to derive from eigenvalue analysis. The main transitions for the above parameters are summarized in Table 2. This study geometrically reveals that the transition changes depending on whether the eigenvalues are larger or smaller than the fixed points. One countermeasure against brake squeal is to increase the separation of the coupling eigenvalue pairs by changing the mass and stiffness. This is attempted for multiple-degree-of-freedom systems; however, it often fails to obtain the desired amount of expansion. The response surface provides guidance for such improvement studies.
Fig. 16 (caption): Top view of the response surface when the damping coefficient is increased, with the damping coefficient = 10^3. Red points are the eigenvalues calculated from eigenvalue analysis. The symmetry of the response surface with respect to the imaginary axis is lost.
Table 2: Transition of the response surface for each parameter.
Parameter: Transition of response surface
Friction coefficient: Raises the convex portions of the response surface.
Mass: The convex parts shift toward the origin along the imaginary axis; highest at a fixed point.
Stiffness: The convex portions move away from the origin; highest at a fixed point.
Damping: The response surface shifts in the real-axis direction; symmetry is lost.
Conclusion
In this research, the trends of the eigenvalue transitions were investigated using the complex response surface, which represents the real part of the characteristic polynomial, for friction-induced vibration in a low-degree-of-freedom system. It was revealed geometrically how each parameter constituting the equation of motion is related to the shape of the response surface that determines the eigenvalues. The real part of the characteristic polynomial forms a quartic surface extended over the complex plane. These surfaces are useful for geometrically indicating the locations of the eigenvalues obtained by eigenvalue analysis, and the shape of the curved surface is effective for estimating how an eigenvalue changes with a change of parameter. The stability of the vibration system can be judged by how far the vertex of the curved surface lies from the zero plane, and the ease of destabilization can be estimated by grasping the vertex position and the separation between eigenvalues. In eigenvalue analysis, an eigenvalue is naturally obtained, but the vertex position cannot be. If one likens the eigenvalues to balls rolling on a curved surface, mode coupling and curve veering can be regarded as equivalent phenomena. Evaluating the results of the widely used FEM complex eigenvalue analysis is a difficult task that relies only on the positions of the intersections of the curved surface with the zero plane, and the eigenvalue transition leading to coupled vibration is not sufficiently considered there. Adopting the present evaluation suggests how the eigenvalues are affected by each parameter, and it is hoped that this will lead to effective measures for suppressing brake noise.
In the future, this research should be applied to the transition of the complex eigenvalues of FEM models with many degrees of freedom, with the factors clearly understood and verified by experiments. In addition, effective countermeasures against brake squeal for actual brake assemblies should be formulated. Squeal suppression by optimization techniques for complex eigenvalue analysis has been studied (Matsushima et al., 2014) (Inoue et al., 2015); however, the mechanism behind the optimal solutions was not revealed. The present evaluation should physically explain the factors behind the optimal shapes.
"year": 2020,
"sha1": "8f4f0a1138167fce6b1f548d50afd3772c47284a",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jamdsm/14/1/14_2020jamdsm0014/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d76fc2b8e169dc6a978e67eec5f18f618b6fae60",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Mental and physical training with meditation and aerobic exercise improved mental health and well-being in teachers during the COVID-19 pandemic
Teachers face significant stressors in relation to their work, placing them at increased risk for burnout and attrition. The COVID-19 pandemic has brought about additional challenges, resulting in an even greater burden. Thus, strategies for reducing stress that can be delivered virtually are likely to benefit this population. Mental and Physical (MAP) Training combines meditation with aerobic exercise and has resulted in positive mental and physical health outcomes in both clinical and subclinical populations. The aim of this pilot study was to evaluate the feasibility and potential effectiveness of virtual MAP Training on reducing stress and improving mood and well-being in teachers. Participants (n = 104) were recruited online from kindergarten to grade twelve (K-12) schools in the Northeastern region of the United States and randomly assigned to a 6-week program of virtual MAP Training (n = 58) or no training (n = 13). Primary outcomes included pre-intervention and post-intervention ratings on self-report measures of social and emotional health. Changes in cognitive functioning and physical health were also examined in secondary analyses. By intervention end, participants in the MAP Training group reported less anxiety and work-related stress compared to those who received no training (ds = −0.75 to −0.78). Additionally, MAP Training participants reported improvements in depressive symptoms, rumination, work-related quality of life, perceived stress, and self-compassion (ds = 0.38 to −0.82), whereas no changes were observed in the no training group. Participants also reported increased subjective ratings of executive functioning, working memory, cognitive flexibility, and fewer sleep disturbances (ds = −0.41 to −0.74). Together, these results suggest that the combination of meditation and aerobic exercise is an effective virtual intervention for improving mental health and well-being among K-12 teachers and may enhance resilience to stressful life events such as occurred during the coronavirus pandemic.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic upended education systems nationwide and created a uniquely stressful and demanding situation for teachers. Teaching has long been recognized as a high-stress profession, with 46% of teachers reporting high daily stress during the school year (Gallup, 2014). Teacher stress has been linked to high job demands (McCarthy, 2019), as educators struggle to balance pressures from administrators, students, and parents. Additional sources of stress include a perceived lack of support, poor working conditions, and student misbehavior (Shernoff et al., 2011;Richards, 2012). Together, these factors contribute to low job satisfaction (Liu and Ramsey, 2008;McCarthy, 2019), reduced occupational commitment (McCarthy, 2019;Fitchett et al., 2021), and high rates of attrition (Boe et al., 2008;Conley and You, 2009). Teachers are also likely to experience workplace fatigue (Fitchett et al., 2021) and burnout (Haberman, 2005;Bottiani et al., 2019) as a result of work-related stress. Moreover, job stress has been associated with mental health symptoms including anxiety, depression, and somatization (Godin et al., 2005;Mark and Smith, 2012), as well as physical health effects such as increased disease risk, weight gain, and poor sleep (Bosma et al., 1998;Kivimäki et al., 2006;Knudsen et al., 2007).
The onset of the COVID-19 pandemic exacerbated some of the mental health outcomes associated with this high-stress occupation. For example, increases in anxiety and depressive symptoms following the onset of the pandemic have been reported globally (Ciacchella et al., 2022). Early studies suggest a high percentage of educators have experienced significant distress (Aperribai et al., 2020; Ozamiz-Etxebarria et al., 2021) as well as reduced quality of life (Lizana et al., 2021). Moreover, teachers have reported moderate levels of secondary traumatic stress (i.e., avoidance, intrusion, arousal; Anderson et al., 2021). Importantly, concerns about health and safety, teaching demands, parent communication, and administrative support were identified as significant predictors of teacher burnout (Pressley, 2021). Further, rates of teacher attrition are projected to increase, with COVID-19 cited among the top reasons teachers chose to leave the profession in 2020 (Diliberti et al., 2021). Thus, the COVID-19 outbreak placed teachers in critical need of mental health support.
Mindfulness is often defined as becoming aware of what you are sensing and feeling in the present moment, without judgment or interpretation, and is usually practiced during sitting and/or breathing meditation. In recent years, there has been growing interest in incorporating mindfulness training with meditation into schools. Much of this work has focused on students rather than teachers, with modest increases in student learning, cognition, and psychological well-being [see Zenner et al. (2014) and Carsley et al. (2018) for review]. Meta-analyses suggest that mindfulness-based interventions may provide additional benefit for teachers (Klingbeil and Renshaw, 2018; Zarate et al., 2019). For example, studies report large increases in self-compassion in teachers (Roeser et al., 2013; Frank et al., 2015), along with moderate decreases in stress and anxiety, small but significant improvements in depression and burnout (Zarate et al., 2019), and medium effects overall (Klingbeil and Renshaw, 2018). Positive effects on physical health have also been reported in teachers, such as improvements in sleep quality and decreases in fatigue (Crain et al., 2017; Dave et al., 2020). Teachers who engaged in the most established meditation-based intervention, Mindfulness-Based Stress Reduction (MBSR), reported increases in mindfulness and sustained attention (Flook et al., 2013). Thus, mindfulness training through meditation may be an effective tool for reducing stress and improving health and well-being among teachers, especially while living through a stressful life event as occurred during the coronavirus pandemic.
Aerobic exercise is cardiovascular activity achieved through a large increase in heart rate, usually corresponding to a nearly two-fold increase from the rate at rest. As a result, more oxygenated blood is distributed throughout the body, with a large percentage (∼20%) of it reaching the brain. The benefits of aerobic exercise are widespread, including changes in coronary blood flow, sleep quality, reductions in blood pressure, and systemic inflammation (e.g., Pratley et al., 2000; Hamer et al., 2006; Warburton, 2006; Ismail et al., 2012; Passos et al., 2012; Zheng et al., 2019). But more germane to the present study, regular aerobic exercise is linked to less depression and anxiety (Shamus and Cohen, 2009; Basso and Suzuki, 2017) and greater quality of life (e.g., Pang et al., 2013; Wu et al., 2020). Moreover, exercise is associated with small, but significant, improvements in attention, executive functioning, processing speed, and memory (Smith et al., 2010; Basso and Suzuki, 2017). Whereas a plethora of school-based exercise programs have been developed to promote student engagement, few have examined the potential benefit of exercise interventions for teachers. Abós et al. (2021) investigated the effects of a 16-week physical activity program consisting of playful, strength, aerobic, and back-pain prevention exercises. Teachers who participated in the program reported significant improvements in work-related outcomes such as work satisfaction, vigor, and absorption in comparison to controls (Abós et al., 2021). These findings, together with substantial evidence of mental and physical health benefits, suggest that exercise interventions may promote positive outcomes in teachers.
In this pilot study, we delivered a brain fitness program that combines mental training with meditation and physical training with aerobic exercise to teachers during the COVID-19 pandemic. The program, known as MAP Training, includes 30 min of focused-attention and slow-walking meditation, both done in complete silence. These activities are immediately followed with 30 min of aerobic exercise (Shors et al., 2014; Shors, 2021). In clinical and subclinical populations, MAP Training has yielded positive effects on mental health with decreases in depression, rumination, and post-traumatic thoughts, along with an increase in quality of life (Shors et al., 2014, 2017, 2018; Alderman et al., 2016; Lavadera et al., 2020). In addition, studies suggest an increase in the volume of oxygen consumption as measured with VO2 (Shors et al., 2014), synchronous brain activity as measured with electroencephalography (EEG) during cognitive control (Alderman et al., 2016), and discrimination learning during a pattern separation task associated with neurogenesis in the adult hippocampus (Millon et al., 2022). Additionally, engaging in the combination of mental and physical training activities has been shown to be especially effective when compared to engaging in one activity on its own (Shors et al., 2018).
During the coronavirus pandemic, there was increased need for interventions and exercise programs that could be delivered and practiced online through virtual mechanisms such as Zoom.
To meet this need, we evaluated the feasibility of virtual MAP Training on reducing stress and improving psychological, cognitive, and health outcomes in primary and secondary school teachers who were living through the COVID-19 pandemic.
Participants
Participants included K-12 (kindergarten through grade 12) educators in schools in the states of New York, New Jersey, Connecticut, and Pennsylvania, given that the impact of COVID-19 was similar among these regions (i.e., containment strategies, case counts). Subjects were recruited in three waves (from June 2020 to July 2020) through flyers distributed to area school administration (e.g., principals, assistant principals) and social media (i.e., Facebook) advertisements. Interested individuals with a physical health condition that may contraindicate vigorous exercise (e.g., history of heart disease, stroke, cardiac arrhythmia, uncontrolled asthma, severe joint problems) were excluded from study participation. A computer-generated randomization sequence was used to assign participants to intervention (MAP Training) and waitlist control (No Training) groups using a ratio of 4:1 to obtain a sufficient sample size to test for treatment effects in the MAP Training group. One subject who expressed interest in participating in the study but was unable to attend the MAP Training sessions was thus assigned to the No Training group. The protocol was approved by the Rutgers IRB (Pro2020001365) and electronic informed consent was obtained for each subject prior to participation and reaffirmed at each assessment timepoint.
Intervention
Mental and Physical Training combines mental training with meditation and physical training with aerobic exercise (Shors et al., 2014; see Figure 1). This program was delivered online via the Zoom platform. First, participants were presented with a "brain bit," which was a short piece of information about the brain to keep them engaged and motivated. Then, they watched and listened as the facilitator instructed them on how to set up the meditation activities and then engaged in the activities along with the participants. The mental training component consisted of 20 min of focused-attention (FA) meditation while sitting in silence, followed by 10 min of walking meditation, again in silence. During the FA meditation, participants were instructed to breathe naturally while bringing their full attention to their breath. They were told to notice the short space between the out-breath and the in-breath (i.e., SIT; Figure 1). Participants were instructed to count the space between each breath, beginning with one and continuing until they lost count. Should their attention wander, they were to acknowledge their thoughts without judgement and return their attention to counting the space between each breath, beginning again with one. A timer was set to ring after 20 min, at which point participants were told to stretch out their legs before standing up. Once they felt ready, they were asked to stand up for the 10 min of walking meditation. During this part of the intervention, participants were instructed to clasp their hands loosely behind their backs and maintain their gaze at the floor ∼3 feet in front of them. They were to focus their attention on their feet while they walked a circular path at a very slow pace, noticing how the weight of the body changes with each step and how the bottoms of the feet touch the floor (i.e., WALK; Figure 1). As during sitting meditation, participants were instructed to maintain attention on their feet as they walked until they lost concentration, at which time they were to recognize that they had lost their focus of attention and return it to the feet. Again, a timer was set to ring after 10 min. Next, the participants prepared themselves for the physical training component, which consisted of 30 min of moderate-intensity aerobic exercise (i.e., SWEAT; Figure 1). Participants began the exercise component with a 5-min warm-up. Next, they were led through a choreographed aerobic exercise routine to popular music. Each session incorporated 9-10 tracks which were rotated in and out each week. The session concluded with a 5-min cool down. Each session was approximately 1 h.
Figure 1 caption: One session of MAP Training begins with 20 min of silent focused-attention meditation (SIT), followed by 10 min of silent slow-walking meditation (WALK), and ends with 30 min of aerobic exercise (SWEAT).

Before training, participants were instructed to take their own heart rate by pressing a finger against the side of the neck, and in all sessions they were asked to gauge their heart rate to ensure, to the extent possible, that exercise was performed at a moderate level of intensity. Approximately 20 min into the physical exercise component of each MAP Training session, subjects were directed to count their pulse over a 10 s period and then multiply the value by six. The aerobic range is generally defined as lying between 60 and 80% of a participant's maximum heart rate, which is calculated by subtracting their age from 220. For most participants, the aerobic range was greater than 90 beats per minute but less than 140.
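The target-range arithmetic described above amounts to the following minimal sketch (the function names are ours):

```python
def aerobic_range_bpm(age):
    """60-80% of the age-predicted maximum heart rate, 220 - age."""
    max_hr = 220 - age
    return 0.6 * max_hr, 0.8 * max_hr

def hr_from_10s_pulse(count):
    """Heart rate from a 10-second pulse count, multiplied by six."""
    return count * 6

low, high = aerobic_range_bpm(40)   # a 40-year-old: 108.0-144.0 bpm
```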
Measures
Primary outcomes
Depressive symptoms
The depression module of the Patient Health Questionnaire (PHQ-9; Spitzer, 1999) was used to evaluate the impact of MAP Training on teachers' ratings of mood symptoms. The PHQ-9 is a self-report questionnaire consisting of nine items assessing DSM-IV criterion A for a major depressive episode. Participants were asked to rate the frequency of each symptom over the past 2 weeks using a four-point Likert scale ranging from 0 (not at all) to 3 (nearly every day). Total scores can be classified as minimal (0-4), mild (5-9), moderate (10-14), moderately severe (15-19), and severe (20-27). Scores greater than 15 indicate a likely major depressive disorder diagnosis (Kroenke et al., 2001), and clinically significant change is indicated by a total score reduction of five or more points (Löwe et al., 2004).
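For illustration, the severity bands and the change criterion quoted above translate directly into code (a sketch; the function names are ours):

```python
def phq9_band(total):
    """Severity band for a PHQ-9 total score (0-27)."""
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

def phq9_significant_change(pre, post):
    """Clinically significant change: total score reduction of 5+ points."""
    return (pre - post) >= 5
```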
Anxiety
Anxiety symptoms were measured with the General Anxiety Disorder scale (GAD-7; Spitzer et al., 2006). The GAD-7 is a seven-item self-report measure of DSM-IV diagnostic criteria A, B, and C for generalized anxiety disorder. Subjects were asked to rate the frequency of each item over the past 2 weeks using a four-point Likert scale ranging from 0 (not at all) to 3 (nearly every day). The GAD-7 total scores can also be categorized as minimal (0-4), mild (5-9), moderate (10-14), and severe (15-21). Score changes of four points or more on the GAD-7 are considered clinically significant (Toussaint et al., 2020).
Ruminative thoughts
The Ruminative Response Scale (RRS; Nolen-Hoeksema and Morrow, 1991) was used to examine the impact of MAP Training on frequency of rumination among teachers. The RRS is a 22-item self-report measure of ruminative thinking. Rumination refers to a pattern of perseverative thinking generally focused on symptoms of distress and their possible causes and consequences and has been closely linked with poor mental health outcomes, especially depression (Nolen-Hoeksema, 1991;Nolen-Hoeksema and Morrow, 1991;Spasojević and Alloy, 2001;McLaughlin and Nolen-Hoeksema, 2011). Participants rated the frequency of ruminative behaviors over the past 2 weeks using a scale from 1 (almost never) to 4 (almost always). The original 22-item RRS used here contains three subscales: depressive, brooding, and reflective. Depressive ruminations focus on mood changes, whereas brooding ruminations tend to focus on negative self-evaluations and judgements involving blame and/or guilt. Reflective ruminations are less tied to mood and more aligned with contemplative thinking and problem solving. Both brooding and reflection have been associated with current depressive symptom severity, whereas reflection has been linked with lower risk for future depression (Treynor et al., 2003;Burwell and Shirk, 2007). However, regardless of subscale, most previous studies with MAP Training report decreases, including participants from clinical and nonclinical populations (Alderman et al., 2016;Shors et al., 2017;Shors et al., 2018;Lavadera et al., 2020;Shors, 2021;Millon et al., 2022).
Perceived stress
Subjective evaluations of stress have been associated with severity of depressive symptoms, anxiety symptoms, and experiences of stressful life events (Cohen et al., 1983;Otto et al., 1997). The Perceived Stress Scale (PSS-10; Cohen et al., 1983) was administered to examine the effects of MAP Training on teachers' experience of stress. The PSS consists of 10 items that assess the degree to which an individual perceives their life to be unpredictable and uncontrollable. Participants were asked to report on their thoughts and feelings over the previous 2 weeks using a five-point Likert scale with scores ranging from 0 (never) to 4 (very often).
Quality of life
The Professional Quality of Life Scale (ProQOL; Stamm, 2010) was used to assess the impact of MAP Training on teachers' quality of life. It has been suggested that in caring for people who have experienced stressful events, caregivers (helpers, etc.) are also at risk of developing stress-related symptoms (Stamm, 1995). The ProQOL is a 30-item self-report questionnaire consisting of three subscales: compassion satisfaction (the converse of compassion fatigue), burnout, and secondary traumatic stress; but given that each subscale is psychometrically unique, capturing both positive and negative outcomes of helping professions, a total score is not recommended (Stamm, 2010). Participants rated the frequency of each experience over the past 2 weeks using a five-point Likert scale ranging from 1 (never) to 5 (very often).
Self-compassion
The Self-Compassion Scale Short Form (SCS-SF; Raes et al., 2011) was administered to evaluate the effects of MAP Training on teachers' subjective ratings of self-compassion, which is conceptualized as an openness to and non-judgmental understanding of one's own pain, inadequacies, and failures (Neff, 2003). Participants responded to 12 items assessing frequency of self-compassion over the previous 2 weeks on a five-point Likert scale ranging from 0 (almost never) to 5 (almost always).
Distress tolerance
Distress tolerance relates to an individual's ability to tolerate negative emotional states (Leyro et al., 2010). Low distress tolerance has been associated with the development of mental health problems, such as anxiety (Keough et al., 2010), depression (Lass and Winer, 2020), and posttraumatic stress symptoms (Vujanovic et al., 2011). The Distress Tolerance Scale (DTS; Simons and Gaher, 2005) was administered to examine the effects of MAP Training on teachers' ability to tolerate negative emotional or aversive states (i.e., distress; Leyro et al., 2010). The DTS is a 15-item self-report questionnaire assessing an individual's perceived ability to tolerate emotions, appraisal of distress, absorption by negative emotions, and regulation of emotions. Participants evaluated their present abilities to tolerate distress using a five-point Likert scale from 1 (strongly agree) to 5 (strongly disagree).
Mental and physical health questionnaire
The MAP Health Questionnaire was included in the assessment battery to further evaluate the potential effectiveness of MAP Training on overall mood and well-being. The questionnaire comprises 20 items derived from four aspects of mental health often measured by existing self-report instruments, including those that assess posttraumatic thoughts (Posttraumatic Cognitions Inventory [PTCI]; Foa et al., 1999) and depressive symptoms (Beck, 1961; Beck et al., 1996) (five items). Prior studies of MAP Training in distressed populations indicate that these symptoms are closely linked at baseline and improve with training (i.e., Shors et al., 2018; Shors, 2021; Millon et al., 2022).
Secondary outcomes
Executive function
Executive functions are a category of mental skill processes that include working memory, cognitive flexibility, and self-control of behavior. The Adult Executive Functioning Inventory (ADEXI; Holst and Thorell, 2018) was used to assess perceived changes in these skills. The ADEXI consists of 14 items that comprise subjective estimates of working memory and inhibition. Participants rated their level of agreement with each item over the previous 2 weeks using a five-point Likert scale ranging from 1 (Definitely not true) to 5 (Definitely true).
Cognitive flexibility
The impact of MAP Training on subjective estimates of cognitive flexibility was assessed using the Cognitive Flexibility Inventory (CFI; Dennis and Vander Wal, 2010). The CFI is a 20-item self-report instrument consisting of two factors. The Control factor evaluates the extent to which an individual perceives a difficult situation as controllable, and the Alternatives factor measures an individual's ability to generate multiple explanations and solutions for difficult situations. Participants reported on their cognitive flexibility over the previous 2 weeks using a seven-point Likert scale from 1 (Strongly disagree) to 7 (Strongly agree).
Physical health
The Patient Health Questionnaire (PHQ-15) is a self-administered scale for evaluating somatic symptom severity. The PHQ-15 assesses 15 of the most commonly reported somatic complaints in primary care settings (Kroenke, 2003), including gastrointestinal, musculoskeletal, pain, and fatigue symptoms. Participants rated the degree to which they were bothered by each symptom over the previous 2 weeks using a three-point Likert scale ranging from 0 (Not at all) to 2 (Bothered a lot). Total scores can be classified as low (0-5), moderate (6-10), and high (11-15) levels of symptom severity.
Sleep quality
The Pittsburgh Sleep Quality Index (PSQI; Buysse et al., 1989) is a commonly used and well-validated research tool for assessing sleep quality (Mollayeva et al., 2016). An abbreviated version of the full scale, the short PSQI (sPSQI), containing 13 of the original 19 items, has been developed in an effort to reduce participant burden and increase research utility (Famodu et al., 2018). The sPSQI assesses five components of sleep quality. Sleep latency, sleep duration, and sleep efficiency scores are based on reported bedtime, sleep time, wake time, and rise time in the past 2 weeks, while the sleep disturbances and daytime dysfunction components are rated in terms of frequency from 0 (Not in the past 2 weeks) to 3 (Three or more times a week). To facilitate scoring, participants were asked to select categorical responses for each of the 13 items.
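The time-based components can be illustrated with the conventional PSQI definition of sleep efficiency, namely reported sleep time divided by time in bed; this is a rough sketch of that convention, not the sPSQI's official scoring code, and the function name is ours:

```python
from datetime import datetime, timedelta

def sleep_efficiency(bedtime, rise_time, sleep_hours):
    """Sleep efficiency (%) = reported sleep time / time in bed * 100."""
    time_in_bed = rise_time - bedtime
    if time_in_bed < timedelta(0):        # bedtime-to-rise spans midnight
        time_in_bed += timedelta(days=1)
    return 100.0 * sleep_hours / (time_in_bed.total_seconds() / 3600.0)

# e.g., in bed 23:00-07:00 with 6.5 h of reported sleep -> 81.25%
eff = sleep_efficiency(datetime(2021, 1, 1, 23, 0),
                       datetime(2021, 1, 1, 7, 0), 6.5)
```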
Pre-intervention assessment
All participants completed the initial (pre-intervention) assessment within 1 week prior to the start of the MAP Training sessions. Surveys were administered electronically (i.e., Qualtrics). A unique link was generated for each subject ID and distributed to participants via email. The pre-intervention assessment included a sociodemographic and health questionnaire and battery of self-report measures, described above. After completion of the pre-intervention assessment, subjects received a $20 Amazon e-gift card as compensation for their time and participation.
Mental and physical training group
Live MAP Training sessions were delivered virtually through the Zoom platform. Recorded MAP Training sessions were accessible with a private YouTube link. Participants in the MAP Training group were asked to engage in one live MAP Training session and one recorded session each week for 6 weeks. Thus, the MAP Training program consisted of two 1-h sessions per week over 6 weeks (12 sessions total). The virtual sessions were led by Dr. Tracey Shors, who developed the MAP Training program (described above). Prior to the initial live session participants were provided with a 30-min video introduction to MAP Training. To maintain participant confidentiality, attendee information (i.e., names, videos) was disabled during the Zoom sessions. At the conclusion of each live MAP Training session, participants were asked to complete a brief Qualtrics survey to assess adherence, which included a multiple-choice question about the content of the live session. We used this to gauge their attendance. We also asked them to report their maximal heart rate during the aerobic exercise component, and indicate their level of engagement (i.e., not very [0-25%], somewhat [26-50%], moderately [51-75%], very [76-90%], fully [>90%]). The survey also asked them whether and if so, how long they had engaged in similar activities during the week and outside of live MAP Training sessions. The survey also asked whether they had completed the weekly recorded session.
No training group
Participants allocated to the No Training group did not partake in MAP Training sessions over the course of the study, but were asked each week to report their engagement in meditation and physical activity in a Qualtrics survey. They were provided with unlimited access to six recordings of MAP Training sessions at the end of the study.
Post-intervention assessment
Within 1 week of the last session, participants in both groups completed the post-intervention assessment consisting of the same set of self-report measures as the pre-intervention assessment (described above). Surveys were distributed electronically and accessed via a unique Qualtrics link.
Data analysis
Analyses were conducted using SPSS Version 27 (IBM Corp, 2020). Between-group (MAP Training, No Training) differences on baseline characteristics (i.e., sociodemographic variables [age, race, ethnicity, sex, educational attainment], health history, teaching experience, engagement in mindfulness, and exercise activities) were examined with independent samples t-tests and Pearson's chi-square tests for independence, where appropriate. An a priori power analysis indicated that a total sample size of 99 was necessary to detect large (f = 0.50) within-between interaction effects with 90% power (α = 0.05). As a result, the target sample size was 80 for the MAP Training group and 20 for the No Training group (4:1 ratio).
A repeated-measures multivariate analysis of variance (MANOVA) tested between-group differences in primary (i.e., psychosocial) and secondary (i.e., cognitive, health) outcomes at pre- vs. post-intervention. Data were assessed for multivariate normality, homogeneity of covariance matrices, and multicollinearity. Significant univariate and multivariate interactions were followed with post hoc analyses: group differences on outcome measures at the pre-intervention and post-intervention timepoints were assessed with independent samples t-tests, whereas within-group changes were tested with paired samples t-tests. In both sets of analyses, a False Discovery Rate (FDR; Benjamini and Hochberg, 1995) correction (α = 0.05) was applied. Data were further explored with pairwise comparisons, corrected for multiplicity.
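The Benjamini-Hochberg step-up rule behind the FDR correction can be sketched from scratch as follows; library routines such as statsmodels' multipletests with method='fdr_bh' implement the same rejection decisions:

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg: reject H_(1..k) for the largest k with
    p_(k) <= (k / m) * alpha, where p_(1) <= ... <= p_(m)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest qualifying rank (0-based)
        reject[order[:k + 1]] = True
    return reject
```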
Missing data
The person mean imputation approach was applied in cases of missing responses to questionnaire items. Reverse-scored items were recoded as needed. Imputed values were then calculated using the mean of the observed item responses for each participant. Mean scores were imputed only for cases in which less than 10% of questionnaire data were missing (i.e., questionnaires with 11 or more items).
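A sketch of this imputation rule, assuming item responses sit in a pandas DataFrame with one row per participant and reverse-scored items already recoded (function and column conventions are ours):

```python
import pandas as pd

def person_mean_impute(items: pd.DataFrame, max_missing: float = 0.10) -> pd.DataFrame:
    """Replace missing item responses with the participant's mean of the
    observed items, only for rows missing less than 10% of items."""
    out = items.copy()
    eligible = out.isna().mean(axis=1) < max_missing
    row_means = out.mean(axis=1, skipna=True)
    out.loc[eligible] = out.loc[eligible].apply(
        lambda row: row.fillna(row_means[row.name]), axis=1)
    return out
```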
Participant characteristics
Of the 104 teachers recruited, 71 completed the initial baseline assessment and were included in the data analyses. Of these participants, 58 were randomly assigned to the MAP Training group and the remaining participants to the No Training group. One participant was not randomly assigned because they could not attend the live sessions (Figure 2). Groups did not differ significantly on characteristics prior to training [e.g., age, sex, race, years teaching, experience with mindfulness (including meditation), participation in exercise]. Rates of attrition did not differ by group [χ²(1, N = 104) = 0.16, p > 0.05]. Engagement was assessed after each live session by asking for self-reported engagement and maximal heart rate. On average, participants assigned to the MAP Training group engaged in at least four of the six live sessions and completed approximately half of the six recorded sessions. Based on this information, we considered participants who attended at least four of the six live MAP Training sessions (n = 35) to be treatment-adherent. The treatment-adherent and No Training groups did not differ significantly with respect to their characteristics before training (e.g., age, sex, race, years teaching, experience with mindfulness, participation in exercise; see Table 1). Treatment-adherent and non-adherent participants did not differ on most baseline characteristics, although a greater proportion of teachers in the treatment-adherent group reported no previous experience with mindfulness programs [χ²(2, N = 58) = 9.09, p = 0.01].

Figure 2 caption: Schematic of participant recruitment, randomization, assessment, and attrition.
Group differences in primary outcomes
A one-way repeated-measures MANOVA tested for significant differences between treatment-adherent MAP Training participants and those who received no training. The multivariate group × timepoint interaction was not significant (p = 0.06). The sample size in the No Training group (n = 11) was less than the number of dependent variables in the analysis (n = 12), and thus the analysis may have been underpowered. Nevertheless, a series of one-way ANOVAs on these variables revealed a significant group × timepoint interaction on multiple variables, indicating that the MAP group demonstrated greater change than the No Training group on several outcomes, including overall mood and well-being.
Group differences in secondary outcomes
A multivariate analysis of variance was applied to identify between-group differences in pre-intervention and post-intervention scores on cognitive measures; the multivariate interaction (group × timepoint) was not significant (p > 0.05). A series of independent samples t-tests with FDR correction examined between-group differences on cognitive measures at the pre-intervention and post-intervention timepoints, consistent with the study's a priori hypotheses (Figure 6). There were no significant changes across time for teachers assigned to the No Training group. In general, subjective assessments of physical health did not change as a result of the intervention, although participants in the MAP Training group reported significantly fewer sleep disturbances [sPSQI-Sleep Disturbances; t(34) = 3.36, p = 0.01, d = 0.57] at intervention end (Figure 7).
Discussion
MAP stands for "mental and physical" and MAP Training combines mental training with meditation and physical training with aerobic exercise to improve mental and physical health (Shors et al., 2014; Shors, 2021). The program has demonstrated efficacy in a number of studies with distressed populations, including men and women diagnosed with major depressive disorder (Alderman et al., 2016; Shors et al., 2017), young adult women who have experienced sexual trauma (Shors et al., 2018), mothers who were homeless (Shors et al., 2014), medical school students (Lavadera et al., 2020), and women living with HIV (Millon et al., 2022). The purpose of this pilot study was to examine the effects of an online version of MAP Training on teacher stress and related mental health outcomes experienced during the height of the COVID-19 pandemic. The findings suggest that MAP Training was beneficial, as delivered during the first summer of the pandemic, when most teachers were out of the classroom for the school year but were preparing to either go back into the classroom in the fall semester or teach virtually.

Figure caption: There were no significant differences between pre-intervention and post-intervention scores on these same measures in the No Training group. There were also no significant differences between groups at either timepoint. Asterisks indicate significant adjusted p-values. Total scores on the RRS range from 22 to 88, RRS-Depression subscores range from 12 to 48, RRS-Brooding subscores range from 5 to 20, and RRS-Reflection subscores range from 5 to 20.

Figure caption: There were no significant changes in self-reported cognitive functioning reported by participants in the No Training group. There were also no significant between-group differences in these domains at either timepoint. Asterisks indicate significant adjusted p-values. Total scores on the ADEXI range from 14 to 70, ADEXI-WM subscores range from 9 to 45, and ADEXI-Inhibition subscores range from 5 to 20.

Teachers who participated in the MAP Training program reported sizeable improvements (i.e., ds = 0.4-0.8) in mood (i.e., depressive symptoms), along with less anxiety, fewer ruminative thoughts, less perceived stress, and more self-compassion by intervention end. In addition, those who participated reported less work-related (secondary) traumatic stress when compared to those who did not participate. These positive results stand in contrast to those reported by teachers who did not participate and were instead assigned to a waitlist, some of whom reported increases in stress-related symptoms over the same time period. Importantly, positive outcomes were observed despite the relatively brief duration of each session (1 h) and the course of the intervention (i.e., a maximum of two sessions per week over 6 weeks). Moreover, most participants did not attend all the sessions, averaging about four of six live sessions and half of the recorded ones. The "effective" level of training is not inconsistent with previous studies. For example, women with a history of sexual trauma reported fewer trauma-related thoughts and ruminations after two sessions per week over 6 weeks (Shors et al., 2018). In another study, women living with HIV reported similar outcomes after only one in-person session a week for 6 weeks (Millon et al., 2022). However, these findings do stand in contrast to those following many exercise-related interventions, which often depend on multiple sessions per week to produce sizeable improvements in mental health outcomes (Stathopoulou et al., 2006; Asmundson et al., 2013; Stanton and Reaburn, 2014). As a result, we do not claim that the positive outcomes reported here arise from the exercise component alone, but rather in response to the combination of meditation and aerobic exercise, especially when each activity is conducted one after another closely in time. Indeed, the combination of these two activities was reportedly more effective than either component alone in reducing trauma-related thoughts and ruminations, while at the same time improving self-worth (Shors et al., 2018).
In general, the present results suggest that the combination of FA meditation training and aerobic exercise may prevent or at least mitigate some of the mental health symptoms that arose during the height of the COVID-19 pandemic. Reported levels of anxiety tended to increase among teachers who did not participate in MAP Training over the 6 weeks. However, these changes were not significant after applying statistical correction, perhaps due in part to a less than desirable sample size in the control (No Training) group. However, the purpose of this study was to test whether an online intervention would support teacher mental health and well-being during the height of the pandemic. As a result, we could not continue to enroll participants beyond the summer months, as the impact of COVID-19 was evolving (due to vaccines, distance learning, etc.). Therefore, a large proportion of subjects were randomized to the MAP Training group.

Figure caption: There were no significant differences between pre-intervention and post-intervention scores on health measures in the No Training group. There were also no significant differences between groups on these measures at either timepoint. Asterisks indicate significant adjusted p-values.
Rumination in teachers during the COVID-19 pandemic
Ruminations tend to be negative, about the past, and infused with some degree of blame or regret. After training, the K-12 teachers reported fewer of these thoughts, including brooding and depressive subtypes. There were no changes in reflective rumination, which is about the past and tends to be less negative. There are numerous theories and still some controversy surrounding rumination subtypes, but in general, depressive rumination is most tightly linked to changes in mood while brooding rumination appears to predict later depression (Treynor et al., 2003;Burwell and Shirk, 2007). In this study, intervention participants also reported fewer symptoms of depression. Thus, the effects of MAP Training on more detrimental aspects of rumination (i.e., brooding and depressive rumination) may be in part responsible for alleviating some of the depression reported by the teachers during the pandemic. It is also conceivable that this type of intervention, when practiced routinely, may help prevent some increases in depression that can arise during stressful life events.
In addition to depression, rumination is linked to other aspects of mental health and wellness, including anxiety, trauma-related cognitions, and even how someone interprets changes in their body. In a recent factor analytic study, ruminations accounted for much of the variance (>96%) in mental health outcomes acquired through many of the same self-report measures used here (Millon and Shors, 2021). Of course, there are similarities amongst the questions in these surveys and these similarities may account in part for the relationships. But nonetheless, these analyses have led us to suggest that rumination may serve as a proxy for overall mental health. All that being said, we do not know the neural or psychological mechanisms through which rumination may affect other health outcomes. A recent meta-analysis did identify neural networks that were especially engaged in people who are inclined to ruminate, whereas other networks, especially those in the temporal cortex, were less engaged (Zhou et al., 2020). Perhaps these latter networks become more engaged because of training. In theory, this would be consistent with the increase in executive function and cognitive flexibility reported here by those who engaged in MAP Training. Functional imaging studies are underway to test this hypothesis.
Perceived vs. traumatic stress and distress
Stress has wide-spread effects on mental and physical health, many of which arise not only from exposure to the stressful life event itself, but also from one's subjective appraisal of the experience (Lazarus and Folkman, 1984). Our results indicate that MAP Training lessened teachers' perceived stress. This finding is especially meaningful given that teaching is recognized as a high-stress profession (Shernoff et al., 2011; Richards, 2012; Gallup, 2014; McCarthy, 2019), with job stress increasing for many teachers after the onset of the COVID-19 pandemic. In addition to the direct experience of stress, teachers, like other helping professions, are susceptible to secondary traumatic stress (Hydon et al., 2015; Molnar et al., 2017). Secondary traumatic stress differs from vicarious trauma in that it involves emotional and behavioral reactions to secondary exposure to trauma as opposed to changes in cognitive schemas and beliefs (Jenkins and Baird, 2002). In this study, MAP Training led to a significant reduction in secondary traumatic stress. Therefore, MAP Training may lessen the risk and thus help prevent secondary traumatic stress and associated distress among teachers who may have been exposed to student trauma during the COVID-19 pandemic (i.e., Collin-Vézina et al., 2020; Kovler et al., 2021).
Despite reported improvements in mood, anxiety, and indicators of stress, we did not observe changes in teachers' ratings of distress tolerance in this study. Interestingly, intolerance of uncertainty, a subcomponent of distress tolerance, has been identified as a unique predictor of future perceived stress (Bardeen et al., 2016). Thus, distress tolerance may have been somewhat impervious to the effects of the intervention given the general state of uncertainty and anxiety during the initial months of the COVID-19 pandemic (Twenge and Joiner, 2020). It is also important to note that distress tolerance is commonly conceptualized as a relatively stable, trait-like marker of psychopathology symptoms (Leyro et al., 2010). Moreover, some data suggest that standalone mindfulness interventions may lead to changes in behavioral distress (e.g., task persistence) but not perceived distress tolerance (e.g., Carpenter et al., 2019). Taken together, these data suggest that MAP Training may be effective in improving one's current experience of distress as opposed to their perceived ability to relate to symptoms (i.e., distress tolerance).
Subjective estimates of cognition and physical health during the COVID-19 pandemic
Virtual delivery of MAP Training during the pandemic also had a positive impact on teachers' subjective ratings of cognitive functioning. Treatment-adherent participants reported significant improvements in working memory and executive functioning of medium effect size after 6 weeks of MAP Training. Additionally, participants who adhered to the program reported significant improvements in cognitive flexibility and control following training, whereas no such change was observed among waitlist control participants. The impact of MAP Training on self-reported cognition in this study is consistent with previous research on the cognitive effects of mindfulness meditation interventions (Chiesa et al., 2011) as well as considerable data demonstrating a positive link between exercise and brain function (i.e., Ludyga et al., 2020). As noted, MAP Training has been associated with an increase in amplitude of the early components of the evoked response during the Flanker task, which engages neural processes related to executive function and cognitive control (Alderman et al., 2016).
Aerobic exercise alone has numerous effects on brain function, including increases in vascular growth, neurogenesis in the hippocampus, as well as the release of growth factors such as BDNF (i.e., van Praag et al., 1999;Piepmeier and Etnier, 2015;Phillips, 2017;Vivar and van Praag, 2017;Liu and Nusslock, 2018). Meditation has also been linked with increases in hippocampal volume among those who practice meditation regularly (Luders et al., 2013) and reductions in hippocampal atrophy in those with mild cognitive impairment (Wells et al., 2013), possibly through effects of synaptogenesis, angiogenesis and neurogenesis. In fact, MAP Training was developed for humans based on preclinical studies suggesting that mental training with effortful learning procedures increases neurogenesis in the adult hippocampus (Gould et al., 1999;Leuner et al., 2004;Curlik and Shors, 2011;Shors, 2021). Others have reported that a combination of spontaneous learning during environmental enrichment when preceded by aerobic exercise is especially neurogenic (Fabel et al., 2009). Therefore, the combination of mental (i.e., FA meditation) and physical (i.e., aerobic exercise) training coupled with enhanced mood may produce cognitive change in humans through mechanisms of hippocampal plasticity.
The combination of mental and physical training had a positive impact on teachers' physical wellbeing, specifically on the quality of sleep. Participants reported fewer sleep disturbances including fewer nighttime awakenings, breathing difficulties and nightmares relative to their reports before training. There were no such changes in the waitlist control condition. Sleep quality is an important predictor of subjective well-being (Reid et al., 2006;Peach et al., 2016;Wickham et al., 2020) and demonstrates a bidirectional relationship with depression and anxiety symptoms (Alvaro et al., 2013). Given the increased prevalence of poor sleep quality during the COVID-19 pandemic (i.e., Gupta et al., 2020;Pinto et al., 2020;Hyun et al., 2021), interventions that improve sleep, even indirectly, are especially needed.
MAP Training does not necessarily impact mental health through direct changes in cardiovascular activity. For example, we observed no change in heart-rate variability and related measures of sympathetic nervous system activity after women with HIV completed 6 weeks of training, despite robust decreases in ruminative and trauma-related thoughts (Millon et al., 2022). However, training did increase the volume of oxygen consumed in women with physical complaints such as addiction and malnutrition (Shors et al., 2014). In contrast, the current participants were relatively young high-functioning adults. It is likely that individual differences in physical health prior to training are important (i.e., the potential range for change) as well as the length of the intervention, which is relatively short per session (1 h) and over its course (6 weeks).
Limitations and considerations
There are several limitations to this research. First, our results were limited by a less than desirable sample size and suboptimal adherence, with most participants completing about four of the six live sessions and half of the weekly recorded sessions. High rates of attrition are commonly observed in studies of virtual and web-based interventions (Eysenbach, 2005; Melville et al., 2010). In this study, dropout rates may have been exacerbated by an overall increase in anxiety, depression, and stress in the general population during the COVID-19 pandemic (e.g., Xiong et al., 2020), and especially among teachers (e.g., Aperribai et al., 2020; Anderson et al., 2021; Ozamiz-Etxebarria et al., 2021). Indeed, data from reviews indicate that the COVID-19 pandemic has resulted in substantial declines in participant enrollment in clinical trials and research studies (Sathian et al., 2020). Additionally, we began recruitment during the beginning of summer, when most teachers were out or soon to be out of the classroom, and thus burnout and workplace fatigue may have lessened their willingness to participate (Pressley, 2021). Because the conditions of the pandemic were constantly changing, we could not continue to recruit once teachers had returned to the classroom in the fall. Yet, despite these restrictions, we observed significant and positive effects of MAP Training on subjective estimates of mental health and well-being.
The MAP Training intervention was delivered virtually to accommodate stay-at-home orders and remote working conditions during the COVID-19 pandemic. Importantly, the teachers reported a high level of engagement, and their average heart rate during the physical training was 130 beats per minute, suggesting that they were exercising at an intensity consistent with aerobic exercise. Nevertheless, there were certain limitations to the virtual format. For example, the participants' cameras were disabled during the live sessions to protect their privacy (and several participants stated beforehand that they did not wish to turn on the camera). Therefore, it is not known whether participants were fully adhering to the MAP Training intervention. Additionally, variability in internet quality and technology literacy may have interfered with their engagement. These potential barriers have been documented in prior studies of virtual interventions (e.g., Borghouts et al., 2021) and have yet to be adequately addressed.
Finally, we recruited teachers within the local tri-state area because of similarities in the impact of COVID-19 in this region. At the time of the study, COVID-19 cases were among the highest in the country and residents were facing statewide travel restrictions, mask mandates, limits on social gatherings, and supply shortages. Thus, we cannot be certain that our findings would generalize to other populations of teachers within or outside of the United States.
Implications and future directions
Overall, findings from this pilot study suggest that 6 weeks of virtual MAP Training can lead to positive changes in select measures of mental health, especially those related to mood, negative thinking, and overall well-being. Even prior to the COVID-19 pandemic, teachers were affected by high levels of occupational stress (Gallup, 2014), which has been associated with poor mental health outcomes, including depression and anxiety (Besse et al., 2015; Jones-Rincon and Howard, 2019). Moreover, workplace burnout has been closely linked with depression (Ahola et al., 2014), with some studies suggesting significant overlap between the two constructs (Schonfeld and Bianchi, 2016). Therefore, interventions that target psychological distress, such as MAP Training, may help to reduce burnout and subsequent rates of attrition among teachers, even under normal working conditions. The mental and physical health benefits of meditation and aerobic exercise have been demonstrated independently in a variety of populations (e.g., Warburton, 2006; Gillison et al., 2009; Chiesa et al., 2011; Keng et al., 2011; Kvam et al., 2016; Stubbs et al., 2017; Creswell et al., 2019), including teachers (e.g., Crain et al., 2017; Klingbeil and Renshaw, 2018; Zarate et al., 2019; Abós et al., 2021). The present results highlight the potential benefit of combining these activities together in one intervention to alleviate stress and promote well-being in teachers. They further suggest that the virtual delivery of interventions such as MAP Training is effective in improving mental health and mitigating the impact of stressful life events, such as occurred during the coronavirus pandemic.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Rutgers University Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.
Author contributions
DD and TS implemented the research. DD performed the analyses. All authors devised the project, contributed to interpretation of results and writing of the manuscript, and approved the final version.
"year": 2022,
"sha1": "cf1e85d354e4d7b4d6bc9baa101514837ecd432d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "cf1e85d354e4d7b4d6bc9baa101514837ecd432d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
Improving health-care planning for fracture patients in Türkiye: insights from a nationwide study
ABSTRACT
BACKGROUND: The distribution of fractures may vary according to age and gender. In a country like Türkiye, which has high population density and covers a large geographical area, it is important to understand the regional variations in fractures and identify the health institutions in which patients seek treatment to plan new health-care investments effectively. The objective of our study was to investigate the distribution of fractures across the seven regions of Türkiye considering age, gender, and the level of health institutions the patients visited.
METHODS: Between January 2021 and May 2023, the total number of fractures, locations of the fractures, patient age and gender, geographical regions, and levels of the health-care institutions to which the patients presented were examined through the e-Nabız personal health record system. Age groups were divided into pediatric (0–19 years), adult (20–64 years), and geriatric (≥65 years) categories. Geographical regions included the Marmara, Central Anatolia, Black Sea, Eastern Anatolia, Aegean, Mediterranean, and Southeastern Anatolia regions.
RESULTS: A total of 2,135,701 patients with 2,214,213 fractures were analyzed. Upper extremity fractures were the most common among all considered fracture groups (1,154,819 fractures, 52.2%). There were 643,547 fractures in the pediatric group, 1,191,364 fractures in the adult group, and 379,302 fractures in the geriatric group. While the total number of fractures was higher among men with 1,256,884 fractures (58.9%), the rate among women was higher in the geriatric group (67.2%). Geographically, the highest number of fractures was observed in the Marmara region (714,146 fractures), and 67.92% of all patients presented to secondary health-care institutions (1,500,780 fractures). The most commonly diagnosed fracture in the study population was distal radius fractures. The most common fracture in the geriatric group was femur fractures while distal radius fractures were the most common fractures in the adult and the pediatric groups.
CONCLUSION: By understanding the distribution of fractures in Türkiye based on fracture site, geographical region, age, and gender, it becomes possible to improve the planning of patient access to health-care services. In regions with limited health resources, a more successful resource distribution can be achieved by considering fracture distributions and age groups.
INTRODUCTION
Fractures can constitute significant public health issues and pose economic burdens across all age groups throughout the human lifespan. Previous studies have reported incidence rates of all fractures across all age groups ranging from 81 to 235/10,000 individuals, with men showing higher fracture rates. [1,2] However, the distribution of fractures varies further based on factors such as gender, age, and lifestyles in different geographical regions. Fracture types and their incidence rates demonstrate specific distributions throughout the human lifespan. For example, in epidemiological studies conducted in the United States, the lifetime risk of fragility or osteoporotic fractures in women, including those of the vertebrae, hips, or wrists, was estimated to be between 15.6% and 17.5%. [3] Nearly half of all women and one in five men will experience a fracture during their remaining lifetime after the age of 50. [4,5] Fractures in children are also common, accounting for up to 25% of all injuries in the pediatric age group. [6] Few studies have been conducted on the overall counts of fractures considering these parameters. Meanwhile, as a result of the increasing population, the global costs of fractures can be expected to increase over time.
To address these issues and provide comprehensive information on this major global public health concern, we utilized records from e-Nabız, the e-health database of the Turkish Ministry of Health, to determine the descriptive counts of fractures based on sex, age, and geographical region during the period from 2021 to 2023. Additionally, we investigated the relationships between those parameters and the level of health care at which the diagnosis was performed.
Data Collection with the E-Nabız Database
The electronic health records of individuals of all ages who were admitted to government, private, and university health institutions were obtained using e-Nabız, the e-health database of the Turkish Ministry of Health. [7] The study was conducted according to the Declaration of Helsinki and received approval from the Turkish Ministry of Health with a waiver of informed consent for retrospective data analysis and the health information privacy law (ID: 95741342-020/27112019). The e-Nabız system is a nationwide personal health records system that provides 30 different services for treatment, prevention, and other health-related areas. It also stores all kinds of imaging records for patients. The number of e-Nabız users has risen in recent years, reaching 68 million active users by 2022, or 80% of the population of Türkiye. [7] A computerized review of medical records was conducted to determine all types of fractures among e-Nabız users admitted to health-care facilities between January 2021 and May 2023 in Türkiye. The initial date of fracture diagnosis was recorded as the fracture date. Recurrent International Classification of Diseases (ICD) codes assigned within 6 months of the date of fracture diagnosis for the same patients were excluded.
Study Population
Information stored in the e-Nabız database between January 2021 and May 2023 was extracted. Patient data including age and gender, the level of health care provided by the admitting hospital, fracture site, and geographical region were investigated. Fractures were classified into four groups according to ICD-10 codes: upper limb fractures, lower limb fractures, axial skeleton fractures, and craniofacial fractures (Table 1). Patients with multiple fractures diagnosed at the time of first admission were also recorded with ICD codes T02.1 through T02.9 and were categorized as having multiple fractures. Patients were further divided into three age groups as pediatric (0-19 years), adult (20-64 years), or geriatric (≥65 years). The most commonly diagnosed fracture codes were analyzed in the whole population and within each age group.
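The following minimal sketch illustrates the two cohort rules just described: assigning the three age groups and excluding recurrent ICD codes recorded within 6 months of the first diagnosis for the same patient. It is not the authors' actual pipeline, and the column names (patient_id, icd_code, diagnosis_date, age) are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical records; column names are assumptions, not the e-Nabız schema.
records = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "icd_code": ["S52.5", "S52.5", "S72.0"],
    "diagnosis_date": pd.to_datetime(["2021-02-01", "2021-04-15", "2022-07-09"]),
    "age": [34, 34, 71],
})

# Age groups as defined in the study: 0-19, 20-64, >=65.
records["age_group"] = pd.cut(
    records["age"], bins=[-1, 19, 64, 200],
    labels=["pediatric", "adult", "geriatric"],
)

# Keep the first diagnosis per (patient, ICD code); drop recurrences of the
# same code recorded within ~6 months (183 days) of that first date.
records = records.sort_values("diagnosis_date")
first = records.groupby(["patient_id", "icd_code"])["diagnosis_date"].transform("min")
is_first = records["diagnosis_date"] == first
late_recurrence = (records["diagnosis_date"] - first) > pd.Timedelta(days=183)
records = records[is_first | late_recurrence]
print(records)
```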
Distribution of fractures according to the specific geographical regions of Türkiye was also evaluated for the Marmara region, Central Anatolia region, Black Sea region, Eastern Anatolia region, Aegean region, Mediterranean region, and Southeastern Anatolia region. For all included patients, the health-care level at the time of first admission was divided into four groups: primary health care, secondary health care (including government hospitals and government training hospitals), university hospitals, and private hospitals.
Statistical Analysis
IBM SPSS Statistics 25 (IBM Corp., Armonk, NY, USA) was used in this study for the analysis of all data. Frequency and percentage statistics were used for descriptive measures, while chi-square (Pearson) tests were used for categorical variables.
Fractures According to Sex, Age, and Anatomical Location
In the time interval of 29 months, a total of 2,214,213 fractures of 2,135,701 patients were extracted from the e-Nabız system. Among these cases, 883,695 fractures were diagnosed in 2021, while 945,226 were diagnosed in 2022 and 385,292 were diagnosed in 2023 between January and May. Overall, the most common fracture site was the upper extremities (1,154,819 fractures, 52.2%) (Table 2). The total fracture count and all types of fractures were also examined according to age groups. While 643,547 fractures were observed among pediatric patients (0-19 years) in the time interval of 29 months, 1,191,364 fractures were seen among adult patients (20-64 years) and 379,302 were seen among elderly patients (≥65 years) (Table 3). While a majority of the fractures in the elderly patient group were seen in women, more fractures were seen in males in the adult and pediatric groups (Table 4).
The most commonly diagnosed fracture in the study population was distal radius fractures (367,768 fractures) (Table 5). Regarding the health-care level of first admission, 1,500,780 fractures (67.92%) were presented to secondary health-care institutions, 129,624 (5.87%) were presented to university hospitals, and 556,646 (25.19%) were presented to private hospitals. Hospital admissions according to health-care levels and age groups are presented in Table 6. By geographic distribution, the most fracture cases were treated in the Marmara region, followed by the Mediterranean, Aegean, Central Anatolia, Black Sea, Southeastern Anatolia, and Eastern Anatolia regions, respectively (Table 7). There were also 229,882 fracture records from polytrauma patients, which were not included in the analysis of the present study. The distributions of fracture numbers according to different age groups by regions are shown in Figures 1-3.
DISCUSSION
The present study investigated the distribution of fractures among a total of 2,214,213 fractures treated in Türkiye across different geographical regions in 2021-2023. The health-care level of the admitting health-care institution was also documented. Many studies with smaller cohorts have been published. Curtis et al. investigated the age- and sex-specific fracture incidence rates in patients older than 18 years. To do so, they used electronic health records that covered approximately 7% of the population of the UK. [8] Their study showed a bimodal distribution of fracture incidence in terms of age. The present study, on the other hand, covers 80% of the Turkish population, corresponding to approximately 68 million users of the national health system, and includes all age groups. Fractures were seen most often in the adult age group, followed by the pediatric and geriatric age groups, respectively. This can be attributed to the fact that Türkiye is a country with a relatively young population. [9] The proportion of the population in the age group of 15-64 years, defined as working age, increased over the years to reach 67.9% in 2021, with 22.4% of the population categorized as children and 9.7% as elderly.
An analysis of a 10-year nationwide adult and geriatric fracture study from Germany revealed that hip fractures and distal radius fractures were the most commonly encountered fracture types, with increasing incidence in the aging population. [10] Our findings were parallel to this study. Hand and wrist fractures were the most commonly encountered fractures in the whole Turkish population. This finding is also consistent with a recent Swedish nationwide registry study including 37,266 adult patients, in which Holtenius et al. revealed that hand and wrist fractures constitute 28% of all upper extremity injuries. [11] Upper extremity fractures were the most commonly encountered fracture types in the pediatric population. This result was parallel to a nationwide study conducted by Naranje et al. from the United States. [12] Our analysis indicated that the incidence of fracture types changes with aging and that specific fracture types are more common in different age groups.
The findings of the present study also reveal gender differences in fracture rates across different age groups, which is consistent with the literature. In both the pediatric and adult age groups, male patients exhibited higher fracture rates. However, in the elderly group, female patients had a significantly higher rate of fractures. This disparity can be attributed to the prevalence of osteoporosis among elderly women, which is a well-known risk factor for specific fractures. Implementing strategies such as regular bone density screenings, promoting adequate calcium and vitamin D intake, and encouraging physical activity can play a key role in reducing the burden of fractures among elderly women in Türkiye. [15,16]
Our study has shown significant regional disparities in fracture counts, with higher rates observed in urban areas compared to rural areas. This finding may be related to several factors including differences in occupational hazards, access to recreational activities, or lifestyle choices. [5,17] Moreover, diversity in the health infrastructure and socioeconomic status between regions may also contribute to differences in fracture rates.
The effects of health-care levels on fracture management were also examined in the present study. Most of the included fractures were treated in secondary health-care facilities, including government hospitals and government training hospitals. We also found that regions with higher levels of health care in metropolitan areas, such as the Marmara and Aegean regions, exhibited more comprehensive fracture management. These geographical regions offered better access to hospitals and specialized orthopedic services. In contrast, residents of rural areas such as those in the Southeastern and Eastern Anatolia regions may face challenges related to limited health-care resources, leading to potential errors in fracture diagnosis and management.
There are several limitations of the present register-based study. First of all, some information may have been unavailable or misclassified, and variations in coding between providers and institutions are difficult to handle. Moreover, information on potential confounding factors may have been missing, which is a common drawback of register-based studies. Considering the massive dataset of the present study, however, we believe that these drawbacks did not play a significant role in the analysis. This study has only presented the health-care levels at which the fractures were initially diagnosed; no specific information regarding treatment facilities or treatment methods such as surgical treatments versus reduction and casting was provided. Incidence and prevalence data of specific fracture types were not analyzed, as this study aimed to present the geographic distribution of fracture cases in Türkiye. The most important strength of this study was the inclusion of the medical records of the entire Turkish population with very limited missing data.
CONCLUSION
This study has highlighted the distribution patterns of fractures within the Turkish population across geographical regions and health-care levels. A better understanding of these variations is crucial for developing effective strategies to improve fracture management and reduce the associated burden on individuals and the health-care system. Our findings can significantly contribute to public health strategies and resource allocation in Türkiye.
Figure 1. Number of fractures in the pediatric age group on a regional basis. (Figures 2 and 3 show the corresponding regional distributions for the adult and geriatric age groups.)
Table 3. Relationships between age groups and fracture sites (January 2021 to May 2023).
Table 4. Fracture counts according to gender and age groups (January 2021 to May 2023).
Table 5. The 15 most commonly diagnosed fracture codes in the whole population and age groups between January 2021 and May 2023.
Table 6. Relationships between age groups and health-care admission levels (January 2021 to May 2023).
Identifying higher fracture counts in areas with limited health-care resources or in regions with fractures concentrated in specific age groups can guide the implementation of targeted interventions. Furthermore, research is needed to determine the specific risk factors associated with fractures in different geographical regions of Türkiye. This way, health-care management and resource allocation can be planned more effectively. This study was approved by the Ministry of Health, General Directorate of Health Information Systems Ethics Committee (Date: 27.11.2019, Decision No: 95741342-020). Concept: Ş.B., İ.B., S.K.; Design: Ş.B., İ.B., S.K., S.B., N.E.Y.; Supervision: Ş.B., İ.B., N.E.Y. The author declared that this study has received no financial support. | 2023-10-05T06:18:02.655Z | 2023-09-08T00:00:00.000 | {
"year": 2023,
"sha1": "3b7b11b798bef6f446b09811fd95cb1134e43e32",
"oa_license": null,
"oa_url": "https://doi.org/10.14744/tjtes.2023.01364",
"oa_status": "BRONZE",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "30a394d1d8e3f84758bab7a7d0f4aed19ccd31d3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259363403 | pes2o/s2orc | v3-fos-license | Multispectral Image Noise Removal With Adaptive Loss and Multiple Image Priors Model
Multispectral image (MSI) denoising is a crucial preprocessing step for various subsequent image processing tasks, including classification, recognition, and unmixing. This article proposes a novel image denoising model that integrates both noise modeling and image prior knowledge modeling. Specifically, to account for the complexity and nonuniformity of noise, a non-independent and identically distributed (non-i.i.d.) mixture of Gaussian model is employed for noise modeling, and a weighted loss function is obtained. The weights used in the loss function are adaptively learned from the noisy MSI and employed to adjust the denoising strength of each pixel. Additionally, the model leverages the prior knowledge of the image by utilizing a nonlocal low-rank matrix model that captures the spatial–spectral correlation and nonlocal spatial similarity priors of the image. Moreover, our model adopts the weighted spatial–spectral TV model to encode the local smoothness prior of the image. Both prior models are translated into regularization terms in the denoising model. The efficacy of the proposed method is demonstrated through both simulated and real image experiments.
I. INTRODUCTION
A MULTISPECTRAL image (MSI) is a 3-D image consisting of a set of 2-D images, each of which is the imaging result of a certain spectral band. MSIs are used extensively in mineral exploration, pharmaceutical counterfeiting, and food safety due to their ability to provide richer information than traditional 2-D images [1], [2], [3]. However, noise contamination is inevitable in MSIs due to sensor sensitivity, calibration errors, and physical mechanisms. The noise distribution is complex, and the noise intensity varies across spectral bands [4], [5], [6], [7], leading to significant degradation in image quality and negative impacts on subsequent processing, such as classification, unmixing, and target detection. As a result, MSI noise reduction is a critical and challenging task within the MSI processing field.

In recent decades, there has been significant advancement in MSI denoising methods. These methods can be broadly classified into two categories: 1) deep learning-based methods and 2) traditional machine learning-based methods. The deep learning-based methods rely heavily on data and require the design of complex network structures to extract deep prior knowledge from images. For instance, Chang et al. [8] first introduced the deep convolutional neural network (CNN) for MSI denoising. Later, Yuan et al. [9] proposed a deep residual CNN with multiscale and multilevel feature representation for bandwise denoising. To encode the spatial–spectral correlation of the image, Zhang et al. [10] presented a deep CNN incorporating the spatial–spectral gradient information, and Shi et al. [11] proposed a 3-D attention network with two separate branches. Pan et al. [12] and Wei et al. [13] introduced a quasi-recurrent network to capture the correlation of spatial features along the spectral domain. Dong et al. [14] presented a modified 3-D U-net architecture, and Cao et al. [15] proposed a global reasoning network with three well-designed modules. More recently, some transformer-based approaches have been applied to HSI and have performed well [16], [17]. Although they provide outstanding denoising results, they lack theoretical support and often do not generalize well to new datasets. In contrast, the traditional machine learning-based methods do not rely on training data, usually have good theoretical support, and demonstrate better generalization. This type of denoising model typically consists of two primary components, i.e., the loss function term and the regularization term. The loss function term measures the deviation between the ground truth image and the observed image. Most denoising models use the l2-norm as the loss function, which is simple and convex, leading to fast and efficient algorithms for obtaining the global optimal solution. However, the l2-norm loss function assumes that the noise obeys an independent and identically distributed (i.i.d.) Gaussian distribution, which deviates from the true MSI noise distribution, resulting in a lack of robustness in removing mixed noise. To address this issue, many denoising models have adopted a Gaussian-plus-sparse noise assumption, which treats the non-Gaussian noise as sparse noise and embeds it as a parameter to be learned within the model [18], [19], [20]. This improves the robustness of the algorithm, and such models have been widely used in the MSI denoising task. However, this noise model roughly treats all non-Gaussian noise as sparse noise, which still deviates from the real mixed-noise distribution. To alleviate this issue, the mixture of Gaussian (MoG) distribution noise model was proposed [21], [22], which can theoretically approximate any distribution as long as there are enough Gaussian components. The MoG model derives the weighted l2-norm loss function based on the maximum likelihood principle. However, the approximation ability of the MoG model is often limited by the small number of Gaussian components in practice. To solve this problem, Cao et al. [23], [24] proposed the mixture of exponential power (MoEP) distribution model, which results in a weighted lp-norm loss function. In addition, Chen et al. [25] proposed a non-i.i.d. mixture of Gaussian (NMoG) noise model considering the non-i.i.d.
In recent decades, there has been a significant advancement in MSI denoising methods. These methods can be broadly classified into two categories: 1) deep learning-based methods and 2) traditional machine learning-based methods. The deep learning-based methods rely heavily on data and require the design of complex network structures to extract deep prior knowledge from images. For instance, Chang et al. [8] first introduced the deep convolutional neural network (CNN) for MSI denoising. Later, Yuan et al. [9] proposed a deep residual CNN with multiscale and multilevel feature representation for bandwise denoising. To encode the spatial-spectral correlation of image, Zhang et al. [10] presented a deep CNN by incorporating the spatial-spectral gradient information; Shi et al. [11] proposed a 3-D attention network with two separate branches. Pan et al. [12] and Wei et al. [13] introduced a quasi-recurrent network to capture the correlation of spatial features among the spectral domain. Dong et al. [14] presented a modified 3-D U-net architecture, and Cao et al. [15] proposed global reason network with three well-designed modules. More recently, some transformer-based approaches have been applied to HSI and have performed well [16], [17]. Although they provide outstanding denoising results, they lack theoretical support and often do not generalize well to new datasets. In contrast, the traditional machine learning-based methods do not rely on training data and usually have good theoretical support, and demonstrate better generalization. This type of denoising models typically consists of two primary components, i.e., the loss function term and the regularization term. The loss function term measures the deviation between the ground truth image and the observed image. Most denoising models use the l 2 norm as the loss function, which is simple and convex, leading to fast and efficient algorithms for obtaining the global optimal solution. However, the l 2 norm loss function assumes that the noise obeys an independent identically (i.i.d.) Gaussian distribution, which deviates from the true MSI noise distribution, resulting in the lack of robustness in removing mixed noise. To address this issue, many denoising models have proposed a Gaussian noise plus sparse noise assumption, which treats the non-Gaussian noise as sparse noise and embeds it as a parameter to be learned into the model [18], [19], [20]. This improves the robustness of the algorithm, and thus, have been widely used in MSI denoising task. However, this noise model roughly treats all non-Gaussian noise as sparse noise, which still deviates from the real mixed noise distribution. To alleviate this issue, the mixture of Gaussian (MoG) distribution noise model was proposed [21], [22], which can theoretically approximate any distribution as long as there are enough Gaussian components. The MoG model derives the weighted l 2 norm This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ loss function based on the maximum likelihood principle. However, the approximation ability of MoG model is often limited by the small number of Gaussian components in practice. To solve this problem, Cao et al. [23], [24] proposed the mixture of exponential power (MoEP) distribution model, which results in a weighted l p norm loss function. In addition, Chen et al. [25] proposed a non.i.i.d. mixture of Gaussian (NMoG) noise model considering the non.i.i.d. 
statistical characteristics of MSI noise and came up with the weighted l 2 norm function. Further, Barron [26] proposed a more general noise model that can obtain a general loss function by incorporating a set of robust loss functions.
Another critical component of the denoising model is the regularization term, which is modeled based on the image prior knowledge. To date, the most significant prior knowledge that has been demonstrated includes spatial and spectral correlation, nonlocal spatial similarity, and local smoothness. Extensive research has been dedicated to exploring more accurate and efficient methods for modeling these prior knowledge components. For the spatial and spectral correlation prior, the low-rank matrix decomposition model [18], [27], [28], [29] and the low-rank tensor decomposition model [30], [31], [32], [33], [34] have been proposed. To incorporate the nonlocal spatial similarity prior, it is common practice to partition the image into small blocks, which are then grouped into subimage groups based on their similarity. The model is then applied to each subimage group separately; for example, BM4D [35] applies 4-D filtering on each subimage group, and Chang et al. [36] arranged each group into a matrix and then applied a low-rank matrix model. In addition, Xue et al. [32] and Zhang et al. [37] reorganized each subimage group into a 3-D tensor and a 4-D tensor, respectively, and applied a low-rank tensor model on it. For the local smoothness prior, a commonly used approach is to apply 2DTV band by band and then sum the results [38]. However, this method does not account for the smoothness of the image along the spectral dimension, so the 3DTV model [39] was transferred to MSI and named spatial–spectral total variation (SSTV) [7]. In order to address the large difference in pixel scale and noise intensity across spectral bands in MSI, an adaptive SSATV model was developed by introducing weights into the SSTV model to adjust the denoising strength of each pixel [40], [41]. In addition, to protect image boundary information, Chen et al. [42] presented an adaptive SSTV model. Furthermore, Peng et al. [43], [44] confirmed that the gradient domain of an image is also low-rank, resulting in the development of the enhanced TV and CTV models. To achieve improved denoising performance, a denoising model often incorporates multiple prior knowledge models. These may include a combination of spatial and spectral correlation priors with local smoothness priors [45], [46], [47], or a combination of spectral correlation priors with nonlocal similarity priors [48]. Moreover, some models integrate all three priors completely or partially [7], [49], [50], [51], [52].
The denoising methods mentioned above, which incorporate multiple prior knowledge models, have demonstrated exceptional performance. However, to improve model efficiency, these methods often use relatively simple loss functions, such as the l2-norm. Unfortunately, noise in MSI is often complex and non-i.i.d., which limits the performance of these methods since they assume independent and identically distributed statistical characteristics. In this article, we propose a novel MSI denoising method that combines the noise modeling methodology with the prior modeling methodology to address this issue.
We begin by assuming a non-i.i.d. noise structure for MSI and use the NMoG model to describe it. Based on this noise model, we derive an adaptive weighted l2-norm loss function. The weight reflects the noise intensity of each pixel: a pixel with high noise intensity receives a small weight to reduce the adverse impact of strong noise on the model results, while a pixel with low noise intensity receives a large weight to protect the information in the original image from being distorted. The weight information can be efficiently and adaptively learned from noisy images using the variational expectation maximization (VEM) algorithm. Next, we build the regularization term by completely integrating the three image priors mentioned previously. Specifically, we use the nonlocal low-rank matrix model to capture the nonlocal similarity and spatial–spectral correlation priors. In addition, we adopt an edge-preserving total variation model to encode the local smoothness prior of HSI. Finally, we develop an effective ADMM algorithm to solve the denoising model. We validate the effectiveness of our proposed method on both synthetic and real HSI datasets, and the results indicate that our approach achieves competitive or superior performance compared with other state-of-the-art methods.
II. NOTATIONS AND PRELIMINARIES
A tensor is a multidimensional data represented by decorated letters, such as X ∈ R I 1 ×I 2 ×···×I N , and its element is denoted by X i 1 ,i 2 ,...,i N . In addition, the matrices, vectors, and scalars are represented by nonbold upper case letters X, bold lower case letters x, and nonbold letters x, respectively.
The MSI data cube can be treated as a 3-D tensor X ∈ R^{M×N×S} with two spatial modes and a spectral mode, where M, N, and S represent the spatial height, spatial width, and spectrum number, respectively. The MSI tensor X can be unfolded into a matrix along the spectral mode, denoted as X_(3), with the element (X_(3))_{l,s} corresponding to the element X_{i,j,s}, where the row index is given by l = (j − 1)M + i. The observed MSI is often corrupted by complex types of noise, including Gaussian noise, stripe noise, deadline noise, and others. To simplify the noise model, the noise is assumed to be additive, and therefore we can express the noise degradation model as Y = X + E, where Y represents the noisy MSI tensor, X is the clean MSI tensor, and E is the noise tensor.
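As a concrete illustration of this unfolding convention, the short NumPy sketch below (with arbitrary sizes) checks that column-major vectorization of each band reproduces the stated index correspondence l = (j − 1)M + i.

```python
import numpy as np

# Mode-3 (spectral) unfolding: an M x N x S cube becomes an (M*N) x S matrix
# whose row index l corresponds to the spatial position (i, j) with
# l = (j - 1)*M + i in 1-based indexing, i.e. column-major vectorization.
M, N, S = 4, 3, 5
X = np.random.rand(M, N, S)
X3 = X.reshape(M * N, S, order="F")  # Fortran order ravels i fastest, then j

i, j, s = 2, 1, 3                    # 0-based indices
l = j * M + i                        # 0-based analogue of (j-1)*M + i
assert X3[l, s] == X[i, j, s]
```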
The distribution of E is complex and unknown. To model this type of distribution, a mixture of Gaussian (MoG) distribution is often used. However, since the noise types and intensity between different bands can vary significantly, a more effective approach is to apply the MoG model to the noise of each band. This results in different model parameters for each band, and the parameters of all the MoG models are generated from the same prior distribution. This approach is referred to as NMoG and can be expressed as follows [25]:

E_{ijs} ∼ Σ_{k=1}^{K} α_{sk} N(E_{ijs} | μ_{sk}, τ_{sk}^{-1}),

where α_{sk} and τ_{sk} are the proportion and precision of the kth Gaussian component for the sth band, with the constraint that Σ_{k=1}^{K} α_{sk} = 1, and K is the total number of Gaussian components. To capture the common properties of noise across bands, the precision τ_s is assumed to be sampled from a Gamma prior distribution Gam(·) with hyperparameters η_0, μ_0, and ν_0.
III. PROPOSED METHOD
In this section, we propose a new denoising model, which falls under the category of traditional machine learning and has the following expression:

min_X Loss(Y, X) + R(X),

where Loss(Y, X) represents the loss function, which measures the deviation between the observed MSI and the ground truth, and R(X) is the regularization term, which encodes the prior information of the MSI. We will introduce the proposed method in detail from two aspects: 1) the loss function and 2) the regularization term.
A. Adaptive Loss Function
To construct a more appropriate loss function, it is crucial to have a comprehensive understanding of the noise and then reasonably model the distribution characteristics of the noise, which in turn helps to measure the deviation between the observed data and the ground truth. Moreover, since the noise in each data point can vary, the loss function must be adaptive to cater to the specific noise characteristics. To meet these requirements, we have selected the NMoG noise model and used the VEM algorithm to estimate the model parameters. This has enabled us to derive an explicit loss function that is suitable for our purposes. We have simplified the NMoG model slightly by assuming that the noise is zero-mean. To solve this model, we have introduced a hidden variable Z into the noise model, so that the NMoG model can be expressed in terms of Mul(.) and Dir(.), which represent the multinomial distribution and the Dirichlet distribution, respectively. The maximization of the marginal likelihood p(Y|X) can be transferred to the maximization of the variational lower bound (ELBO) in (6) [53]. VE Step: We employ the variational inference algorithm to solve for the approximate posterior distributions of the parameters involved in (6); the estimated posterior distribution is assumed to have a factorized form. The posterior distribution of the parameter τ_s, the posterior distribution of the latent variable Z together with its distribution parameters, and the posterior distribution of the mixing proportion α_s can all be updated in closed form, where ⟨·⟩ represents the expectation operator, μ = μ_0 + c_0KS, and ν = ν_0 + Σ_{s,k}⟨τ_{sk}⟩. The expectations required for these update equations are then computed from the current posterior distributions, where ψ(·) is the digamma function. E Step: We calculate the evidence lower bound (ELBO) of the logarithm of the likelihood p(Y|X) as in (6). Note that we only focus on the components related to X; the others are treated as constants. Therefore, the ELBO can be denoted as a function Q(X, X_old) of X with respect to X_old. M Step: We maximize the ELBO function Q(X, X_old) with respect to X. The loss function of this optimization problem can be reformulated as min_X ‖W ⊙ (Y − X)‖_F^2, where ⊙ represents the elementwise product. The loss function obtained here takes the form of a weighted l2-norm. The weight is calculated adaptively based on the noise estimated during the denoising process. It can be seen that high-intensity noise receives a small weight, reducing the negative impact of strong noise on the model results. Conversely, low-intensity noise receives a large weight to preserve the structure information of the original image.
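The weighted loss itself is inexpensive to evaluate. The sketch below is purely illustrative: the weights W are random placeholders standing in for the quantities learned by the VEM procedure above.

```python
import numpy as np

def weighted_l2_loss(Y, X, W):
    # Each pixel's residual is scaled by its adaptive weight before the
    # squared Frobenius norm is taken, as in the weighted l2-norm loss.
    R = W * (Y - X)  # elementwise product
    return np.sum(R ** 2)

Y = np.random.rand(8, 8, 4)  # noisy cube
X = np.random.rand(8, 8, 4)  # current estimate
W = np.random.rand(8, 8, 4)  # placeholder adaptive weights
print(weighted_l2_loss(Y, X, W))
```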
B. Multiple Image Priors Model-Based Regularization
As mentioned previously, the nonlocal spatial similarity, the spatial and spectral correlation, and the local smoothness are the most significant priors for MSI restoration. The regularization term in our denoising model encodes all of these priors and takes the form R(X) = X_NSC + X_LS, where X_NSC represents the regularization related to the nonlocal spatial similarity and spatial–spectral correlation priors, and X_LS represents the regularization related to the local smoothness prior.
To model the nonlocal spatial similarity prior, it is necessary to divide the image into overlapping patches for matching and grouping. In order to simultaneously encode the spatial–spectral correlation prior, the MSI is divided into overlapping full-band patches to ensure that the correlation exists in each patch. This allows a low-rank model to be applied to each group of full-band patches, enabling both nonlocal spatial similarity and spatial–spectral correlation prior modeling. Note that Chang et al. [36] demonstrated that, for MSI, the low-rank property between patches is much more significant than that between bands. To simplify the model, each patch in a group is vectorized to form a matrix, which is then subjected to low-rank matrix decomposition. Therefore, the regularization related to the nonlocal spatial similarity and spatial–spectral correlation priors can be expressed as

X_NSC = Σ_{i=1}^{P} ‖P_i(X)‖_{w,*},

where P_i represents the operator that groups the similar patches for the ith patch and rearranges them into a matrix, P denotes the number of patches, and ‖·‖_{w,*} represents the weighted nuclear norm. The local smoothness prior is present not only in the two spatial modes but also in the spectral mode. To capture such a prior, the spatial–spectral total variation (SSTV) model is commonly used. Due to significant variations in noise intensity and pixel values, it is necessary to impose TV regularization constraints of different strengths on each pixel. To address this, we introduce a weighted SSTV model that allows varying regularization strengths for each pixel, defined as

X_LS = ‖S ⊙ (DX)‖_1,

where the operator D = [D_1, D_2, D_3] represents the 3-D first-order forward finite-difference operator, and the operators D_1, D_2, and D_3 correspond to the first-order finite-difference operators along the spatial horizontal, spatial vertical, and spectral modes, respectively. The parameter S = [S_1, S_2, S_3] is the weight tensor, which is used to preserve image texture while enforcing a sparsity constraint on the pixels. We can estimate the weight tensor from the gradient maps of the image. Although a large amount of noise has been removed from the denoised image X̂, its SNR is not high enough to extract the texture information of the image. To alleviate this situation, we carry out a low-rank approximation with the rank set to 1 to further denoise and obtain X_0 with a higher SNR, so the texture information of the image can be effectively extracted from the gradient maps of X_0. Therefore, the weight tensor is obtained from the restored MSI during the denoising process as in [42], where LR_1 is the low-rank approximation operator using a low-rank matrix decomposition model with a fixed rank of 1, X̂ represents the restored MSI data during the denoising process, and the threshold δ is utilized to prevent undesirably large values in the weight tensor. Typically, it is set to the first quartile of G_i.
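To make the ingredients of the weighted SSTV term concrete, the following sketch computes the three forward finite differences and a gradient-based weight tensor. The periodic boundary handling and the reciprocal weight form 1/max(|G|, δ) are assumptions made for illustration; the text above specifies only that δ is typically the first quartile of the gradient magnitudes.

```python
import numpy as np

def grad3(X):
    # First-order forward differences along the two spatial modes and the
    # spectral mode; periodic boundaries are an assumption for simplicity.
    d1 = np.roll(X, -1, axis=0) - X
    d2 = np.roll(X, -1, axis=1) - X
    d3 = np.roll(X, -1, axis=2) - X
    return d1, d2, d3

def weight_tensor(X0):
    # Weights from gradient magnitudes of a roughly denoised cube X0; the
    # reciprocal form 1/max(|G|, delta) is an assumed choice, with delta set
    # to the first quartile of |G| as mentioned in the text.
    weights = []
    for G in grad3(X0):
        mag = np.abs(G)
        delta = np.percentile(mag, 25)
        weights.append(1.0 / np.maximum(mag, delta))
    return weights

X0 = np.random.rand(16, 16, 8)
S1, S2, S3 = weight_tensor(X0)
```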
C. Denoising Algorithm
Based on the adaptive loss function and regularization term introduced above, the denoising model can be represented as

min_X ‖W ⊙ (Y − X)‖_F^2 + λ_1‖S ⊙ (DX)‖_1 + λ_2 Σ_{i=1}^{P} ‖P_i(X)‖_{w,*},

where λ_1 and λ_2 are the tradeoff parameters.
In order to solve this model, auxiliary variables are introduced, and thus (21) can be rewritten as an equivalent constrained optimization problem (22). The ADMM methodology can be used to minimize this optimization problem by transforming it into an augmented Lagrangian function (23), where μ_1, μ_2, and μ_3 are penalty parameters, and L_1, L_2, and L_3 are the Lagrange multipliers.
The ADMM framework provides an alternative approach to optimize the augmented Lagrangian function by fixing all variables except one and optimizing it.
Update A: The suboptimal problem for optimizing the variable A is derived from (23) by removing the terms not related to A. This problem can be treated as solving a linear system (25), where D* represents the adjoint operator of D. The fast Fourier transform is adopted to solve this system efficiently, and the closed-form solution for A can be obtained accordingly, where |·|^2 represents the elementwise square, the operators fftn(·) and ifftn(·) are the fast 3-D Fourier transform and its inverse transform, respectively, and 1 is the tensor with all elements equal to 1.
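The FFT trick referenced here is standard for difference operators under periodic boundary conditions: D*D is diagonalized by the 3-D DFT, so the linear solve reduces to an elementwise division in the Fourier domain. The sketch below illustrates the pattern for a generic system (alpha·I + beta·D*D)a = rhs; the coefficients and right-hand side are placeholders rather than the exact terms of (25).

```python
import numpy as np

def fft_solve(rhs, alpha, beta):
    # Eigenvalues of D^T D for circular forward differences: sum over the
    # three axes of |FFT of the difference kernel|^2.
    shape = rhs.shape
    eig = np.zeros(shape)
    for axis in range(3):
        k = np.zeros(shape)
        k[0, 0, 0] = -1.0
        idx = [0, 0, 0]
        idx[axis] = 1
        k[tuple(idx)] = 1.0  # forward-difference kernel along this axis
        eig += np.abs(np.fft.fftn(k)) ** 2
    # Elementwise division in the Fourier domain; alpha > 0 keeps the
    # denominator away from zero at the DC component.
    return np.real(np.fft.ifftn(np.fft.fftn(rhs) / (alpha + beta * eig)))

a = fft_solve(np.random.rand(8, 8, 4), alpha=1.0, beta=0.1)
```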
Update B: The suboptimal problem with respect to B can be solved using the well-known soft-thresholding operator [54], where R_λ(·) represents the soft-thresholding operator with parameter λ.
Update C_i: To solve the suboptimal problem concerning C_i, we utilize the off-the-shelf algorithm WNNM [55]. The optimal solution of C_i is deduced in closed form, where the diagonal elements of Σ correspond to the singular values of P_i(X) + (1/μ_3)L_3 arranged in decreasing order, and Σ̂ is the diagonal matrix whose shrunken diagonal elements are calculated from the parameters α_1 and α_2.
Update X: For the subproblem with respect to X, we write C = {C_i}_{i=1}^{P}, and the operator P^{-1} represents the inverse operator of P; its function is to arrange each patch back to its original position in the MSI while also averaging out the overlapping portions. The solution of X has a closed form, and the multipliers are updated in the standard ADMM fashion (35). After updating the denoised data, we proceed to update the weights involved in the loss function using (16); more information on this step can be found in Section III-A. The denoising algorithm is summarized in Algorithm 1.
Algorithm 1 The proposed MSI denoising algorithm.
1: Input: the noisy MSI Y; initialize X, the weights, and the multipliers; t ← 1.
2: while not converged do
3: Update A, B, C_i, and X by their closed-form solutions.
4: Update S by (20).
5: Update the Lagrange multipliers L_1, L_2, L_3 by (35).
6: t ← t + 1.
7: end while
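The two proximal operators used in the B and C_i updates can be sketched in a few lines. The soft-thresholding operator matches its standard definition in [54]; the weighted singular-value shrinkage below is only in the spirit of WNNM [55], with an assumed 1/(σ + ε) weight schedule rather than the exact weights derived from α_1 and α_2 above.

```python
import numpy as np

def soft_threshold(X, lam):
    # Standard soft-thresholding: shrink magnitudes toward zero by lam.
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def weighted_svt(M, c=1.0, eps=1e-6):
    # Weighted singular-value shrinkage on a patch-group matrix: larger
    # singular values receive smaller shrinkage (assumed weight schedule).
    U, sig, Vt = np.linalg.svd(M, full_matrices=False)
    w = c / (sig + eps)
    sig_shrunk = np.maximum(sig - w, 0.0)
    return (U * sig_shrunk) @ Vt

print(soft_threshold(np.array([-2.0, 0.3, 1.5]), 0.5))
print(weighted_svt(np.random.rand(6, 4)).shape)
```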
IV. EXPERIMENTS
To demonstrate the effectiveness of our proposed denoising method, we compared our method with several state-of-theart denoising methods on both simulated and real MSI data, including BM4D [35], TDL [30], LRTV [38], NMoG [25], LRTDTV [47], LLRT [36], and RCTV [56]. Specifically, BM4D is a classical method that utilizes block-matching and 4-D filtering for denoising. TDL utilizes the l 2 -norm loss function and spatial-spectral correlation prior model. LRTV is based on the Gaussian plus sparse noise model, combined with the low-rank matrix decomposition model. NMoG is a popular method that uses the NMoG noise model and low-rank matrix decomposition model. LRTDTV combines the Gaussian noise plus sparse noise model with the low-rank tensor decomposition prior model and SSTV regularization. LLRT uses the l 2 -norm loss function and combines the nonlocal similarity, spectral correlation, and local smoothness priors model. RCTV uses the Gaussian plus sparse noise model, as well as the local smoothness and spectral correlation priors model. Overall, the comparison experiment involved a diverse range of denoising methods, allowing us to accurately evaluate the performance of our proposed method against the current state of the arts. All experiments were implemented in Matlab R2021a on a PC with 3.4 GHz CPU and 32 GB RAM.
A. Simulation Experiments
This experiment aims to evaluate the performance of our proposed denoising method quantitatively. We utilized two MSI datasets in the simulation experiment: the Balloons data from the CAVE dataset 1 [57] with size of 512 × 512 × 31, and the Washington DC MALL dataset 2 with size of 1208 × 307 × 191. We resized the Balloons data to 200 × 200 × 31 and cropped the main part of the Washington DC MALL dataset to obtain a size of 200 × 200 × 191. These datasets were used as clean MSI data since they do not contain significant visual noise and their gray values were normalized to [0,1].
To simulate the real noise cases, we added four kinds of noise to the clean MSI data, including the following. sian noise and stripe noise were added to clean MSI. In addition, 40% (on Balloons dataset) and 45% (on Washington DC Mall dataset) bands were randomly selected to add the impluse noise with the percentage of impluse is from 50% to 70% (on Balloons dataset) and 90% to 100% (on Washington DC Mall dataset), respectively. We conducted 20 repetitions of the noise addition and denoising experiments on two datasets for each noise case. In these experiments, the maximum number of iterations of our method is set to 10, and the hyperparameters λ 1 and λ 2 are adjusted in each noise case. The sensitivity of parameter is analyzed in Section IV-C. Five quantitative measurements were employed to evaluate the denoising performance, namely: 1) MPSNR, which is the means peak signal-to-noise ratio (PSNR) across bands; 2) MSSIM, which is the mean structural similarity (SSIM) across bands; 3) ERGAS, which stands for Erreur Relative Globale Adimensionnellede Synthese, and 4) SAM, which refers to the spectral angle mapper; 5) time, which is the experimental time cost of each method. The average results of 20 repeated experiments on the Balloons dataset are presented in Table I. The best value of measurements are marked in bold. LLRT method performs remarkably in the i.i.d. Gaussian case since their noise assumption matches the actual noise well. However, as the noise complexity increases, the performance of LLRT tends to decrease. Similarly, BM4D and TDL methods that use the l 2 -norm loss function show a similar trend. On the other hand, methods utilizing the Gaussian plus sparse noise model, such as LRTV, LRTDTV, and RCTV, demonstrate more robust performance. However, their denoising performance is weaker than our method due to the better image priors model adopted in our denoising approach. Although the NMoG method performs stably under all noise conditions, but its denoising measurements are worse than others since it models only the spectral correlation prior, which may not be the best choice for this dataset with only 31 bands. Overall, our method exhibits strong robustness and remarkable denoising performance compared to other methods.
As a specific example, Fig. 1 provides the visual presentation of the denoising results of one experiment under mixture noise case. To facilitate comparison, we enlarged a common region of each figure, marked by a red box. The figures demonstrate that our method visually achieves best denoising performance compared with other methods. Table II presents the denoising results on the Washington DC mall dataset. It is worth noting that this dataset has a greater number of spectral bands and exhibits strong spectral correlation, making the NMoG method remarkably effective in all noise cases. On the other hand, the i.i.d. Gaussian noise model-based method is unstable, while the Gaussian plus sparse noise model-based method is relatively more stable. Our method achieves the best or second-best performance under all noise cases. The visual comparison results of one realization are illustrated in Fig. 2. From the figure, it is evident that our method outperforms other methods in removing noise and preserving detailed information.
B. Real Data Experiments
In this section, we present an evaluation of the performance of our method on a real MSI dataset, namely, the AVIRIS Indian Pines dataset with size 145 × 145 × 220. This dataset is corrupted by various types of noise, including Gaussian noise, stripes, atmospheric absorption, and other unknown noise. In these experiments, the maximum number of iterations of our method is set to 10, and the hyperparameters are set as λ_1 = 500 and λ_2 = 5.
Figs. 3-5 show the visual presentation of denoising results on bands 103, 149, and 165, respectively. It is evident that the TDL method fails to denoise this dataset effectively, and the BM4D method obviously has residual noise. This is due to the overly simplistic noise assumption and inadequate image prior modeling. Conversely, other methods yield significant noise reduction. Compared to all the other methods, our approach exhibits the most substantial noise removal in the spatial domain and less visual image distortion.
To facilitate further comparison, Fig. 6 displays the spectral signatures of a pixel located at (24, 88) before and after restoration. The horizontal axis denotes the band number, while the vertical axis represents the digital number value of the given location. Due to the presence of noise, the curve exhibits rapid fluctuations, as shown in Fig. 6(a). After denoising, the fluctuations are substantially suppressed. In addition, we observe that the NMoG, LLRT, and our method yield the most notable fluctuation suppression, consistent with the visual results depicted in Figs. 3-5.
C. Parameters Setting
In the proposed denoising model, the parameters λ_1 and λ_2 play a crucial role in balancing the loss term and the regularization terms. In our algorithm, we initially estimate the noise variance σ^2 using the initialized X and set λ_1 = λ̃_1σ^2 and λ_2 = λ̃_2σ^2. It is important to note that the selection of λ̃_1 and λ̃_2 occurs before multiplying them by σ^2. To analyze the sensitivity of the parameters, we conducted denoising experiments on the Balloons dataset under various parameter settings in the case of mixture noise. The experimental results are presented as contour maps. In Fig. 7(a), the MPSNR value is plotted against λ_1 and λ_2. It is observed that MPSNR changes slowly with variations in λ_2, suggesting its insensitivity to this parameter. Conversely, MPSNR exhibits rapid changes with variations in λ_1, indicating its sensitivity to this parameter. Similarly, Fig. 7(b) shows that the value of MSSIM is also insensitive to λ_2 but relatively sensitive to λ_1. However, within a large range, MSSIM remains stable at a high level. Hence, λ_2 can be considered insensitive while λ_1 is sensitive. Furthermore, λ_1 represents the intensity of TV regularization; therefore, its value can be determined accordingly.
V. CONCLUSION
In this article, we propose a novel MSI denoising model with an adaptive loss function and multiple image prior model-based regularization. Specifically, we apply the NMoG distribution to model the complex and unknown noise distribution, and then obtain the weighted l2-norm loss function by solving the noise model with the VEM algorithm. The weight involved in the loss function is adaptively and efficiently learned from the observed MSI. In addition, we model the nonlocal spatial similarity, spatial and spectral correlation, and local smoothness priors comprehensively and integrate them into the regularization term. Finally, we conduct simulation and real-data experiments to demonstrate that the proposed method can effectively reduce the noise compared with state-of-the-art methods. | 2023-07-08T13:37:18.311Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "b7b183ffc5ebf9f0fdb98459de775ce4c4cf1736",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/4609443/4609444/10154156.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "b7b183ffc5ebf9f0fdb98459de775ce4c4cf1736",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
234312675 | pes2o/s2orc | v3-fos-license | Vital Protocol for Reliability and Accuracy of PolyWareTM Measurements
Background: PolyWare™ software (PW) has been exclusively used in most of the polyethylene wear studies of total hip arthroplasty (THA). However, we found that PolyWare™ (PW) measurements can be significantly inaccurate and unrepeatable depending on imaging conditions or subjective manipulation choices. In this regard, this study reveals the required conditions to achieve the best accuracy and reliability of the PW measurements. Methods: The experiment examined the dependency of PW on several measurement conditions. The X-ray images of in-vitro THA prostheses were acquired under a clinical X-ray scanning condition. A liner wear of 6.67 mm, an acetabular lateral inclination of 36.7° and an anteversion of 9.0° were simulated. Results: Among all the imported X-ray images, those with a resolution of 1076×1076 exhibited the best standard deviation in wear measurements, as small as 0.01 mm, and the least occurrences of blurriness. The edge detection area specified as non-squared and off the femoral head center exhibited the most occurrences of blurriness. At the X-ray scanning moment, an eccentric placement of the femoral head center by 15 cm superior to the X-ray beam center led to an acetabular anteversion error of up to 5.3°. Conclusion: The results request researchers to observe the following conditions: 1) the original X-ray image should be a 1076×1076 squared X-ray image; 2) the edge detection area should be specified as a square with edge lengths of 5 times the diameter of the femoral head, centered at the femoral head center; 3) the femoral head center or acetabular center should be placed as close to the center line of the X-ray beam as possible at the X-ray scanning moment.
Full-text
Due to technical limitations, full-text HTML conversion of this manuscript could not be completed.
However, the manuscript can be downloaded and accessed as a PDF.
Figure 1. Error (in absolute value) in the wear for spatial eccentricity modes of the femoral head in the original X-ray images.
Figure 2. Errors (in absolute value) in the acetabular anteversion for spatial eccentricity modes of the femoral head in the original X-ray images.
Figure 3. Measurement of acetabular anteversion using a CAD to investigate the effect of eccentricity of the prosthesis from the center of the X-ray beam on the acetabular anteversion. The same X-ray images used for polyethylene measurements were also used for the measurement using a CAD software, i.e. Rapidform 2006® (INUS Technology, Seoul, Korea). The superior and inferior placements of the prosthesis bring about errors in acetabular anteversion, by the nature of perspective X-ray imaging.
Figure 4. Overall process scheme of the current study. X-ray images of the initial (left) and final (right) positions, simulating the wear of the cup by a translation of the femoral stem of 6.67 mm normal to the equator plane of AC.
Figure 7. Measured values for PolyWare evaluation. (a) Wear. X-ray images of the initial (left) and final (right) positions, simulating the wear of the cup by a translation of the femoral stem of 6.67 mm normal to the equator plane of AC.
Figure 8. Eccentricity comparison test setup, i.e. nine spatial eccentricity modes. With respect to the center of the X-ray detector, nine spatial eccentricity locations of the THA prostheses were set up to figure out how the eccentricity of the component location affected PolyWare measurement results.
Figure 9. Blur of the edge detection area. For the same X-ray image, different specifications of rectangular edge detection areas result in different image sharpness. The left is shown normal, but the right is shown blurred. In the normal case, the rectangular edge detection area is specified such that its center is positioned at the very center of the femoral head. In the blurred case, in contrast, the rectangular edge detection area is specified such that its center is positioned considerably off the center of the femoral head, and the edge detection area becomes blurred.
Figure 10. Three ways of specification of the edge detection area. Edge detection area assigned as a rectangle whose edge lengths were 5 times (5Dh) square or 7 times (7Dh) square of the diameter of the femoral head | 2021-05-11T00:04:19.392Z | 2021-01-11T00:00:00.000 | {
"year": 2021,
"sha1": "2a51dfa0a392efb1714cda7585bff833422cacb7",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-142211/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d570b790588a5874f127df971aa42b1d1778eb6e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119655467 | pes2o/s2orc | v3-fos-license | On the Lyapunov numbers
We introduce and study the Lyapunov numbers -- quantitative measures of the sensitivity of a dynamical system $(X,f)$ given by a compact metric space $X$ and a continuous map $f:X \to X$. In particular, we prove that for a minimal topologically weakly mixing system all Lyapunov numbers are the same.
Introduction
Throughout this paper (X, f ) denotes a topological dynamical system, where X is a compact metric space with metric d and f : X → X is a continuous map.
The notion of sensitivity (sensitive dependence on initial conditions) was first used by Ruelle [14]. According to the works by Guckenheimer [10], Auslander and Yorke [5] a dynamical system (X, f ) is called sensitive if there exists a positive ε such that for every x ∈ X and every neighborhood U x of x, there exist y ∈ U x and a nonnegative integer n with d(f n (x), f n (y)) > ε.
The topology obtained from the metric d f is usually strictly coarser than the original d topology. When we use a term like "open", we refer exclusively to the original topology.
A point x ∈ X is called Lyapunov stable if for every ε > 0 there exists a δ > 0 such that rad(U_x) < δ implies rad_f(U_x) ≤ ε, where rad denotes the radius of a set with respect to d and rad_f the radius with respect to d_f. This condition says exactly that the sequence of iterates {f^n : n ≥ 0} is equicontinuous at x. Hence, such a point is also called an equicontinuity point. We label the associated point sets Eq_ε(f) := {x ∈ X : rad_f(U_x) ≤ ε for some neighborhood U_x of x} and Eq(f) := ∩_{ε>0} Eq_ε(f).
As the label suggests, Eq(f) is the set of equicontinuity points. If Eq(f) = X, i.e. every point is equicontinuous, then the two metrics d and d_f are topologically equivalent and so, by compactness, they are uniformly equivalent. Such a system is called equicontinuous. Thus, (X, f) is equicontinuous exactly when the sequence {f^n : n ≥ 0} is uniformly equicontinuous. If the G_δ set Eq(f) is dense in X then the system is called almost equicontinuous. On the other hand, if Eq_ε(f) = ∅ for some ε > 0 then this is the same as saying that the system shows sensitive dependence upon initial conditions or, more simply, that (X, f) is sensitive. We define L_r := sup{ε : for every x ∈ X and every neighborhood U_x of x there exist y ∈ U_x and a nonnegative integer n with d(f^n(x), f^n(y)) > ε} and call it the (first) Lyapunov number. It can happen that Eq_ε(f) ≠ ∅ for all positive ε and yet still the intersection, Eq(f), is empty (see [3]). This cannot happen when the system is transitive† (Glasner and Weiss [9], Akin et al [2]). Theorem 1.2. Let (X, f) be a topologically transitive dynamical system. Exactly one of the following two cases holds.
Case i (Eq(f) ≠ ∅) Assume there exists an equicontinuity point for the system. The equicontinuity points are exactly the transitive points, i.e. Eq(f) = Trans(f), and the system is almost equicontinuous. The map f is a homeomorphism and the inverse system (X, f^{-1}) is almost equicontinuous. Furthermore, the system is uniformly rigid, meaning that some subsequence of {f^n : n = 0, 1, ...} converges uniformly to the identity. Case ii (Eq(f) = ∅) Assume the system has no equicontinuity points. The system is sensitive, i.e. there exists ε > 0 such that Eq_ε(f) = ∅.
If (X, f ) is a minimal dynamical system then it is either sensitive or equicontinuous.
Let us define
Equ_ε(f) := {x ∈ X : there is a neighborhood U_x of x such that d(f^n(x), f^n(y)) ≤ ε for every y ∈ U_x and every nonnegative integer n}. Obviously Eq(f) = ∩_{ε>0} Equ_ε(f), and if Equ_ε(f) = ∅ for some ε > 0 then the system (X, f) is sensitive (see also Proposition 1.1). Therefore, it is natural to define L_d := sup{ε : in any opene U ⊂ X there exist x, y ∈ U and there is a positive integer n with d(f^n(x), f^n(y)) > ε} and call it the second Lyapunov number. † We recall the definition in Section 3.
According to Proposition 1.1 we will also define L̄_r := sup{ε : for every x ∈ X and every open neighborhood U_x of x there exists y ∈ U_x with lim sup_{n→∞} d(f^n(x), f^n(y)) > ε} and L̄_d := sup{ε : in any opene U ⊂ X there exist x, y ∈ U with lim sup_{n→∞} d(f^n(x), f^n(y)) > ε}. Sometimes it will be useful to use also the following notations: L_1 := L_r, L_2 := L_d, L_3 := L̄_r, L_4 := L̄_d. So, the various definitions of sensitivity formally give us different Lyapunov numbers, quantitative measures of these sensitivities.
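These quantities can also be explored numerically. The sketch below estimates L_d for the doubling map on the circle, replacing the suprema in the definition by a finite grid of small balls and a finite time horizon; the choice of map, radius and horizon are illustrative assumptions and are not taken from the text above.

```python
import numpy as np

# For the doubling map f(x) = 2x mod 1, every opene set eventually spreads
# over the whole circle, so L_d should equal the diameter 1/2 of the
# circle metric; the finite-horizon estimate below approaches that value.

def f(x):
    return (2.0 * x) % 1.0

def circle_dist(a, b):
    d = np.abs(a - b) % 1.0
    return np.minimum(d, 1.0 - d)

def estimate_L_d(radius=1e-3, horizon=30, n_centers=200, n_samples=50):
    # For each small ball U, track max_n diam(f^n(U)) via sampled points,
    # then take the minimum over balls (the 'in any opene U' quantifier).
    worst_over_balls = np.inf
    for c in np.linspace(0.0, 1.0, n_centers, endpoint=False):
        pts = (c + radius * (2 * np.random.rand(n_samples) - 1)) % 1.0
        best = 0.0
        for _ in range(horizon):
            pts = f(pts)
            best = max(best, circle_dist(pts[:, None], pts[None, :]).max())
        worst_over_balls = min(worst_over_balls, best)
    return worst_over_balls

print(estimate_L_d())  # typically close to 0.5
```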
In Section 2 we prove that for a topological dynamical system (X, f) it holds that L_d ≤ 2L̄_r. In Section 3 we examine the equalities between the Lyapunov numbers for topologically transitive systems, and in Section 4 for weakly mixing systems. In particular, we prove that for topologically weakly mixing minimal systems all Lyapunov numbers are the same. Finally, in Section 5 we give some examples and open problems for Lyapunov numbers.
Acknowledgements. We thank the anonymous reviewer for helpful remarks and suggestions. The first author was supported by Max-Planck-Institut für Mathematik (Bonn); he acknowledges the hospitality of the Institute.
A general inequality for the Lyapunov numbers
Directly from the definitions, the following inequalities hold: L̄_r ≤ L_r ≤ L_d and L̄_r ≤ L̄_d ≤ L_d. Proposition 2.1. Let (X, f) be a topological dynamical system. Then L_d ≤ 2L̄_r. Proof. Let L_d be the second Lyapunov number of (X, f). Fix a (small enough) δ > 0, a point x ∈ X and a neighborhood U_x of x. Let U_0 = U_x and let n_0 be the first positive integer for which diam(f^{n_0}(U_0)) > L_d − δ. There exists a point y_0 ∈ U_0 such that d(f^{n_0}(x), f^{n_0}(y_0)) > (L_d − δ)/2. Choose an opene U_1 with its closure contained in U_0 such that y_0 ∈ U_1 and diam(f^m(U_1)) ≤ δ/2 for every non-negative integer m ≤ n_0. Let n_1 be the first positive integer for which diam(f^{n_1}(U_1)) > L_d − δ. By the definition of U_1, we clearly have n_1 > n_0.
We define recursively opene sets U_2, U_3, ... and positive integers n_2, n_3, ... as follows. Once n_{k−1} is defined, there exists, by the triangle inequality, a point y_{k−1} ∈ U_{k−1} such that d(f^{n_{k−1}}(x), f^{n_{k−1}}(y_{k−1})) > (L_d − δ)/2. Choose an opene U_k with its closure contained in U_{k−1} such that y_{k−1} ∈ U_k and diam(f^m(U_k)) ≤ δ/2 for every non-negative integer m ≤ n_{k−1}, and let n_k be the first positive integer for which diam(f^{n_k}(U_k)) > L_d − δ. As in the previous step, by the definition of U_k we clearly have n_k > n_{k−1}.
If y is a point of the nonempty intersection ∩_k U_k, then, obviously, y ∈ U_x and, for every k, d(f^{n_k}(x), f^{n_k}(y)) ≥ d(f^{n_k}(x), f^{n_k}(y_k)) − diam(f^{n_k}(U_{k+1})) > (L_d − δ)/2 − δ/2, so that lim sup_{n→∞} d(f^n(x), f^n(y)) ≥ L_d/2 − δ. Since x, U_x and δ > 0 were chosen arbitrarily, this gives L̄_r ≥ L_d/2, i.e. L_d ≤ 2L̄_r.
As a consequence of the inequalities at the beginning of Section 2 and Proposition 2.1 we conclude that L_i ≤ 2L_j for any i, j ∈ {1, 2, 3, 4}.
Lyapunov numbers for transitive maps
Recall that a dynamical system (X, f) is called topologically transitive if for every pair of opene sets U, V ⊂ X there is a nonnegative integer n with f^n(U) ∩ V ≠ ∅, and that a point whose orbit is dense in X is called a transitive point; Trans(f) denotes the set of transitive points. If (X, f) is topologically transitive and X is compact, then the set of transitive points is a G_δ-dense subset of X.
If every point of a dynamical system (X, f) is transitive, then this system is called minimal. An f-invariant closed subset M ⊂ X is called minimal if the orbit of any point of M is dense in M (in this case a point of M is called minimal, too). For a dynamical system (X, f), a point x ∈ X and a set U ⊂ X let n_f(x, U) := {n ∈ Z_+ : f^n(x) ∈ U}, and for sets U, V ⊂ X let n_f(U, V) := {n ∈ Z_+ : f^n(U) ∩ V ≠ ∅}. A point x ∈ X is said to be recurrent if for every neighborhood U of x the set n_f(x, U) is infinite. Theorem 3.1. Let (X, f) be a sensitive topologically transitive dynamical system. Then L_d = L̄_d. Proof. By the definition of L_d, for any ε < L_d and for any opene U ⊂ X there are points x, y ∈ U and a positive integer n_0 such that d(f^{n_0}(x), f^{n_0}(y)) > ε. Choose an arbitrary (small) δ > 0. Let U_x, U_y ⊂ U be neighborhoods of x and y such that diam(f^{n_0}(U_x)) < δ and diam(f^{n_0}(U_y)) < δ. If z ∈ U_x is a transitive point, there is a positive integer m for which f^m(z) ∈ U_y. By the triangle inequality we have d(f^{n_0}(z), f^{n_0+m}(z)) > ε − 2δ.
Let U_z be a neighborhood of z such that U_z ⊂ U_x and f^m(U_z) ⊂ U_y. Then obviously diam(f^{n_0}(U_z)) < δ and diam(f^{n_0+m}(U_z)) < δ. Since a sensitive system has no isolated points, U_z is infinite. Therefore, the orbit of the point z visits U_z infinitely many times. If n_k is such that f^{n_k}(z) ∈ U_z, then f^{n_0+n_k}(z) = f^{n_0}(f^{n_k}(z)) ∈ f^{n_0}(U_z) and f^{n_0+n_k+m}(z) = f^{n_0+m}(f^{n_k}(z)) ∈ f^{n_0+m}(U_z) = f^{n_0}(f^m(U_z)) ⊂ f^{n_0}(U_y). And so, by the triangle inequality, d(f^{n_0+n_k}(z), f^{n_0+n_k+m}(z)) > ε − 4δ. Since both z and f^m(z) belong to U, we get lim sup_{n→∞} d(f^n(z), f^n(f^m(z))) ≥ ε − 4δ and hence L̄_d ≥ ε − 4δ. Since δ > 0 and ε < L_d were chosen arbitrarily, L̄_d ≥ L_d, and together with the opposite inequality from Section 2 this gives L_d = L̄_d.
A topologically transitive dynamical system (X, f ), where X has no isolated points, is called ToM if every point x ∈ X is either (topologically) transitive or minimal. ToM systems were introduced by Downarowicz and Ye in [6]. Since we do not require that both types are present (as in [6]), a minimal system is also ToM. If a ToM system is not minimal, then the set of minimal points is dense in X (because for a transitive, but non-minimal system, the set of non-transitive points is dense (see for instance [12])).
Theorem 3.2. Let $(X, f)$ be a sensitive ToM system. Then $L_r' = L_r$.
Proof. Fix a point $x \in X$. Let $U_x$ be a neighborhood of $x$ and let $\delta > 0$. By the definition of $L_r'$, there exist a point $y \in U_x$ and a positive integer $m$ such that $d(f^m(x), f^m(y)) > L_r' - \delta$. Take a neighborhood $U_y \subset U_x$ of the point $y$ such that $\operatorname{diam}(f^m(U_y)) < \delta$. Now, if $x$ is a transitive point, one can just repeat the idea of the proof of Theorem 3.1 for this case. If $x$ is not transitive, then it is minimal. Since $(X, f)$ is ToM, we can find a minimal point $z_1 \in U_y$, and therefore $d(f^m(x), f^m(z_1)) > L_r' - 2\delta$.
Consider the direct product system $(\overline{\mathrm{Orb}_f(x)} \times \overline{\mathrm{Orb}_f(z_1)}, f \times f)$, and let $M$ be a minimal subset of this system. Then, obviously, there is a point $(x, z_2) \in U_x \times \overline{\mathrm{Orb}_f(z_1)}$ which is minimal, and therefore (uniformly) recurrent for the map $f|_{\overline{\mathrm{Orb}_f(x)}} \times f|_{\overline{\mathrm{Orb}_f(z_1)}}$.
Clearly, every point of the form $(x, f^k(z_2))$, $k = 0, 1, 2, \ldots$, will be uniformly recurrent too. Since $z_1$ is minimal, we can take a positive integer $k$ such that $z_3 := f^k(z_2) \in U_y$. Therefore, we have $\limsup_{n\to\infty} d(f^n(x), f^n(z_3)) \ge L_r' - 2\delta$. Since $x$ and $\delta > 0$ were chosen arbitrarily, we get $L_r = L_r'$.
As a corollary of the last two theorems we conclude that the equalities $L_r = L_r'$ and $L_d = L_d'$ hold for minimal dynamical systems. And what can we say about dynamical systems for which $L_r = L_d$ holds?
Lyapunov numbers for weakly mixing maps
Recall that a topological dynamical system $(X, f)$ is called (topologically) weakly mixing if for any opene $U_1, U_2, V_1, V_2 \subset X$ there is a non-negative integer $n$ such that $f^n(U_1) \cap V_1 \neq \emptyset$ and $f^n(U_2) \cap V_2 \neq \emptyset$. Fix opene sets $V_x$ and $V_y$ in $X$ and a positive (small enough) number $\delta$ such that the distance between $V_x$ and $V_y$ is greater than or equal to $\operatorname{diam}(X) - \delta$.
As we have mentioned before, since the TDS $(X, f)$ is minimal, any point of $X$ is uniformly recurrent. In particular, it means that $n_f(x, V_x)$ is a syndetic subset of $\mathbb{Z}_+$. On the other hand, $(X, f)$ is also a topologically weakly mixing dynamical system, and this means that $n_f(U, V_y)$ is a thick subset of $\mathbb{Z}_+$. Hence $n_f(x, V_x) \cap n_f(U, V_y) \neq \emptyset$, and therefore there exist a point $y \in U$ and a positive integer $n$ such that $f^n(x) \in V_x$ and $f^n(y) \in V_y$, whence $d(f^n(x), f^n(y)) \ge \operatorname{diam}(X) - \delta$. Since $\delta > 0$ was arbitrary, we get $L_r = \operatorname{diam}(X)$.
Two more open questions: 1. Does there exist a non-transitive dynamical system $(X, f)$ for which $L_d' > L_d$ and/or $L_r' > L_r$? 2. Does there exist a minimal dynamical system $(X, f)$ for which $L_d > L_r$?

Proposition 5.1. There exists a topological dynamical system $(X, f)$ for which $L_r' = 2L_r$.
Proof. We define the space $X$ as a compact surface in $\mathbb{R}^3$ which is homeomorphic to a two-dimensional disk in $\mathbb{R}^2$. More precisely, the cylindric coordinates of a point $(x, y, z) \in X$ have the form $(r, \varphi, z)$, where $r = \sqrt{x^2 + y^2}$ and $\varphi$ is an angle for which $x = r\cos\varphi$ and $y = r\sin\varphi$. In other words, $(r, \varphi)$ are the polar coordinates of $(x, y)$, and $z$ remains unchanged. Let $h(r) = 8r(1 - r)$. Now, define $X$ as the set of points with cylindric coordinates $(r, \varphi, h(r))$, where $0 \le r \le 1$, $\varphi \in \mathbb{R}$, and let the Euclidean metric $d$ (in $\mathbb{R}^3$) be the metric on $X$.
Let $p \in X$ and let $U$ be a neighborhood of $p$. If $p \neq (0, 0, 0)$, then for any $\delta > 0$ there are $n \in \mathbb{N}$ and $q \in U$ such that $d(f^n(p), f^n(q)) > 2 - \delta$. If $p = (0, 0, 0)$, then there are $n \in \mathbb{N}$ and $q \in U$ for which $f^n(q)$ lies on the circumference of $X$ with center $(0, 0, 2)$ (in $\mathbb{R}^3$) and radius $\frac{1}{2}$. For these $n$ and $q$ we have $d(f^n(p), f^n(q)) > 2$, and so $L_r' \ge 2$. Now, let $p \neq (0, 0, 0)$. The equality $\lim_{n\to\infty} d(f^n(p), f^n(q)) = 1$ holds for any $q \neq p$. So $L_r \le 1$. Since $L_r' \le 2L_r$ (by Proposition 2.1), it gives $L_r' = 2L_r$.
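The two distance estimates used in this proof are elementary Euclidean geometry on the surface $X$; below is a minimal numerical check, assuming only the parametrization $(r, \varphi, h(r))$ with $h(r) = 8r(1-r)$ given above (the map $f$ itself is not needed for these two facts):

```python
import numpy as np

def point(r, phi):
    """Point of X in R^3 with cylindric coordinates (r, phi, h(r)), h(r) = 8r(1 - r)."""
    return np.array([r * np.cos(phi), r * np.sin(phi), 8 * r * (1 - r)])

origin = point(0.0, 0.0)   # the point p = (0, 0, 0) of X

# The circle r = 1/2 of X lies at height h(1/2) = 2, i.e. it is the circumference
# of radius 1/2 centered at (0, 0, 2) used in the proof; its distance to p exceeds 2:
print(np.linalg.norm(point(0.5, 0.0) - origin))               # sqrt(1/4 + 4) ~ 2.06 > 2

# Antipodal points on that circle are at distance exactly 1, consistent with
# lim_n d(f^n(p), f^n(q)) = 1 and hence with L_r <= 1:
print(np.linalg.norm(point(0.5, 0.0) - point(0.5, np.pi)))    # 1.0
```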
The idea of introducing and studying the Lyapunov numbers is derived from the following: 1. If some practical assumption holds for the behavior of a particular system, for example a physical object, we need to know how far we can go wrong in calculations if we mean to predict the evolution of the system over a long term. Merely knowing that errors could exist in the calculations of the future behavior of a system is not that useful, since, from the practical point of view, the existence of errors in calculations of almost all natural systems (as a result of inaccurate initial data) is a well-known fact. So, a quantitative analysis of sensitivity that determines to what extent one's calculations are accurate is of great interest. Comparison of the different Lyapunov numbers (the ones which are determined by the upper limit and the ones without a limit) demonstrates that errors in calculations cannot disappear (decrease) as time passes. That is, we cannot expect that, for example, after 10000 or 1000000 steps the accuracy of our prediction increases significantly (which might otherwise seem commonsensical).
2. According to the Auslander theorem, one of the most important theorems in topological dynamics, any proximal cell (i.e., $\mathrm{Prox}_f(x) := \{y \in X : \liminf_{n\to\infty} d(f^n(x), f^n(y)) = 0\}$) contains a minimal point [4]. This implies, in particular, that a distal point is always minimal. It should be noted that if $(X, f)$ is a weakly mixing dynamical system, then for every $x \in X$ the proximal cell $\mathrm{Prox}_f(x)$ is dense in $X$ [3]. What about this property for sensitive topologically transitive systems, in particular for the Devaney systems (i.e., topologically transitive systems with a dense set of periodic points)? There is a direct connection between this question and the following one: When does $L_r' = L_r$ hold for a sensitive topologically transitive system? | 2013-03-25T16:24:47.000Z | 2013-03-01T00:00:00.000 | {
"year": 2013,
"sha1": "c49bc1d9d0a585dfa2a69472397327aebe9c9a6c",
"oa_license": null,
"oa_url": "https://pure.mpg.de/pubman/item/item_3121221_1/component/file_3121222/Kolyada_Lyapunov_oa_2013.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c49bc1d9d0a585dfa2a69472397327aebe9c9a6c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
8877521 | pes2o/s2orc | v3-fos-license | Exact short-time height distribution in 1D KPZ equation with Brownian initial condition
The early time regime of the Kardar-Parisi-Zhang (KPZ) equation in $1+1$ dimension, starting from a Brownian initial condition with a drift $w$, is studied using the exact Fredholm determinant representation. For large drift we recover the exact results for the droplet initial condition, whereas a vanishingly small drift describes the stationary KPZ case, recently studied by weak noise theory (WNT). We show that for short time $t$, the probability distribution $P(H,t)$ of the height $H$ at a given point takes the large deviation form $P(H,t) \sim \exp{\left(-\Phi(H)/\sqrt{t} \right)}$. We obtain the exact expressions for the rate function $\Phi(H)$ for $H<H_{c2}$. Our exact expression for $H_{c2}$ numerically coincides with the value at which WNT was found to exhibit a spontaneous reflection symmetry breaking. We propose two continuations for $H>H_{c2}$, which apparently correspond to the symmetric and asymmetric WNT solutions. The rate function $\Phi(H)$ is Gaussian in the center, while it has asymmetric tails, $|H|^{5/2}$ on the negative $H$ side and $H^{3/2}$ on the positive $H$ side.
Many works have been devoted to studying the 1D continuum KPZ equation [1][2][3][4], which describes the stochastic growth of an interface of height h(x, t) at point x and time t, starting from a given initial condition h(x, t = 0).
Recently, the short time behaviour of the KPZ equation has been investigated [15,[20][21][22][23][24]. It was found that the probability distribution function (PDF) of the height $H$ at a given point, see below, takes the following large deviation form
$$P(H, t) \sim \exp\left(-\Phi(H)/\sqrt{t}\right), \quad H \text{ fixed and } t \ll 1, \qquad (2)$$
where the rate function $\Phi(H)$ depends on the initial condition [25]. Three types of initial conditions (IC) have been studied so far: the droplet IC (also called sharp wedge or curved), the flat IC and the stationary IC. Universal features emerge: (i) the center of the distribution, associated to typical fluctuations, is Gaussian, i.e. $\Phi(H) \simeq c(H - H_0)^2$ for $|H - H_0| \ll 1$, where here and below $H_0 := \langle H \rangle$, and corresponds to the Edwards-Wilkinson [26] scaling $H \sim t^{1/4}$; (ii) the tails are asymmetric and exhibit power law exponents, $\Phi(H) \simeq c_-|H|^{5/2}$ for $H$ large negative and $\Phi(H) \simeq c_+ H^{3/2}$ for $H$ large positive, where the exponents do not depend on the initial condition but the prefactors do.
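The link between the Gaussian center and the $t^{1/4}$ scaling quoted above is worth making explicit. Using only Eq. (2) and $\Phi(H) \simeq c(H - H_0)^2$, one finds
$$P(H, t) \sim \exp\left(-\frac{c\,(H - H_0)^2}{\sqrt{t}}\right) \quad\Longrightarrow\quad \mathrm{Var}(H) \simeq \frac{\sqrt{t}}{2c}, \qquad H - H_0 \sim t^{1/4},$$
which is indeed the Edwards-Wilkinson scaling of the typical fluctuations.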
Two methods have been used to obtain some properties of the rate function. The weak noise theory (WNT) uses a saddle point evaluation of the dynamical action associated to the KPZ equation (1), using $1/\sqrt{t}$ as a large parameter [21,23,24,27]. Until now these saddle point equations have been solved analytically only (i) near the center of the distribution and (ii) in the two tails. This led to predictions for $c$, $c_\pm$, while the complete shape of $\Phi(H)$ could be obtained only numerically. The second method uses an exact formula, in terms of a Fredholm determinant, for the moment generating function of $e^H$, valid at any time $t$. These formula however are available only for the three IC mentioned above, and until now led to the determination of $\Phi(H)$ only for the droplet IC [22], which we refer to as $\Phi_{\rm drop}(H)$, in agreement with earlier results [15] for the three lowest cumulants of $H$. Note that, contrary to the WNT, it yields an exact formula for $\Phi_{\rm drop}(H)$, recently confirmed in numerical simulations of lattice directed polymer models [22,28]. For droplet IC the two methods were found to agree, leading to $c_- = \frac{4}{15\pi}$, $c_+ = \frac{4}{3}$ and $c = \frac{1}{\sqrt{2\pi}}$. The flat IC was studied in [21] using the WNT, leading to $c_- = \frac{8}{15\pi}$, $c_+ = \frac{4}{3}$ and $c = \frac{\sqrt{\pi}}{2\sqrt{2}}$, showing that the amplitude of the left tail depends on the initial condition.
Interesting connections seem to arise with the (a priori quite different) large deviation tails observed at large times. On the positive $H$ side, the form $P(H, t) \sim \exp\left(-\frac{4}{3}\, t\,(H/t)^{3/2}\right)$ was shown to hold both for droplet and flat IC, implying that the right tail is established at early times [29]. On the negative $H$ side a similar feature was recently found in Ref. [30] for droplet IC. It would be interesting to understand if these properties hold for a broader class of initial conditions. Recently, Janas, Kamenev and Meerson [24] studied stationary initial conditions using the WNT. On the negative $H$ side they found $c_- = \frac{4}{15\pi}$. A surprising feature arises on the positive $H$ side, where for $H > H_{c2}$ a spontaneous symmetry breaking of reflection invariance occurs, leading to the coexistence of symmetric and asymmetric solutions. The value $H_{c2} \approx 1.85$ was obtained numerically in [24]. While the symmetric solution gives $c_+ = 4/3$, the asymmetric ones give $c_+ = 2/3$, i.e. the same amplitude as the late time Baik-Rains distribution [7]. An outstanding question is whether similar results can be obtained using the exact solution.
The aim of this Letter is to use the available exact Fredholm determinant representation, valid for all times, to obtain the exact short time rate function $\Phi(H)$ for a broader class of initial conditions, which interpolate between the droplet and the stationary IC's. We consider the Brownian IC in presence of a drift, $h(x, t = 0) = B(x) - w|x|$, where $B(x)$ is the unit two-sided Brownian motion with $B(0) = 0$, and $w$ is the drift. The limit $w \to 0^+$ is called the stationary initial condition (the distribution of height differences at different points being time-independent) while the limit $w \to +\infty$ yields the droplet IC. We will show that the rate function $\Phi(H)$ depends only on the scaled drift variable $\tilde w = w t^{1/2}$. For $\tilde w \to +\infty$, we recover the result of [22] as a useful check. The limit $\tilde w \to 0$ continuously leads to the main result of our paper, namely the stationary case.
As in [22] we define the shifted height at a point $x$ as in Eq. (4), and for now we focus on the random variable $H = H(0, t)$. We show that its distribution $P(H, t)$ takes the form (2). From the Fredholm determinant formula we obtain unambiguously the exact form of $\Phi(H)$ for $H \le H_c(\tilde w)$, where $H_c(0) = 0$, see Eqs. (19), (154), which leads to an exact formula for the cumulants $\langle H^p \rangle_c$, see Eq. (22). As in the droplet case, a first analytic continuation is required to obtain $\Phi(H)$ for $H > H_c(\tilde w)$, and is given in (154). A new feature arises at the value $H = H_{c2}(\tilde w)$, where the validity of the first analytic continuation ends. We obtain $H_{c2}(0) \approx 1.85316$, consistent with the numerical estimate of [24], which suggests that this is the same critical point. We propose two continuations for $\Phi(H)$ for $H > H_{c2}(\tilde w)$, given in (30): an analytic one, which leads to $c_+ = 4/3$, apparently corresponding to the symmetric WNT solution, and a non-analytic one, which leads to $c_+ = 2/3$, corresponding to the asymmetric WNT solution. Our result for $\Phi(H)$ is plotted in Fig. 1, with the asymptotic behaviors quoted above, where $c_+ = 4/3$ for the analytic branch and $c_+ = 2/3$ for the non-analytic one. Our results for all continuations of $\Phi(H)$ are compared with the numerical determination given in [24], and we observe [5] a point to point correspondence between our rate function, the symmetric non-optimal action and the asymmetric optimal action of [24].

FIG. 1. The rate function $\Phi(H)$ of Eq. (2), which describes the distribution of the KPZ height $H = H(x = 0, t)$ at small time for the stationary initial condition ($w = 0$), with $\Phi(0) = 0$ and $\langle H \rangle = 0$. The blue line corresponds to the exact solution for $H < 0$, the dashed red line corresponds to a first analytic continuation for $0 < H < H_{c2}$, the dot-dashed green line corresponds to a second symmetric analytic continuation for $H > H_{c2}$, and the dot-dashed brown line corresponds to a second asymmetric non-analytic continuation for $H > H_{c2}$, where $H_{c2} \approx 1.85316$ is discussed in the text. Note the symmetric continuation, with $c_+ = 4/3$, is not the optimal one in the sense of WNT, and the asymmetric continuation with $c_+ = 2/3$ is regarded as the optimal one.

Let us start by recalling the exact formula obtained in [31][32][33] for the initial condition (3), with $H = H(x, t)$ and $x = 0$ (for details and general $x$ see [5]). One needs to introduce $\tilde H = H + \chi$, where $\chi \in \mathbb{R}$ is a random variable, independent of $H$, with probability distribution $p(\chi)d\chi = e^{-2w\chi - e^{-\chi}} d\chi / \Gamma(2w)$. Then the moment generating function is given by Eq. (8), where $\langle \cdots \rangle$ denotes an average over the KPZ noise, the random initial condition and the random variable $\chi$. Here $Q_t(s)$ is a Fredholm determinant associated to the kernel (10), defined in terms of the weight function
$$\sigma_{t,s}(u) := \sigma(t^{1/3}(u - s)), \qquad \sigma(v) := \frac{1}{1 + e^{-v}}, \qquad (11)$$
and of the deformed Airy kernel, itself defined from the deformed Airy function (13). In principle, the formula (8) allows us to obtain, via a Laplace inversion, the PDF of $H$ for arbitrary $t$. We now show how to extract the small time behavior directly from the generating function (8). We recall the trace formula for Fredholm determinants, Eq. (14), a convenient form to study the small $t$ limit. Our strategy throughout will be to consider the small $t$ limit at fixed $\tilde w = w t^{1/2}$. To calculate the traces in equation (14) we need the following asymptotic estimate, valid for fixed $\tilde v < 0$, $t \to 0$ and $\tilde\kappa, \tilde w$ fixed (see [5]), Eq. (15), where $\tilde v = \hat v - \ln \tilde w^2 + \ln t$, $f_{\tilde w}(\tilde v) = W_0(\tilde w^2 e^{-\tilde v + \tilde w^2}) - \tilde w^2$, and $W_0$ is the first branch of the Lambert W function, i.e.
$y = W_0(x)$ is the solution of $y e^y = x$. For $\tilde w \to +\infty$ one has $f_{\tilde w}(\tilde v) \to -\tilde v$, and the deformed Airy kernel yields the standard one up to a shift, see [5]; hence both sides of (15) identify with Eq. (18) in [22] (with $\hat v \to v$). Defining $\tilde s = s t^{1/3}$, the series (14) can be summed up, extending the derivation in [22] to arbitrary $\tilde w$, leading to [5] the formula (16), where the integral $\Psi(z)$ is defined for $z > -\tilde w^2$. Defining $z = t e^{-\tilde s}$, the exact formula for the generating function (8) takes the form (17) at small time. Note that the l.h.s. is finite only for $z > 0$ (for $z < 0$ it is infinite). We now want to extract from (17) information about the PDF of $H$. To this aim, we define $\chi' = \chi - \ln(\sqrt{t})$; inserting the assumed form (2) into (17), for any $z > 0$ we obtain $\Phi(H)$ by a saddle point analysis on $z$ and $\chi'$, the latter being exactly solved [5]. The range of optimization can be enlarged from $z > 0$ to $z > -\tilde w^2$, as the argument continuously extends on this domain. The rate function is then given as a generalized Legendre transform of $\Psi$.
This yields a parametric system of equations determining $\Phi'(H)$, see Fig. 3. Since $G(z)$ is monotonically decreasing, the solution $z(H)$ is unique. It is also possible to integrate this system to obtain a parametric equation for $\Phi(H)$. It is important to note that Eqs. (18), (19), (20) are valid for $z \in [-\tilde w^2, +\infty[$, hence so far we have solved the problem only for $H \le H_c(\tilde w)$. The extension is studied below. Note that from (19), $1 - G(-\tilde w^2)$ is a complete square, hence $H_c(\tilde w) \le 0$ for any $\tilde w$. We now extract from this solution the cumulants of $H$ and the left tail behavior. The most probable value is also the average $H_0 = \langle H \rangle$, determined by $\Phi'(H_0) = 0$. Noticing that $\Psi(0) = 0$ and that $\Psi'(z)$ is bounded, (20) and (19) imply that $e^{H_0} = 2\tilde w\,\Psi'(0) = \mathrm{Erfc}(\tilde w)\, e^{\tilde w^2}$, which implies that the average $H_0 = \langle H \rangle = 0$ for the stationary case. Expanding (19) around $H = H_0$ and $z = 0$ we obtain iteratively the derivatives $\Phi^{(q)}(H_0)$, and calculate the leading short time behavior of the cumulants of the height, given by Eq. (22), where $\phi^{(q)}$ is the $q$-th derivative of the Legendre transform $\phi(p)$ of $\Phi(H)$. We display here the first three cumulants [34] for small $\tilde w$ (see [5] for details), in agreement with the result of [24] for the second cumulant at $w = 0$. In addition we have checked the predictions for $\langle H^q \rangle_c$, $q = 1, 2, 3$, for arbitrary $\tilde w$ by a direct small time expansion of the KPZ equation [5]. It is also possible to obtain the left tail of $\Phi(H)$ from (19). For all $\tilde w$, $G(z)$ is decreasing and $\Psi(z) \simeq \frac{4}{15\pi}[\ln z]^{5/2}$ as $z \to +\infty$, which means that as $z$ increases to $+\infty$, $H$ decreases to $-\infty$. Inserting the asymptotics of $\Psi$ into (19), see [5], we obtain the left tail of the rate function, $\Phi(H) \simeq \frac{4}{15\pi}|H|^{5/2}$ as $H \to -\infty$, i.e. $c_- = \frac{4}{15\pi}$. This result is valid for all $\tilde w$, and is in agreement both with the droplet result [22,23] (for $\tilde w \to +\infty$) and the stationary result $w = 0$ [24].
An important check of our result (18) is that, for $\tilde w \to +\infty$, it recovers the exact formula [22] for the droplet IC. Indeed the function $\Psi(z)$ in (16) recovers the one of the droplet IC [5], $\lim_{\tilde w \to +\infty} \Psi(\tilde w^2 z) = -\frac{1}{\sqrt{4\pi}}\,\mathrm{Li}_{5/2}(-z)$. Defining $G_0(z)$, the analytic partner of $G(z)$ obtained by doing the minimal replacement $\Psi(z) \to \Psi(z) + \Delta_0(z)$ in (19), we see [5] that $G_0(z)$ is increasing. Hence as $z$ increases from $-\tilde w^2$ to $e^{-1-\tilde w^2}$, $H$ increases from $H_c(\tilde w)$ to $H_{c2}(\tilde w) = \ln G_0(e^{-1-\tilde w^2})$. In the stationary case, using (19), (25), $H_{c2}(0)$ is given by Eq. (26), hence $2H_{c2} \approx 3.70632$, to be compared with the value 3.7 in [24]. We also find that $\lim_{\tilde w \to +\infty} H_{c2}(\tilde w) = +\infty$, which means that for the droplet IC only one continuation is needed, i.e. $\hat H_{c2} = +\infty$, as found in [22]. However, for finite $\tilde w$ a second extension is needed to obtain $\Phi(H)$ for $H > H_{c2}(\tilde w)$. We now investigate the fundamental reason for this point to be special. This leads us to identify two extensions, by defining two other real partners of $\Psi(z)$. We now study their properties, and compare below with the work of [24]. When $z = e^{-1-\tilde w^2}$, the Lambert function inside $\Delta_0(z)$ in Eq. (24) equals $W_0(-e^{-1})$, which is the point where it exhibits a second-order branch point separating three branches $W_0$, $W_{-1}$ and $W_1$, only the first two being real valued (see Fig. 4 in [35]). For this reason, a natural continuation for $\Delta_0(z)$ is the function $\Delta_{-1}(z)$, defined by replacing the first branch $W_0$ of the Lambert function by the second real valued one, $W_{-1}$, in (24), leading to Eq. (27). As shown in [5], $\Psi(z)$ can then be continued by either of the following minimal replacements: $\Psi(z) \to \Psi(z) + \Delta_{-1}(z)$ and $\Psi(z) \to \Psi(z) + \frac{\Delta_0(z) + \Delta_{-1}(z)}{2}$.
We call the first replacement the symmetric continuation of $\Psi(z)$ and the second one the asymmetric continuation. They are defined on the interval $z \in\, ]0, e^{-1-\tilde w^2}]$, as $W_{-1}$ is real valued on the interval $[-e^{-1}, 0[$. Similarly, we define $G_{-1}(z)$ and $G_{-1/2}(z)$ as the continuations of $G_0(z)$ obtained by replacing $\Delta_0(z)$ by $\Delta_{-1}(z)$ and by $\frac{\Delta_0(z) + \Delta_{-1}(z)}{2}$, respectively. $G_{-1}(z)$ and $G_{-1/2}(z)$ are now decreasing functions: as $z$ decreases from $e^{-1-\tilde w^2}$ to $0^+$, $H$ increases from $H_{c2}(\tilde w)$ to $+\infty$ for both the symmetric and asymmetric continuations, therefore completing the range of $\Phi(H)$ by two extensions above $H_{c2}(\tilde w)$. Note that this construction yields a function $\Phi(H)$ with a symmetric continuation analytic at $H_{c2}(\tilde w)$ and an asymmetric continuation non-analytic at $H_{c2}(\tilde w)$, inducing a discontinuity in the second derivative $\Phi''(H_{c2}(\tilde w))$ for any finite $\tilde w$, see [5].
We now compare with the results of Ref. [24]. The fact that the value of $H_{c2}(0)$ obtained there coincides, up to their numerical precision, with our exact result strongly suggests that this is the same point. In [24] the authors found that at this point $\Phi(H)$ exhibits a second order phase transition, i.e. the second derivative $\Phi''(H)$ has a jump. They observe that this is due to a spontaneous breaking of the spatial reflection symmetry $x \to -x$ in the saddle point solutions of the dynamical action of the WNT. For $H > H_{c2}(0)$ they find three solutions: (i) a symmetric solution, which leads to a positive $H$ tail with $c_+ = \frac{4}{3}$, and (ii) a pair of asymmetric solutions with $c_+ = \frac{2}{3}$, and they claim that the asymmetric solutions dominate the dynamical action. The two continuations that we have identified very likely correspond to the two solutions found numerically in Ref. [24]. Indeed, overlapping the plot of the exact expression of $\Phi(H)$ with the numerical estimates of [24] provided by Janas, Kamenev and Meerson, we observe [5] that the non-analytic continuation of $\Phi(H)$ coincides point to point [25] with the value of the action obtained there from the asymmetric solution, and the analytic continuation of $\Phi(H)$ coincides with the symmetric one.
To summarize, for the stationary limit $\tilde w = 0^+$ we find the following parametric representation for $\Phi(H)$, made of three branches, the last one being composed of an analytic piece and a non-analytic piece. We recall the intervals and the relation between $H$ and $z$ in these intervals. For $z \in I_3$ and $H \in J_3$ there are two distinct relations. We then recall the relation between $\Phi(H)$ and $z$. For $z \in I_3$ there exist two branches for $\Phi(H)$, an analytic one and a non-analytic one, with different asymptotics, where $\Delta_0(z)$ is given in (24) and $\Delta_{-1}(z)$ in (27) (setting $\tilde w = 0^+$, which cancels the logarithmic terms). From the parametric representation of $\Phi(H)$ one obtains the asymptotic behaviors quoted above.
In conclusion we studied the statistics of the height fluctuations for the continuum KPZ equation at short time with the Brownian initial condition with a drift. We obtained an exact determination of the rate function Φ(H), which describes the stationary IC at zero drift, and recovers the droplet IC at large drift. It extends, through an exact solution, recent approaches using weak noise theory for the stationary geometry. We have obtained exactly the value H c2 at which a spontaneous symmetry breaking was found in WNT, showed that this phase transition should happen for any finite drift, and identified the symmetric and asymmetric solutions beyond that point. We hope it provides a further bridge between quite different methods to address large deviations in growth and particle transport problems.
We thank G. Schehr and S. Majumdar for very helpful discussions, and M. Janas, A. Kamenev and B. Meerson for providing their data from [24] and for their comments.
SUPPLEMENTARY MATERIAL
We give the principal details of the calculations described in the manuscript of the Letter.
SOLUTION OF CONTINUUM KPZ EQUATION WITH BROWNIAN INITIAL CONDITION
In this paper we study the KPZ equation (1) using everywhere units of space, time and height such that $\lambda_0 = D = 2$ and $\nu = 1$ in (1). Let us recall the solution obtained in [31][32][33] for the Brownian initial condition in its most general form, i.e. with two unequal drifts, at an arbitrary point $x$. The initial condition is $h(x, t = 0) = B(x) - w_+ x\,\theta(x) + w_- x\,\theta(-x)$, where $\theta(x)$ is the Heaviside unit step function and $B(x)$ a double-sided Brownian motion with $B(0) = 0$. Defining now $H = H(x, t)$ as in (4), and $\tilde H = H + \chi$, where $\chi \in \mathbb{R}$ is a random variable, independent of $H$, with probability distribution $p(\chi)d\chi = e^{-(w_+ + w_-)\chi - e^{-\chi}} d\chi / \Gamma(w_+ + w_-)$, it was shown in [31,32] that (in our units) the moment generating function takes a Fredholm determinant form, where, as in the text, $\langle \cdots \rangle$ denotes an average over the KPZ noise, the random initial condition and the random variable $\chi$. Here $Q_t(s)$ is a Fredholm determinant associated to a kernel built from $\sigma_{t,s}(u)$, defined in (11), and the deformed Airy functions, defined in (13). One can then rewrite it in the trace form used below; this can be seen, e.g., by expanding in the terms $\mathrm{Tr}(P_0 K)^p$ and exchanging the order of integrations. Specializing to $w_\pm = w$ and $x = 0$ one obtains (8), (9), (10) and (12) in the text.
THE LAMBERT FUNCTION W
We introduce the Lambert W function [35], which we use extensively throughout the Letter. Consider the function defined on $\mathbb{C}$ by $f(z) = z e^z$; the W function is composed of all inverse branches of $f$, so that $W(z e^z) = z$. It has two real branches, $W_0$ and $W_{-1}$, defined respectively on $[-e^{-1}, +\infty[$ and $[-e^{-1}, 0[$. On their respective domains, $W_0$ is strictly increasing and $W_{-1}$ is strictly decreasing. By differentiation of $W(z) e^{W(z)} = z$, one obtains a differential equation valid for all branches of $W(z)$,
$$W'(z) = \frac{W(z)}{z\,(1 + W(z))}. \qquad (38)$$
Concerning their asymptotics, $W_0$ behaves logarithmically for large argument, $W_0(z) \simeq \ln(z) - \ln\ln(z)$ as $z \to +\infty$, and is linear for small argument, $W_0(z) \simeq z$ as $z \to 0$. Both branches join at the point $z = -e^{-1}$ and have the value $W(-e^{-1}) = -1$. These remarks are summarized on Fig. 2. More details on the other branches, $W_k$ for integer $k$, can be found in [35].
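SciPy exposes precisely these two real branches, which makes the properties above easy to verify numerically; a short sketch (the tolerances are illustrative choices):

```python
import numpy as np
from scipy.special import lambertw

# Both real branches meet at the branch point z = -1/e with value -1:
z = -np.exp(-1.0)
print(lambertw(z, 0).real, lambertw(z, -1).real)        # -1.0  -1.0

# Inverse property W_0(z e^z) = z for z >= -1:
for x in (-0.5, 0.3, 2.0):
    assert abs(lambertw(x * np.exp(x), 0).real - x) < 1e-10

# Derivative identity (38), W'(z) = W(z) / (z (1 + W(z))), against finite differences:
z0, eps = 1.3, 1e-6
w0 = lambertw(z0, 0).real
fd = (lambertw(z0 + eps, 0).real - lambertw(z0 - eps, 0).real) / (2 * eps)
print(fd, w0 / (z0 * (1 + w0)))                          # the two values agree
```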
Asymptotics of the deformed Airy function
We are interested in the asymptotics of the deformed Airy function (13) with the arguments of (12), which correspond to the case $w_+ = w_- = w$ and $x = 0$. We scale the arguments so that all terms share the same scaling in time, allowing us to apply the steepest descent method. The scale of the first argument, $t^{-1/3}$, is imposed so that the weight function in Eq. (11) has an argument of order $O(1)$; this leads to the rescaling of the drift, $w = \tilde w t^{-1/2}$, as mentioned in the text, and to a rescaling of the integration variable, i.e. we define $\tilde\eta = \eta t^{1/6}$. We then obtain, using the asymptotics (39), an integral representation whose phase $\phi$ is defined in (42) (dropping the tilde on $\eta$ from now on for notational simplicity). Note that the explicit time dependent factor is harmless, as it can be absorbed by the redefinition $\tilde a := \hat a + \ln t - \ln \tilde w^2$, with $\hat a$ fixed as $t \to 0$, see below. To apply the steepest descent method, we look for the zeros of the derivative of the phase; these are given in terms of $W$, the Lambert W function (see Section 1.), where here and below primes denote derivatives w.r.t. the first argument. For the case of a real $\tilde a$ studied here, the argument of $W$ is positive, hence one chooses the branch $W_0$. This leads to a pair of zeros that are real for $\hat a < 0$, vanish at $\hat a = 0$ and become imaginary for $\hat a > 0$. The latter case corresponds to fast decaying behavior which, as in [22], we claim contributes subdominantly to the calculation of the traces. Hence we focus on the case $\hat a < 0$, which leads to oscillating behavior.
At the stationary points, the phase and its second derivative w.r.t. $\eta$ take the corresponding explicit forms. We now expand the integral around the two saddle points and sum their contributions.
Asymptotics of the deformed Airy kernel
To calculate the deformed Airy kernel, we first rescale the arguments in exactly the same way as in the previous calculation for the deformed Airy function. We obtain a double-integral representation where, in the last line, we have redefined $\eta \to \eta t^{-1/6}$, $\eta' \to \eta' t^{-1/6}$, and used again the asymptotics (39) of the Gamma function. The function $\phi(\eta, \tilde v)$ is defined in (42). Applying the steepest descent method on $\eta$ and $\eta'$, as in (43) in the previous section, we obtain the saddle points, where $W$ is the Lambert W function. Here we choose the branch $W_0$ of the Lambert function, which is the only one leading to a real saddle point. We again define $\tilde v = \hat v + \ln t - \ln \tilde w^2$ and $\tilde v' = \hat v' + \ln t - \ln \tilde w^2$ and study the case $\hat v, \hat v' < 0$, where the above saddle points are real. We now expand around the four saddle points and sum their contributions.
We are left with four terms; we drop the terms with the same sign, as they decay too quickly and are therefore subdominant, leading to (50), and similarly for $\eta_0'$ and $\tilde u'$. We now introduce $\tilde\kappa$ such that $\tilde v' = \tilde v + \tilde\kappa$ and study the limit $\tilde\kappa \to 0$. Taking the derivative w.r.t. $\tilde v$ of $\phi'(\eta_0(\tilde v), \tilde v) = 0$ gives $d\eta_0/d\tilde v$; we see that, to leading order in $\tilde\kappa$ in (50), the denominator cancels the second derivatives in the square root. Next, since $\frac{d}{d\tilde v}\phi(\eta_0(\tilde v), \tilde v) = \partial_{\tilde v}\phi(\eta_0, \tilde v)$ from the saddle point condition, one has $\phi(\eta_0', \tilde v') = \phi(\eta_0, \tilde v) + \tilde\kappa\,\frac{\eta_0}{2} + O(\tilde\kappa^2)$. Finally we use the addition formula for $\arctan$. We then define $\kappa = \tilde\kappa\, t^{-1/2}$ and drop the second term in the sine, which is subdominant. We obtain (52), which is valid in the limit $t \ll 1$, provided we define $\tilde v = \hat v + \ln t - \ln \tilde w^2$ and keep $\hat v < 0$ and $\tilde w > 0$ fixed in the limit. This leads to (15) in the text, with the branch $W_0$. Note that the asymptotics (52) involves only the value of the saddle point $\eta_0$, suggesting a more general asymptotic formula for kernels of a similar type.
Derivation of the function Ψ(z)
We start by deriving the formula for $Q_t(s)$ given in Eq. (16) of the Letter. The derivation follows very closely the one of Ref. [22]. From Eqs. (9) and (14) given in the Letter, one has the trace expansion, where $K_{Ai,\Gamma}(v, v')$, the deformed Airy kernel, and $\sigma_{t,s}$ are given in Eqs. (12) and (11) of the Letter (respectively). The expression $\sigma_{t,s}(v) = \sigma(t^{1/3}(v - s))$ suggests performing the change of variables $v_i \to v_i / t^{1/3}$, which yields (setting $\tilde s = s t^{1/3}$) a rescaled trace series. Let us now recall the representation of the deformed Airy kernel for the case $x = 0$. Using the short time asymptotics (52) of $K_{Ai,\Gamma}$, we get the kernel in terms of $f(\tilde v) = W_0(\tilde w^2 e^{-\tilde v + \tilde w^2}) - \tilde w^2$, with $\tilde v = \hat v + \ln t - \ln \tilde w^2$ and $W_0$ the first real branch of the Lambert W function (here we drop the subscript $\tilde w$ on $f$ as compared to the text). In particular, we define $\tilde v_j = \hat v_j + \ln t - \ln \tilde w^2$. We may now use the asymptotics of the deformed Airy kernel for $t \to 0$ and $\hat v_j < 0$ such that $W_0(\tilde w^2 e^{-\tilde v_j + \tilde w^2}) - \tilde w^2 > 0$; otherwise the kernel vanishes exponentially. Hence for $p \ge 2$, separating the center of mass coordinate (which we take as $v_1$) and the $p - 1$ relative coordinates $v_j = v_{j-1} + t^{1/2}\kappa_j$, we obtain the $p$-th trace. Combining the different results, and recalling that $\tilde v = \hat v + \ln t - \ln \tilde w^2$, we obtain the series for $\ln Q_t(s)$. It is then straightforward to perform the sum over $p$ for $z > -\tilde w^2$, and upon the change $\hat v \to -\hat v$ we obtain an integral representation of $\Psi(z)$. Performing the change of variable $y = W_0(\tilde w^2 e^{\hat v + \tilde w^2}) - \tilde w^2$, and using the definition and properties of the Lambert function and its derivative (38), we obtain the equivalent formula
$$\Psi(z) = \frac{1}{\pi} \int_0^{+\infty} dy \left(1 + \frac{1}{y + \tilde w^2}\right) \sqrt{y}\, \ln\left(1 + \frac{z e^{-y}}{y + \tilde w^2}\right), \qquad (60)$$
leading to (16) in the main text. Note the expression for the derivatives $\Psi^{(q)}(z)$, $q \ge 1$, obtained by differentiating under the integral (Eq. (61)).

3.2. The function $\Psi(z)$ for the stationary case $\tilde w = 0^+$

It is useful to study in detail the function $\Psi(z)$ for $\tilde w = 0$. We now show that it is non-analytic in $z$, but that for $z > 0$ it can be expanded in a power series in $u = \sqrt{z}$ as in (62), i.e. $\psi(u) := \Psi(u^2)$ can be Taylor expanded for $u > 0$. To calculate these derivatives, one can start from the expression (61) for $q = 1$, setting $y = x^2$. Using that $\frac{u}{\pi(u^2 + y^2)} \to \delta(y) + \ldots$ as $u \to 0^+$, we obtain $\psi'(0) = 2$. To obtain the higher derivatives we note a formula valid for any $q$. The odd derivatives are obtained using integration by parts, where we have used that $h(y) = W(y^2)/y^2 = \frac{1}{2}\sum_{n=0}^{+\infty} \frac{(n + \frac{1}{2})^{n-1}}{n!}(-y^2)^n$. The even derivatives, after integration by parts, are given by a similar expression. One can further perform integrations by parts, noting that, for $q = 1$, the change of variable $x = h(y)$, with $y = h^{-1}(x) = -\ln(x^2)/x^2$, $h(0) = 1$ and $h(+\infty) = 0$, leads to a convergent integral. For $q \ge 2$ we define $h_{\rm reg}(y) = h(y) - \sum_{n=1}^{2q-2} h^{(n)}(0)\, y^n / n!$, which leads to the "regularized version" of the (divergent) integral, given by analytic continuation. We have checked the correctness of the final formula. Putting all together we obtain the result given in (62).
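The final formula lends itself to a direct numerical check. The sketch below transcribes Eq. (60) as reconstructed above (the exact placement of the $\sqrt{y}$ factor is our reading of the garbled source, so treat the integrand as an assumption) and compares it with the large-$z$ tail $\Psi(z) \simeq \frac{4}{15\pi}[\ln z]^{5/2}$ quoted in the Letter:

```python
import numpy as np
from scipy.integrate import quad

def Psi(z, w2):
    """Eq. (60), reconstructed: (1/pi) int_0^inf (1 + 1/(y+w2)) sqrt(y) log(1 + z e^{-y}/(y+w2)) dy."""
    f = lambda y: (1.0 + 1.0 / (y + w2)) * np.sqrt(y) * np.log1p(z * np.exp(-y) / (y + w2))
    return quad(f, 0.0, np.inf, limit=200)[0] / np.pi

# Ratio to the predicted tail (4 / 15 pi) (ln z)^{5/2}; it slowly approaches 1:
for z in (1e6, 1e10, 1e14):
    print(Psi(z, w2=1.0) / ((4 / (15 * np.pi)) * np.log(z) ** 2.5))
```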
Saddle point equations
Defining $z = t e^{-\tilde s}$, we start from Eq. (17) of the text, which takes the form (70) at small time. Recalling that $\tilde H = H + \chi$, where $\chi$ is a random variable independent from $H$, the difficulty is now to extract the leading small time behavior of the cumulants of $H$, equivalently the function $\Phi(H)$. One route is to observe that from (70) one easily obtains the cumulants of $\tilde Z = e^{\tilde H}$ from the derivatives of the known function. In principle, to obtain the cumulants of $Z$ we can now use relations between the moments of $Z = e^H$ and of $\tilde Z$, i.e.
is the Pochhammer symbol. We have performed that exercise up to $q = 3$. We checked that indeed the leading small time behavior of $\langle Z^q \rangle$, and then of $\langle H^q \rangle$, could be extracted in this manner, and that it agrees with the small time expansion of the KPZ equation (see Section 10.). We have then verified that the limit $\tilde w \to 0$ produces the correct cumulants for $w = 0$ (which is far from a priori obvious in the intermediate steps of the calculation).
A more powerful method, which as we checked reproduces these results and allows us to obtain directly the function $\Phi(H)$, is as follows. We consider the leading behavior at fixed $\tilde w$, which implies that $w = \tilde w / \sqrt{t}$ is large. We define $\chi' = \chi - \ln(\sqrt{t})$, use Stirling's formula for the $\Gamma(2w)$ factor in the PDF of $\chi$ given in the text and Section 0, and write (71), where the second bracket denotes the average over the KPZ noise and initial condition only. We now define $R(z)$ to be the cumulant generating function of $e^H$. In (71), using $1/\sqrt{t}$ as a large parameter, we perform a saddle point and obtain the following relation between the functions $R$ and $\Psi$:
$$\Psi(z) = \min_{\chi'}\left(2\tilde w \chi' + e^{-\chi'} + R(z e^{\chi'})\right) - 2\tilde w + 2\tilde w \ln(2\tilde w).$$
Thus we have, with $X = z e^{\chi'}$, a relation which we invert to express $R$ in terms of $\Psi$. On the other hand, by substituting the anticipated form (2), we can perform the saddle point on the variable $X$,
$$X = e^{-H}\left(\pm\sqrt{\tilde w^2 + z e^H} - \tilde w\right).$$
By consistency with the droplet case we must take the positive root. Indeed for $\tilde w \to +\infty$ (78) gives $X \simeq z/(2\tilde w)$, and since $X = z e^{\chi'} = z e^{\chi}/\sqrt{t}$, this is consistent with the fact that for large $w$, $\chi$ becomes a deterministic variable equal to $-\ln(2w)$ (see Section 9.2). Taking the positive root we obtain the expression of the rate function in terms of the solution of a maximization problem (79). From the definition of $z$ the maximization was to be done for $z \ge 0$; yet, observing the domain of definition of $\Psi(z)$ and of the square root, we actually have the weaker constraints (80). As we will show, the second constraint is always verified, and we have thus defined in (79) the range of optimization by the first constraint.
The maximization problem is equivalent to the parametric system of equations (81). For completeness, we also have the following parametric relation: $\Phi'(H) = \tilde w - \sqrt{\tilde w^2 + z e^H}$ (see below, however, for a modification of this relation in some range of values of $H$).
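The parametric system can be implemented directly: pick $z$, compute $H = \ln G(z)$ and $\Phi'(H) = \tilde w - \sqrt{\tilde w^2 + z e^H} = -z\Psi'(z)$, then integrate $\Phi'$ along the curve. A sketch under the same assumptions as above (the reconstructed $\Psi$, and $G(z) = z\Psi'(z)^2 + 2\tilde w\,\Psi'(z)$, the form implied by the continuation formulas of Section 6):

```python
import numpy as np
from scipy.integrate import quad

def Psi_prime(z, w2):
    """dPsi/dz for the reconstructed Eq. (60)."""
    f = lambda y: (1.0 + 1.0 / (y + w2)) * np.sqrt(y) * np.exp(-y) / (y + w2 + z * np.exp(-y))
    return quad(f, 0.0, np.inf, limit=200)[0] / np.pi

def rate_function_branch(wtilde=1.0, zmax=50.0, npts=300):
    """Parametric (H, Phi) on the exactly-solved branch z in (0, zmax] (where H <= H_0)."""
    w2 = wtilde ** 2
    zs = np.linspace(1e-4, zmax, npts)
    ps = np.array([Psi_prime(z, w2) for z in zs])
    Hs = np.log(zs * ps ** 2 + 2 * wtilde * ps)       # e^H = G(z)
    dPhi = -zs * ps                                    # Phi'(H) = -z Psi'(z)
    # trapezoidal integration of Phi' dH along the curve; Phi ~ 0 at the z -> 0 end
    Phi = np.concatenate([[0.0], np.cumsum(0.5 * (dPhi[1:] + dPhi[:-1]) * np.diff(Hs))])
    return Hs, Phi

H, Phi = rate_function_branch()
```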
Analysis of the saddle point equations
Now that we have solved the optimization problem exactly, we wish to know whether it allows us to obtain all values of $H \in \mathbb{R}$. We know that the optimization has to be done in the interval $z \in [-\tilde w^2, +\infty[$, so we first investigate the behavior of $\Psi(z)$ and of $G(z)$ on these boundaries, and then use the monotonicity of $G(z)$ to extrapolate the range of $H$.
Behavior of $\Psi(z)$ for $z \to +\infty$
We recall the definition (60) of $\Psi$ for $z \ge -\tilde w^2$ and look for its asymptotics at large positive $z$ and fixed $\tilde w$. After an integration by parts we obtain a convenient representation. In the limit of large $z$ one can show that the fraction $\frac{z}{(y + \tilde w^2) e^y + z}$ can be replaced by either one or zero depending on which term in the denominator is larger, the change occurring for $(y + \tilde w^2) e^y = z$, which is equivalent to $y = W_0(z e^{\tilde w^2}) - \tilde w^2$ (similarly to the computation of the asymptotics of the polylogarithm function [36]). For fixed $\tilde w$ one can further neglect the $\arctan$ term in the integrand. Recalling that $W_0(z) \simeq \ln(z) - \ln\ln(z)$ as $z \to +\infty$, and expanding at large $z$ and fixed $\tilde w$, we finally find the tail (85). From (60), the expression of $\Psi'(z)$ at $z = -\tilde w^2$ is
$$\Psi'(-\tilde w^2) = \frac{1}{\pi}\int_0^{+\infty} dy \left(1 + \frac{1}{y + \tilde w^2}\right)\frac{\sqrt{y}}{(y + \tilde w^2)e^y - \tilde w^2}. \qquad (86)$$
In the small $\tilde w$ limit, this integral can be computed and behaves as (87); it can be seen that on that interval it is monotonically decreasing. One notes that $G(-\tilde w^2) = 1 - (1 - \tilde w\,\Psi'(-\tilde w^2))^2 \le 1$ and that, using the previous estimates, (88) holds. Hence, as one decreases $z$ from $+\infty$ to $-\tilde w^2$, $G(z)$ increases monotonically from 0 to $G(-\tilde w^2)$, with $0 < G(-\tilde w^2) \le 1$.
Recalling that $e^H = G(z)$, we find that for any given $H \in\, ]-\infty, H_c(\tilde w)]$ there is a unique solution $z(H)$, and that $H_c(\tilde w) \le 0$ (which justifies our neglect of the condition (80)). In the small $\tilde w$ limit, we find that $H_c(0) = 0$.
Derivatives of $\Phi(H)$ at $H = H_0$
Here we show how to identify the center of the distribution, $H_0 = \langle H \rangle$, and how to calculate iteratively the derivatives of $\Phi(H)$ at $H = H_0$, in order to obtain the cumulants. From the equations (81) we obtain, by integration, (89), where by definition of $H_0$, $\Phi(H_0) = 0$, corresponding to the value $z = 0$, i.e. $z(H_0) = 0$; this implies $\Phi'(H_0) = 0$, since $z\Psi'(z) \to 0$ as $z \to 0$. Expanding the last equation into a series in the first non-zero derivatives, we first recover $e^{H_0} = 2\tilde w\,\Psi'(0)$, as given in the text, as well as the higher derivatives. We can now calculate explicitly the derivatives $\Psi^{(q)}(0)$ from (61). This leads to the second derivative, while the third derivative is given by a similar expression. Expanding around $\tilde w = +\infty$ we recover the droplet values, while expanding around $\tilde w = 0$ we obtain, up to first order, the stationary ones. We see that all derivatives of $\Phi(H)$ have a finite limit as $\tilde w \to 0$, which coincides with the stationary IC.
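The closed form $e^{H_0} = 2\tilde w\,\Psi'(0) = \mathrm{Erfc}(\tilde w)\,e^{\tilde w^2}$ quoted in the Letter can be checked against the reconstructed integrand (same caveat as above on the reconstruction of Eq. (60)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def Psi_prime_at_zero(wtilde):
    # Psi'(0) = (1/pi) int_0^inf (1 + 1/(y+w^2)) sqrt(y) e^{-y} / (y+w^2) dy   (reconstructed)
    w2 = wtilde ** 2
    f = lambda y: (1.0 + 1.0 / (y + w2)) * np.sqrt(y) * np.exp(-y) / (y + w2)
    return quad(f, 0.0, np.inf, limit=200)[0] / np.pi

for w in (0.3, 1.0, 3.0):
    print(2 * w * Psi_prime_at_zero(w), erfc(w) * np.exp(w ** 2))   # the two columns agree
```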
Cumulants of the height
To compute the cumulants of the height $H$ at short times, we first define the cumulant generating function $G(p, t)$, where $P(H, t)$ is the height PDF. Substituting the short time form $P(H, t) \sim e^{-\Phi(H)/\sqrt{t}}$ and performing the integral by the saddle point method as $t \to 0$ gives (102). By definition, the logarithm of $G(p, t)$ generates the height cumulants as in (103). Hence, taking the logarithm on both sides of Eq. (102), using (103) and matching powers of $p$ gives, for all $q \ge 1$, the cumulants in terms of $\phi^{(q)}(0)$, the $q$-th derivative of $\phi(p)$ evaluated at $p = 0$. The optimization problem (102) can be solved exactly and yields the implicit equation (105). Expanding Eq. (105) into a series and using the explicit values of the derivatives of $\Phi$ at $H = 0$, one obtains $\phi^{(q)}(0)$ explicitly, and hence the first three non-trivial cumulants, for any $\tilde w$ and for small $\tilde w$.
ANALYTIC CONTINUATION OF Ψ
Let us obtain an analytic continuation of $\Psi$. We start with the form of $\Psi(z)$ obtained from (82) upon the change of variable $b = (y + \tilde w^2) e^y$, i.e. $y = W_0(b e^{\tilde w^2}) - \tilde w^2$. We now make use of the following expression, which makes sense in distribution theory.
where we used the complex expression of $\arctan$ in terms of the logarithm. Besides, we have the derivative of the jump. Note that we have introduced an absolute value in the logarithm of (122) since, as we will see, the argument can change sign on the interval that we will consider. This does not affect the value of the derivative.
From this, we define the analytic continuation of $\Psi(z)$ to a multivalued function. The upper boundary $e^{-1-\tilde w^2}$ comes from the definition of the real branch $W_0$ of the Lambert function. Note that at the branching point $z = -\tilde w^2$, both $\Psi$ and $\Psi + \Delta_0$ are only once right-differentiable, and since $\Delta_0(-\tilde w^2) = \Delta_0'(-\tilde w^2) = 0$ their first derivatives coincide; higher derivatives are ill defined there. With some additional work, it is possible to find two other real jumps that are the continuations of $\Delta_0(z)$. To this aim, we generalize (122) by defining two complex numbers $z_1$ and $z_2$ and studying the limit where both approach the real axis.
We can define a second real jump called ∆ −1 (z) on z ∈]0, e −1−w 2 ] using the second branch of the Lambert function W −1 , i.e replacing W 0 by W −1 in formula (122). The above remark about the independence of z 1 and z 2 leads us to define three possible continuations to ∆ 0 (z 1 ) + ∆ 0 (z 2 ).
We now wish to know if we can obtain all H ∈ R + with these partners. 6. BEHAVIOR OF G0, G −1/2 AND G−1 and can be written as G 0 (z) = z[Ψ (z) + ∆ 0 (z)] 2 + 2w[Ψ (z) + ∆ 0 (z)]. Using ∆ 0 (−w 2 ) = 0, we have the continuity relation G(−w 2 ) = G 0 (−w 2 ). Recalling the parametric relation e H = G 0 (z) and observing numerically that G 0 (z) is monotonically increasing with z, as z increases from −w 2 to e −1−w 2 , H increases from H c (w) = ln G(−w 2 ) = ln G 0 (−w 2 ) ≤ 0 to a second critical value H c2 (w) = ln G 0 (e −1−w 2 ). At this point In the stationary limitw = 0 we can write more explicitly as given in the text, where √ y e −1 +ye y . We do not have a closed form for this integral, but numerically, we find Ψ 0 (e −1 ) 1.27213, yielding the numerical estimate for H c2 (0) In terms of the units of reference [24], this would yield a critical heightH c2 = −2H c2 −3.7 as predicted for the phase transition : we likely found an explicit exact expression for the critical height.
Analyticity at the point $H_c$
We first discuss analyticity at $H_c(\tilde w)$, which corresponds to the point $z = -\tilde w^2$. Let us examine, in the vicinity and on both sides of $H_c(\tilde w)$, the pair of parametric equations consisting of (i) Eqs. (150)-(151) and (ii) $\Phi'(H) = -z\Psi'(z)$ for $H < H_c$, $\Phi'(H) = -z\Psi'_{\rm continued,0}(z)$ for $H > H_c$. As discussed above, since $\Delta_0(z = -\tilde w^2) = 0$, $\Phi'(H)$ is continuous at $H = H_c$, with value $\Phi'(H_c(\tilde w)) = \tilde w^2\,\Psi'(-\tilde w^2)$. We now show continuity of the second derivative. Taking a derivative of (i) allows us to express $dH/dz$ as a function of the triplet $z, \Psi'(z), \Psi''(z)$. Taking a derivative of (ii), and using this relation, one obtains $\Phi''(H)$ as a function of $z, \Psi'(z), \Psi''(z)$. As $z \to -\tilde w^2$, $\Psi''(z)$ diverges, and the expression for $\Phi''(H)$ has a finite limit depending only on $z$ and $\Psi'(z)$; since the expression is the same upon replacing $\Psi \to \Psi_{\rm continued,0}$, the second derivative $\Phi''(H_c)$ is continuous at $H_c$, and can be expressed from the first one. In principle this can be pushed to higher derivatives, to show continuity of all of them, by expanding the implicit relation (89) up to any order.
Let us now examine the stationary case $\tilde w = 0^+$, for which $H_c = 0$. Let us recall the result (62). We can now insert this expansion into the pair of parametric equations. Elimination of $z$ leads to an expansion of $\Phi(H)$ in powers of $H$ around $H = 0$ on both sides. One can check, order by order, that inserting the values of the odd derivatives $\psi^{(2q-1)}(0)$ given in (136) yields identical Taylor series on both sides. This shows that $\Phi(H)$ is analytic at $H = 0$. Inserting the values of the even derivatives $\psi^{(2q)}(0)$ given in (136) then allows us to recover the results of (98) for $\tilde w = 0^+$, showing that the calculation at $w = 0$ matches the one at $w = 0^+$.
Non-analyticity of $\Phi(H)$ at $H = H_{c2}(\tilde w)$
Starting from the implicit representation for $\Phi(H)$, we observe that the regularity of $\Phi(H)$ depends strongly on the regularity of $\Psi(z)$. To have continuations of $\Phi(H)$ that are analytic, we at least require $\Psi(z)$ to have the same regularity as its continuations at the branching points.
Recalling the system of parametric equations (81): since the continuations agree at $z = e^{-1-\tilde w^2}$, we are ensured that $\Phi(H)$ and $\Phi'(H)$ are continuous at $H = H_{c2}(\tilde w)$ whichever branch we choose.
Summary
To summarize: for a given $H \in [-\infty, \infty]$, the optimum $z$ is determined from the equation $e^H = G(z)$, where the function $G(z)$ is given by $G(z) = z\,\Psi'(z)^2 + 2\tilde w\,\Psi'(z)$. For $H > H_{c2}(\tilde w)$, there exist two solutions, $G_{-1/2}(z)$ and $G_{-1}(z)$, obtained from the corresponding replacements of $\Psi$. The function $\ln G(z)$ vs $z$ is plotted in Fig. 3, with the four elements $\ln G(z)$ (shown by the solid blue line), $\ln G_0(z)$ (shown by the dashed red line), $\ln G_{-1/2}(z)$ (shown by the dot-dashed brown line) and $\ln G_{-1}(z)$ (shown by the dot-dashed green line). Note that the branching $G_0(z) \to G_{-1/2}(z)$ is continuous but not differentiable. To obtain the continuations of the rate function $\Phi(H)$ for all $H$, we use the first equation of (89), replacing $\Psi$ by $\Psi + \Delta_{-1}$ and by $\Psi + \Delta_0$, $\Psi + \frac{1}{2}(\Delta_0 + \Delta_{-1})$, respectively. Using the above definitions of $\Delta_0$ and $\Delta_{-1}$, we find that the rate function $\Phi(H)$ is determined from the parametric equations (150)-(153). On the interval $z \in\, ]0, e^{-1-\tilde w^2}]$, $\Phi(H)$ finally has two extensions, the first one being analytic and the second one being non-analytic, where $z$ should be replaced by the corresponding solution $z(H)$ from (150)-(151). Note that the arguments of the logarithms are actually positive in each interval considered, hence the absolute value could be removed. In the limit $\tilde w = 0$, this system can be simplified by setting all $\tilde w$'s in (153) to 0. The logarithmic factors smoothly vanish, as confirmed by numerics, and in that way we obtain the solution $\Phi(H)$ for $\tilde w = 0$, which is the stationary case.
We represent in Fig. 4 the function $\Phi'(H)$ vs $H$ at $\tilde w = 0$ for the exact solution and all the extensions discussed above. One easily identifies the non-analyticity at the point $H = H_{c2}(0)$, where $\Phi'(H)$ is continuous but not differentiable.
It is also possible to obtain a variational representation of $\Phi(H)$ in the stationary case, at $\tilde w = 0$, for all branches.

7.4 Comparison with the data of Janas, Kamenev and Meerson [24]

We compare in this section our exact expression for the rate function with the numerical estimates obtained by Janas, Kamenev and Meerson in [24]. The authors kindly provided us their numerical data, enabling us to overlap our results with theirs in Fig. 5.
In our system of units, see [25], the comparison is possible on the range $H \in [0, 4]$, which comprises all continuations of $\Phi(H)$. The data were provided for both the symmetric and asymmetric WNT solutions, allowing us to test the hypothesis that our analytic and non-analytic branches match these solutions.
The interpretation of Fig. 5 is that our analytic branch matches point to point the symmetric WNT solution, and that our non-analytic branch also matches point to point the asymmetric WNT solution on the interval considered, $H \in [0, 4]$. Further numerics would be required to allow a comparison outside $H \in [0, 4]$, but given the overlap of our exact result with the numerical estimates, we are confident in saying that the branching point $H_{c2}$ is the critical field where a phase transition was observed in [24].

Here $\beta \ge 1$ and $c_-$ are positive reals, and we use the fact that $\Psi$ has a logarithmic asymptotic for large positive arguments.

We start by re-introducing the deformed Airy function with the proper scaling of our problem.
For large $w$ the ratio of Gamma functions converges towards a power law. Inserting this asymptotics into the integrand of (167), we recognize the Airy function with argument $(\tilde a + \ln(w^2))\,t^{1/3}$.
The limit (169) also points out a misprint in Ref. [32], Eq. (2.15), where the shift $\ln(w^2)$ is missing in the asymptotics of the deformed Airy function.
The deformed Airy kernel and exact Fredholm representation
As the kernel of the Fredholm determinant related to the droplet IC is the Airy kernel, the convergence of the deformed Airy function to the Airy function also gives the convergence of the kernels. Therefore, up to the shift ln(w 2 ) that we can incorporate in the definition of H, we are able to obtain the droplet IC in the limit of large w.
Starting from the exact Fredholm representation at the point $x = 0$ of the generating function of $e^{\tilde H}$, in terms of the kernel $\bar K_{t,s}(v, v') = K_{Ai,\Gamma}(v, v')\,\sigma_{t,s}(v')$ (see Section 0.), and using (169), the asymptotics of the kernel for large $w$ follows, where $K_{Ai}$ is the Airy kernel entering the Fredholm determinant that gives the generating function of the droplet IC.
Noting that $\sigma_{t,s}(v') = \sigma_{t,\,s + \ln(w^2) t^{-1/3}}(v' + \ln(w^2) t^{-1/3})$, this yields, coming back to [22] and defining $Q^{\rm drop}$ accordingly, the droplet generating function. The moment generating function $\langle \exp(-e^{\tilde H - s t^{1/3}}) \rangle$ tells us that shifting $s$ is equivalent to shifting $\tilde H$, which is itself equivalent to shifting $H$. Furthermore, by a saddle point analysis, from the PDF of $\chi$ or from (78), one sees that for large $w$, $\chi$ is almost surely a deterministic variable, $\chi = -\ln(2w)$.
Defining $H_{\rm droplet} = H + \ln(w^2) + \ln\sqrt{4\pi t}$, we fully recover the result of [22], i.e. the droplet IC. This is perfectly consistent with the exact property that the solution $h(x, t)$ of the KPZ equation with the initial condition (3) converges to the droplet solution in an appropriate sense, and with the difference of definitions between $H$ here and $H_{\rm drop}$ in [22] by a term $\frac{1}{2}\ln(4\pi t)$. Note that all the above considerations are valid for arbitrary time $t > 0$.

9.3 Convergence of the large deviation function $\Psi(z)$ to its droplet limit

As claimed in the text, in the limit $\tilde w = +\infty$ it is also possible to find the short time estimate of the Fredholm determinant of the droplet IC by noticing the corresponding limit from (16). The analytic partner of $\Psi$ was obtained by adding the jump (122), following the change of Riemann sheet, to the function $\Psi$.
For negative $z$, in the limit of large $\tilde w$, using the logarithmic asymptotics of $W_0$ for large positive argument, we find that $\Delta_0(z) = \frac{4}{3}\left[-\ln(-z)\right]^{3/2}$, which is the analytic continuation used for the droplet IC in [22].
SHORT TIME EXPANSION OF THE STOCHASTIC HEAT EQUATION
Here we sketch the calculation of the cumulants of $Z = e^H$ at short time, which provides a useful test of our method; we provide more details at the end of the section. The KPZ equation (1) in our units (32) is equivalent to the stochastic heat equation (SHE), $\partial_t Z = \partial_x^2 Z + \sqrt{2}\,\xi\, Z$, with $\overline{\xi(x, t)\xi(x', t')} = \delta(x - x')\delta(t - t')$ and the initial condition $Z(x, t = 0) = e^{B_w(x)}$, where $B_w(x) := B(x) - w|x|$ and $B(x)$ is a two-sided unit Brownian motion. | 2017-05-12T16:46:43.000Z | 2017-05-12T00:00:00.000 | {
"year": 2017,
"sha1": "93032d05be482a1b22fa239cb68b3518002c6c5b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1705.04654",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6eaa4f50ff358c17efe2d6cb1da5403754c266cf",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics",
"Medicine"
]
} |
261050377 | pes2o/s2orc | v3-fos-license | Reward contingency modulates olfactory bulb output via pathway-dependent peri-somatic inhibition
Associating values to environmental cues is a critical aspect of learning from experiences, allowing animals to predict and maximise future rewards. Value-related signals in the brain were once considered a property of higher sensory regions, but its wide distribution across many brain regions is increasingly recognised. Here, we investigate how reward-related signals begin to be incorporated, mechanistically, at the earliest stage of olfactory processing, namely, in the olfactory bulb. In head-fixed mice performing Go/No-Go discrimination of closely related olfactory mixtures, rewarded odours evoke widespread inhibition in one class of output neurons, that is, in mitral cells but not tufted cells. The temporal characteristics of this reward-related inhibition suggest it is odour-driven, but it is also context-dependent since it is absent during pseudo-conditioning and pharmacological silencing of the piriform cortex. Further, the reward-related modulation is present in the somata but not in the apical dendritic tuft of mitral cells, suggesting an involvement of circuit component located deep in the olfactory bulb. Depth-resolved imaging from granule cell dendritic gemmules suggests that granule cells that target mitral cells receive a reward-related extrinsic drive. Our results support the notion that value-related modulation appears at the early stages of sensory processing and provide constraints on long-range and local circuit mechanisms.
reward-related modulation is present in the somata but not in the apical dendritic tuft of mitral cells, suggesting an involvement of circuit components located deep in the olfactory bulb. Depth-resolved imaging from granule cell dendritic gemmules suggests that granule cells that target mitral cells receive a reward-related extrinsic drive. Our results support the notion that value-related modulation appears at the early stages of sensory processing and provide constraints on long-range and local circuit mechanisms.

Introduction

Sensory systems of the brain play a crucial role in guiding animals' choices. One of their uses is in reward-driven decision-making, where the system is thought to adjust the representations of sensory cues depending on past reward encounters, to influence future behavioural choices and learning. Decades of studies across brain areas have demonstrated that reward expectations are, in turn, potent modulators of sensory activity. For example, stimulus evoked responses in many sensory regions of the brain scale with the quantity of expected reward 1-5. This modulation is often interpreted as representing subjective value 6,7. Such a system, where sensory processing is fine-tuned flexibly, may be crucial for maximising returns in a dynamic and uncertain world 7.

Decision and value-related modulations of sensory responses are featured prominently in higher sensory areas 1,8. However, recent studies indicate that even early stages of sensory processing, especially in rodents, participate in value-like representations, showing ample modulations associated with decision-making 9-11. The olfactory system is an extreme case in this regard, where apparent reward-related modulation is readily observed as peripherally as in the olfactory bulb 12,13, the primary olfactory region situated just one synapse away from the site of sensory transduction. This peripheral location, along with the saliency of olfactory cues for rodents, makes the olfactory bulb an attractive structure to study the mechanisms that generate value-like signals in the brain 12.

The nature of this apparent reward-related modulation in the olfactory bulb remains unresolved. For example, one study observed that evoked responses to rewarded vs. unrewarded odours in the principal neurons of the olfactory bulb diverge only transiently as rats learn to discriminate between the cues 12. This may reflect the level of the animal's engagement 14, where the learning-related modulation corresponds mainly to changes in the inputs from the sensory periphery arising from sniff pattern changes. Rodents indeed adjust their odour sampling patterns exquisitely according to the behavioural context 15,16. However, given that the olfactory bulb is a major target of feedback and neuromodulatory projections from many brain regions, value-related information could affect how the olfactory bulb represents odours. For example, electrical and optogenetic stimulations and pharmacological manipulations of neuromodulatory and feedback inputs to the olfactory bulb change the gain of odour responses in the principal neurons of this region 17-22. Therefore, whether the reward-related signals in the primary olfactory area simply reflect changes in the sensory input or internally generated contextual signals needs to be clarified.
Here, we show that the olfactory bulb exhibits robust and consistent reward-related signals during a trace olfactory conditioning paradigm, where mice discriminate between closely related olfactory mixtures. This phenomenon is characterised by widespread inhibitory responses following the rewarded odour presentation, in mitral cells but not tufted cells. This divergence is not explained by the odour identity or sampling strategy and reflects the congruence of sensory drive and contextual signals. By imaging from specific subcellular compartments of mitral cells, we demonstrate that the divergent responses first become evident peri-somatically. Depth-resolved imaging from the dendrites of adult-born granule cells suggests that the cell-type specific modulation may involve an extrinsic drive to putative mitral cell-targeting granule cells.

Results

The olfactory bulb integrates feedforward sensory stimuli as well as long-range projections from other brain areas (Fig. 1A). The latter input is thought to convey behavioural contextual signals to the olfactory bulb and tune activity patterns flexibly. To study how the behavioural context modulates the olfactory bulb output in olfactory decision making, we trained head-fixed mice to perform an olfactory discrimination task (Fig. 1). Water restricted mice were trained to associate a rewarded odour (S+ odour) with a water reward, and an unrewarded odour (S- odour) with no water delivery (Fig. 1B). Note that this paradigm includes a trace period, as we reasoned that an early cessation of the feedforward signal may maximise the chance of observing context-related activity patterns. The mice were first trained to discriminate between easily distinguishable odour mixtures, which comprised ethyl butyrate (EB) and methyl butyrate (MB), mixed at an 80%/20% ratio versus a 20%/80% ratio for the S+ vs. S- stimuli, respectively (Fig. 1C). When the mice reached a criterion of 80% accuracy (3 ± 0.9 days, n = 6 mice, Fig. 1D), they were trained to discriminate between more similar odour mixtures ("difficult discrimination task"; a 60%/40% mixture of ethyl butyrate and methyl butyrate versus a 40%/60% mixture). This is a task known to engage many components of the olfactory bulb circuitry 23. Well-trained mice discriminated between these similar mixtures in 1.63 ± 0.53 s (Supplementary Fig. 1), with comparable sniffing patterns for the S+ vs. S- odours (Supplementary Fig. 2), consistent with previous reports where similar odours and reward timing were used 24,25.

In the mice proficiently performing the difficult olfactory discrimination task, we studied the responses of the olfactory bulb output to the rewarded vs. unrewarded odours. The calcium indicator GCaMP6f was expressed in mitral and tufted cells using Tbx21-Cre mice crossed with Ai95D mice 26,27, and was imaged using a two-photon microscope (n = 428 ROIs in 6 mice, and n = 150 ROIs in 3 mice, respectively; Fig. 1E-G). Mitral and tufted cells were distinguished by depth (Fig. 1E). Tufted cells responded largely similarly to both odours (mean ΔF/F during odour = 0.628 ± 0.135 and 0.655 ± 0.148 for S+ and S-, respectively; p = 0.777, Wilcoxon rank-sum test; mean ΔF/F post odour = 0.203 ± 0.264 and 0.237 ± 0.264 for S+ and S-, respectively; p = 0.149, Wilcoxon rank-sum test; Fig. 1F).
Peculiarly, responses of the mitral cell somata to the rewarded odour were characterised by widespread inhibitory responses (mean ΔF/F S+ = -0.048 ± 0.058; S- = -0.022 ± 0.054; p < 0.001, Wilcoxon rank-sum test; Fig. 1G). This dominance of inhibition for the S+ odour was present soon after the odour onset, but was particularly pronounced during the post-odour period (mean ΔF/F S+ = -0.048 ± 0.095; S- = 0.034 ± 0.102; p < 0.001, Wilcoxon rank-sum test; Fig. 1G).
The late onset of the reward-associated inhibition in mitral cells raises the question of the underlying drive: is the inhibitory component locked to the anticipatory motor output, or to the odour? To analyse this, we divided the rewarded trials into two sets based on the animals' reaction times ("early onset" vs. "late onset"), and reverse-correlated the GCaMP6f signals to the onsets of anticipatory signals ("lick-aligned average"; Fig. 2). If the peak of inhibition in the averages occurred at the same time for the early lick sets and late lick sets, it would imply the inhibition is locked more to the behavioural output (Fig. 2B). This analysis revealed, in contrast, that the time of peak inhibition is shifted depending on the reaction time (Pearson's correlation coefficient = -0.555, p = 0.026; N = 15 fields of view, 6 mice; Fig. 2C-E), indicating that the inhibition is locked to the odour.

The prevalence of inhibitory responses in mitral cells following the rewarded odour presentation is striking, but this level of inhibitory dominance has not been reported previously, even though several studies have already examined how mitral cells respond to odours during difficult odour discrimination paradigms 28-30. The difference here may be the short duration of the odour pulse used, followed by a two-second-long trace period. It is possible that, with a longer odour presentation, the feed-forward component may dominate over any modulatory influences in the olfactory bulb (Fig. 3A). To test this possibility, in well-trained mice, we presented the odours for a longer period (4 s), making the task a delay task (Fig. 3B). In this condition, mitral cells responded to the rewarded and unrewarded odours similarly (Fig. 3C-E). Notably, both responses were characterised by a widespread inhibitory component (% of ROIs showing significant inhibition = 18.1 for S+ and 11.9 for S-, and 35.2 for S+ and 23.9 for S-, in early and late time windows, respectively; n = 5 mice). The divergent responses may therefore originate from a conjunction of olfactory and contextual signals.

To test if the behavioural state of the animal is crucial for the response divergence in mitral cells, we used two pseudo-conditioning paradigms using the same odours (Fig. 4A,B). In the first case ("Disengagement"), the water was delivered every trial, approximately 15 seconds before the odour presentation (Fig. 4B). In the second case ("Random association"), we delivered the water on randomly selected trials, so that both 60/40 and 40/60 odour mixtures were followed by water 50% of the time (Fig. 4B). These two paradigms decouple the odour-reward association, while inducing different levels of engagement in the head-fixed mice 31. Imaging sessions took place after the mice, previously trained on the fine discrimination task, were switched to, and had experienced at least one session of, the new paradigm (Fig. 4C). In both control paradigms, the head-fixed mice showed no preferential licking for the 60/40 mixture (average anticipatory licks for the disengagement paradigm = 1.5 ± 1.6 and 0.7 ± 0.8 on 60/40 and 40/60, respectively; p = 0.999, 1-way ANOVA with post-hoc multiple comparisons; average anticipatory licks for the random association paradigm = 6.9 ± 6.1 and 5.5 ± 4.3 on 60/40 and 40/60, respectively; p = 0.890, 1-way ANOVA with post-hoc multiple comparisons; Fig. 4D).
Importantly, the disengagement and randomised paradigms differed in the general levels of anticipatory licks (average anticipatory licks for all trials = 1.1 ± 1.3 and 6.2 ± 5.2 for the disengagement and random association paradigms, respectively; p = 0.0021, 1-way ANOVA with post-hoc multiple comparisons), indicating that different levels of behavioural engagement were indeed achieved by these paradigms. In both cases, the mitral cell somata responded similarly to the two odour mixtures (mean ΔF/F for disengagement = -0.02 ± 0.05 and -0.01 ± 0.06 during odour for 60/40 and 40/60, respectively; p = 0.465; for post odour = 0.05 ± 0.09 and 0.05 ± 0.09; p = 0.553; mean ΔF/F for random association = -0.03 ± 0.09 and -0.04 ± 0.10 during odour; p = 0.259; post odour = -0.01 ± 0.14 and 7.7 x 10^-5 ± 0.14; p = 0.617, Wilcoxon rank-sum test; Fig. 4E-G). Note that the inhibition during the post-odour, anticipatory period that is normally present in discriminating mice was generally reduced in the two control paradigms. Together, these results indicate that the observed divergent responses in mitral cell somata are state-dependent, and not explained by the odour identities.

What is the cellular origin of the widespread inhibition associated with the rewarded odour? Previous studies showed that a variety of feedback and neuromodulatory projections to the olfactory bulb modulate the physiology of olfactory bulb neurons 18-20,32-34. Further, several studies showed that such modulations manifest differently for mitral cells and tufted cells 17,18,29,35. Recent work indicates that mitral cells receive more potent feedback modulation from the piriform cortex 17,18. Thus, even though it is beyond the scope of the current work to systematically investigate all sources, the piriform cortex is a reasonable candidate for the source of the contextual signal resulting in the mitral cell-specific, reward-related inhibition we observe.

To test the involvement of the piriform cortex, we pharmacologically inactivated the ipsilateral anterior piriform cortex while the head-fixed mice performed the difficult olfactory discrimination task (Fig. 5A). This was achieved by infusing the GABAA agonist, muscimol, through an implanted cannula. Muscimol and control sessions were carried out on alternate days, but the same fields of view were sampled for the two conditions, so that the responses of the same ROIs could be compared directly. The infusion of muscimol disrupted behavioural performance significantly (behavioural accuracy = 64.0 ± 14.5% during muscimol sessions; 92.8 ± 7.8% during control sessions; p = 0.004, Wilcoxon rank-sum test; n = 6 control sessions and 6 muscimol sessions, 3 mice; Fig. 5F). When responses of mitral cells were imaged in this condition, the divergence in the rewarded vs. unrewarded odour responses was significantly reduced (mean ΔF/F during odour = -0.007 ± 0.073 and -0.023 ± 0.091 for S+ and S-, respectively; p = 0.174, Wilcoxon rank-sum test; mean ΔF/F post-odour = 0.067 ± 0.124 and 0.079 ± 0.171 for S+ and S-, respectively; p = 0.252, Wilcoxon rank-sum test; Fig. 5G-I). This was characterised by a reduction in the inhibitory responses evoked by the rewarded stimulus (normalised S+ - S- difference = -0.007 ± 0.156 and 0.043 ± 0.142 in control and muscimol sessions, respectively; p = 0.008, Wilcoxon rank-sum test; Fig. 5J), and during the post-odour phase (normalised S+ - S- difference = -0.263 ± 0.175 and -0.040 ± 0.168 in control and muscimol sessions, respectively; p = 3.53 x 10^-18, Wilcoxon rank-sum test; Fig. 5J). Together, these data indicate that an intact piriform cortex, and/or accurate behavioural performance, is required to observe the widespread inhibitory responses associated with the rewarded odour.

The results so far indicate that the widespread inhibitory responses associated with the rewarded odour come from sources extrinsic to the olfactory bulb. One of the major targets of such long-range projections within the olfactory bulb is the granule cells. These cells contact mitral cells on their lateral dendrites at a deeper portion of the external plexiform layer. If the granule cells convey the contextual signals to mitral cells, the divergent responses may be observable perisomatically, but not in the superficial compartment (Fig. 6A). To test this, we compared the GCaMP6f signals from the apical dendrites of mitral cells in the glomeruli vs. signals from the somata, which reflect signals derived from all subcellular compartments. Since tufted cells and mitral cells both send their apical dendrites to the glomeruli, to study signals from mitral cells in isolation, we used Lbhd2-CreERT2::Ai95D mice, where GCaMP6f is expressed predominantly in mitral cells 24 (Fig. 6A,B).
Imaging from the superficial plane, the apical dendrites showed no significant differences between responses to the S+ and S- odours (mean ΔF/F during odour = 0.307 ± 0.439 and 0.328 ± 0.466 for S+ and S-, respectively; p = 0.687; post odour = 0.338 ± 0.512 and 0.382 ± 0.549 for S+ and S-, respectively; p = 0.423, Wilcoxon rank-sum test; Fig. 6C). As before, signals from the mitral cell somata imaged in the Lbhd2-CreERT2::Ai95D mice were characterised by the widespread inhibitory component (mean ΔF/F during odour = -0.058 ± 0.077 and -0.034 ± 0.061 for S+ and S-, respectively; p = 1.86 x 10^-4; post-odour = -0.024 ± 0.130 and 0.038 ± 0.131 for S+ and S-, respectively; p = 1.08 x 10^-5, Wilcoxon rank-sum test; Fig. 6D). Together, these data suggest that the reward-related inhibition in mitral cells originates perisomatically.

If the inhibition in response to the rewarded cue in mitral cells is mediated via granule cells, we should observe a greater GCaMP6f signal change to S+ odours specifically in the granule cells that target mitral cells (Fig. 7A). The granule cells whose dendrites ramify in the deeper portion of the external plexiform layer are thought to synapse with mitral cells 36-38, where mitral cell lateral dendrites are found. These mitral cell-targeting granule cells are, however, intermixed with tufted cell-targeting granule cells.

To distinguish the putative mitral cell-targeting granule cells from those that target tufted cells, the depth within the external plexiform layer needs to be determined accurately in vivo. Towards this end, we crossed Lbhd2-CreERT2 mice with Ai14 mice to express tdTomato preferentially in mitral cells. We reasoned that despite tissue curvature or non-uniform thickness of the external plexiform layer, this method would allow us to accurately separate the deeper from the superficial portions based on the density and distribution of the tdTomato expression. Indeed, the deep portion of the external plexiform layer showed a higher density of thin red fluorescent processes (Fig. 7B,C), while at more superficial depths, we observed occasional fluorescence from thick processes, likely corresponding to the primary dendrites of mitral cells.

To study if the divergent odour responses in mitral cells can be explained by the evoked activity of putative mitral cell-targeting granule cells, we turned to adult-born granule cells that develop their dendrites in the deep external plexiform layer (Fig. 7D,E). Adult-born granule cells are thought to be critical for refining odour responses in mitral and tufted cells when mice need to discriminate between similar odours 39-43. Further, since the mature adult-born granule cells form dendro-dendritic synapses with mitral cell lateral dendrites, where GABA release can occur locally 44, we sought to image directly from dendritic gemmules. Due to their small size, we were cautious to exclude images from sessions that showed motion artefacts; these were identified by correlating the structural fluorescence pattern to the baseline period, and trials that showed low correlation were discarded (Supplementary Fig. 3). As a result, 70% (1343/1917 trials) of the acquired data was discarded. Deep dendritic gemmules showed more frequent inhibition compared to superficially located dendritic gemmules (Fig. 7F, Supplementary Fig. 4).
This may reflect the reduced excitatory drive in the deep granule cells, due to the prevalent reward-related inhibition in mitral cells. Thus, we analysed the evoked amplitude distribution in the deep vs. superficial dendritic gemmules relative to that of their presynaptic counterparts, that is, against the distribution of evoked responses in mitral cells and tufted cells, respectively. The S+ vs. S- tuning showed a close overlap between tufted cells and superficial gemmules of adult-born granule cells. On the other hand, the S+ vs. S- tuning distribution of deep gemmules could not be explained by the mitral cell tuning distribution (Fig. 7G). There was a tendency for these gemmules to respond more positively to the rewarded odours than would be predicted from mitral cell activity alone. In other words, our data suggest that mitral cell-targeting granule cells may receive an additional excitatory drive associated with the rewarded odour.

Discussion

In this study, we observed a cell-type specific reward-associated inhibition in the primary olfactory area of the mouse. This inhibition is cell-type specific and subcellularly specific: first, it manifests in the somata of mitral cells but not in tufted cells, and second, it appears in the somata but not in their apical dendritic tuft in the input layer. This subcellular specificity suggests that the generation of this phenomenon involves circuit components at a deeper layer of the olfactory bulb. Further, the results of pseudo-conditioning and pharmacological manipulations suggest that the mitral cell-specific, reward-related inhibition arises from an acquired congruence of sensory and contextual signals. Thus, our study supports the notion that value-related modulation of olfactory signals is a characteristic of olfactory processing in the primary olfactory area, and provides constraints on possible underlying mechanisms.

Many questions remain regarding the origin of the reward-related signals to the olfactory bulb. Many brain regions send long-range projections to the olfactory bulb and are, therefore, candidate drivers of the reward-related inhibition we observed. One important source of reward-related signals to the OB is the direct or indirect feedback projections from olfactory cortices. Value-like modulation of olfactory responses occurs in many parts of the brain: it has been observed in the prefrontal cortex 4,45, orbitofrontal cortex 4,45, hippocampus 46, olfactory tubercle 47-50, piriform cortex 4,51, and anterior olfactory nucleus 4, although there may be regional differences, for example, in the long-term stability of expression 45. Of particular interest is the piriform cortex, which serves as a gateway for processed signals for modulation of the mitral cells 17. While only a subset of pyramidal neurons from the piriform cortex project to the olfactory bulb 52, a recent imaging study from olfactory bulb-projecting fibres showed value-like activity when the task depended on olfactory cues 53. It is unclear why we did not observe the widespread reward-related modulation in tufted cells, even though some value-like activity is present in the anterior olfactory nucleus 4, a region known to have modulatory influence over tufted cells 17.
Since the anterior olfactory nucleus has multiple compartments 54, each with distinct long-range connectivity 55, it will be crucial for future studies to resolve how these subregions contribute.

After it cooled down to room temperature, the solution was injected intraperitoneally (~100 μl).
Mice that were treated with the tamoxifen diet (2 mg/kg) were exposed to this for 2-4 days, based on their initial weight, after which they were switched back to normal food (threshold to switch to normal food: 80% of initial body weight). Tamoxifen intake was calculated based on the amount of diet food provided and the amount of diet food left after switching back to normal food.

Surgery
All recovery surgeries were conducted in an aseptic manner. For the cranial window and headplate implantations, 9-11 week-old male mice were deeply anaesthetised with isoflurane (3-5% for induction, 1-2% for maintenance; IsoFlo, Zoetis Japan).

Long odour discrimination
The mice proficient at the difficult discrimination task were trained to discriminate between the same odour mixtures but with 4 seconds of odour duration. The response window and reward timing on S+ trials were the same between the two paradigms.

Disengagement paradigm
In this paradigm, the same odour mixtures as in the difficult discrimination were used, but the water reward was delivered every trial, approximately 15 seconds before the odour onset. The time window used for measuring the anticipatory licks was identical to that of the fine discrimination paradigm. The first session was considered a transition session and excluded from imaging analysis.

Random association paradigm
In this paradigm, the water reward was presented in 50% of the trials, regardless of the odour identity. This decoupled the odour identity and reward, but kept mice engaged, as indicated by the anticipatory licks.

followed by PFA solution (4% dissolved in phosphate buffer). For mice that were implanted with a cannula, 500 nL of DiI (Invitrogen, V22885) was injected prior to the perfusion to mark the cannula tip location. Coronal sections of 100 µm thickness were cut on a vibratome (5100mz-Plus, Campden Instruments, Leicestershire, UK) and counterstained using DAPI (D9542, Sigma-Aldrich). Images were acquired using a Leica SP8 confocal microscope with a ×10 (NA 0.40 Plan-Apochromat, 506407, Leica) objective.

Immunohistochemistry
Free-floating olfactory bulb sections from above were first blocked in blocking solution (0.025 M Tris-HCl, 0.5 M NaCl, 0.2% Triton X-100, 7.5% normal goat serum, 2.5% BSA, pH = 7.5) for 60 min at room temperature. Slices were subsequently stained with chicken anti-GFP (Abcam, ab13970; 1:500 in blocking solution) at 4°C overnight. Slices were washed three times in TBS (0.025 M Tris-HCl, 0.5 M NaCl, pH = 7.5) and incubated in goat anti-chicken Alexa-488 (Abcam, ab150169; 1:1000 in TBS supplemented with 0.2% Triton X-100) for 2 hours at room temperature. All slices were counterstained with DAPI (D9542, Sigma-Aldrich). Images were acquired using a Leica SP8 confocal microscope with a 10X (NA 0.40 Plan-Apochromat) or a 40X (NA 1.10 Plan-Apochromat, 506357, Leica) objective.

In vivo calcium imaging
All the calcium data presented in this manuscript were obtained from awake mice. Two-photon fluorescence of GCaMP6f and tdTomato was measured simultaneously with a custom-made microscope (INSS, UK) fitted with a 25x objective (Nikon N25X-APO-MP1300, 1.1 NA) or a 16x objective (Nikon N16XLWD-PF, 0.8 NA), and a high-power laser (930 nm; Insight DeepSee, MaiTai HP, Spectra-Physics, USA), at depths 50-400 μm below the surface of the olfactory bulb.
Images from a single plane were obtained at ~30 Hz with a resonant scanner. In each trial, 400 image frames were acquired, with 100 frames before the odour stimulus to obtain a baseline. Each day, the stage coordinates were chosen relative to a reference location, which was determined by the surface blood vessel pattern. Fields of view were 512 μm x 512 μm for apical dendrites, 256 μm x 256 μm for tufted and mitral cell somata, and 128 μm x 128 μm for adult-born granule cell gemmules. Calcium data during difficult discrimination and disengaged experiments were obtained from Tbx21-Cre::Ai95D mice (Figs. 1 and 4). 6 male mice were used for somata imaging. All 6 were used to image mitral cell somata, while a subset (3 mice) were used to image tufted cell somata. To obtain calcium data from different subcellular compartments of mitral cells, we used Lbhd2-CreERT2::Ai95D mice (Fig. 6). For long odour discrimination, random association, and muscimol infusion experiments, calcium data were obtained from both Tbx21-Cre::Ai95D and Lbhd2-CreERT2::Ai95D mice (Figs. 3-5). Finally, Lbhd2-CreERT2::Ai14 mice were used to record red (tdTomato, mitral cells) and green (calcium indicator, gemmules) fluorescent signals during adult-born granule cell imaging experiments (Fig. 7).

Data analysis
All data were analyzed offline using custom MATLAB (MathWorks, USA) routines. To calculate the behavioural accuracy, the number of licks during a 3-second window from final valve opening until reward presentation was counted for each trial (anticipatory licks). A correct response to the rewarded odour was a minimum of 2 anticipatory licks, and a correct response to the unrewarded odour was fewer than 2 anticipatory licks. Behavioural accuracy was calculated as the percentage of correct trials out of the total number of trials.

To calculate the sniffing frequency and speed of inhalation, the sniffing signal was first filtered (1 Hz high-pass and 30 Hz low-pass) and normalised (z-score). Inhalation peaks were detected using the findpeaks MATLAB function. Sniff onsets were determined by searching back in time from each detected inhalation peak to the point where the signal crossed a threshold value. The detected onsets and peaks were then used to calculate the frequency (as 1/inter-onset time) and the speed of inhalation (as onset-to-peak time).

Image analysis
For each field of view, the imaging data were manually curated based on motion artefacts and drift over time. Data with motion artefacts and/or drift were motion corrected using the NoRMCorre toolbox (Pnevmatikakis and Giovannucci, 2017) and, when unsuccessfully corrected, excluded from analysis. Regions of interest (ROIs) were manually drawn using ImageJ (NIH, Bethesda, USA) based on the average field of view from each imaging session and exported for use in MATLAB. The average pixel value from each ROI was offset with a value from the darkest region in the frame (e.g. a blood vessel). To account for bleaching over the course of the imaging session, the mean pixel values for all trials were concatenated and detrended using the MATLAB function detrend, then reshaped back into an array (individual trials x frames) before the relative fluorescence change was obtained.
For each ROI, the change in fluorescence (ΔF/F) was calculated by subtracting the mean pixel value from the baseline period (1 second before odour stimulus onset), and dividing by the baseline value. Odour-evoked responses were calculated as the mean fluorescence change during the odour stimulus presentation, and post-odour evoked responses as the mean fluorescence change between odour stimulus offset and reward presentation. For the 'long odour' and 'random association' experiments, the time windows used to calculate the evoked responses were based on the fine odour discrimination experiments.

Lick-aligned average
Rewarded trials were analysed for each imaging session. Onsets of anticipatory licks were defined as the average time of the first two licks observed after the start of odour presentation. Rewarded trials were grouped into early vs. late lick trials if the anticipatory lick onsets occurred before or after the median onset time, respectively. Within each group, calcium transients were aligned to the anticipatory lick onset time for each rewarded trial, and averaged.

Quality assessment of abGC imaging
The small size of abGC gemmules makes them susceptible to motion artefacts in behaving animals. To objectively assess the quality of each imaged trial, the tdTomato signals from the MC dendrites in Lbhd2-CreERT2::Ai14 mice were analysed. Rolling averages of 5 frames (step size: 1 frame) were made and compared against the average of 50 frames obtained during the baseline period to compute the correlation coefficient. If the mean correlation value for the period analysed (between the odour offset and the onset of water reward) was below 95% of the mean value during the baseline period, the trial was rejected. A separate quality check was conducted for the odour and post-odour phases. Further, trials where the baseline correlations deviated significantly were considered outliers and rejected. This was assessed using the MATLAB function isoutlier. This quality control method resulted in 16 accepted fields of view from 5 mice for the odour period, and 13 fields of view from 4 mice for the post-odour period. On average, a given accepted field of view yielded 3.2 ± 1.4 ROIs (3.8 ± 1.5 ROIs for deep fields of view, and 2.9 ± 1.4 ROIs for superficial fields of view) for the magnification used (128 µm x 128 µm frame size).

External plexiform layer depth determination based on red fluorescence
Depth within the external plexiform layer was estimated using the red fluorescence signal from mitral cell dendrites in Lbhd2-CreERT2::Ai14 mice, which is dense in the deeper portion. A z-stack ranging from the superficial layer to the mitral cell layer, spanning 250 µm (100 frames averaged every 4 µm) and obtained from the same x-y location as the functional imaging, was used. Since the fibre-like structures are the relevant signals, the averaged frame from each depth was passed through a filter available as a plug-in in ImageJ ("Tubeness" 80), with the sigma parameter set to 2 µm.

Normalised difference
For each trial, the average value of the relative fluorescence change was calculated for the odour period (first 1 second after the odour onset) and the post-odour period (1-3 s after the odour onset).
The normalised difference between the S+ and S- response amplitudes for the odour period, as well as the post-odour period, was calculated from the trial-wise evoked responses, where i denotes the trial index, n is the number of trials, x is the evoked fluorescence change in response to the rewarded odour, and y is the evoked fluorescence change in response to the unrewarded odour.

Statistics
Divergent responders
To determine if an ROI showed a divergent response, for each ROI the odour-evoked and post-odour response amplitudes for S+ and S- trials were tested for statistical significance using the Wilcoxon rank-sum test. Summary transients presented in figures show mean ± SEM, unless otherwise stated.

Discrimination time
The method for determining the discrimination time was modified from 81. Cumulative histograms of the detected licks were calculated for all trials using 50 ms time bins.
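The display equation for the normalised difference did not survive extraction from the source PDF. The following Python sketch is a hedged reconstruction of this analysis step: the exact normalisation is an assumption (taken here as the difference of the mean evoked responses divided by the sum of their absolute values), and the frame-window indices are illustrative values, not the authors' code.

```python
import numpy as np

def delta_f_over_f(frames: np.ndarray, baseline_frames: int = 100) -> np.ndarray:
    """Per-trial dF/F. `frames` has shape (n_trials, n_frames); the baseline is
    the mean over the frames preceding odour onset."""
    baseline = frames[:, :baseline_frames].mean(axis=1, keepdims=True)
    return (frames - baseline) / baseline

def evoked_amplitude(dff: np.ndarray, window: slice) -> np.ndarray:
    """Mean dF/F in an analysis window (e.g. odour or post-odour period)."""
    return dff[:, window].mean(axis=1)

def normalised_difference(x: np.ndarray, y: np.ndarray) -> float:
    """Assumed form: (mean S+ - mean S-) / (|mean S+| + |mean S-|).
    x: evoked responses on rewarded (S+) trials; y: unrewarded (S-) trials.
    The small constant only guards the toy example against division by zero."""
    mx, my = float(x.mean()), float(y.mean())
    return (mx - my) / (abs(mx) + abs(my) + 1e-12)

# Toy usage with hypothetical numbers: ~30 Hz, 400 frames/trial, 100 baseline
# frames; the odour window covers roughly 1 s after onset.
rng = np.random.default_rng(0)
splus = delta_f_over_f(rng.normal(100.0, 1.0, (40, 400)))
sminus = delta_f_over_f(rng.normal(100.0, 1.0, (40, 400)))
odour_win = slice(100, 130)
x = evoked_amplitude(splus, odour_win)
y = evoked_amplitude(sminus, odour_win)
print(normalised_difference(x, y))
```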
Behavioural paradigms to decouple reward association while disengaging mice (middle) or engaging mice (Random association). In disengagement sessions, reward was delivered every trial, preceding odour presentations. In random association, reward followed both mixtures of EB and MB 50% of the time. C, Timeline of experiments. Mice first performed fine olfactory discrimination, then went through either disengagement or random association sessions. Imaging took place from day 2 in both cases. D, Number of anticipatory licks (licks in a 3-second window from odour onset) for the two odours for three behavioural | 2023-08-22T13:12:19.106Z | 2023-09-15T00:00:00.000 | {
"year": 2023,
"sha1": "48b0cd80711a505f09b7997100e9f96791671c88",
"oa_license": "CCBYNC",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/08/18/2023.08.17.553686.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "1ad66f6f8435d4eaa415399c1bec2bfe5c78b68f",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Biology"
]
} |
250665566 | pes2o/s2orc | v3-fos-license | X-ray fluorescence microscopy of olfactory receptor neurons
We report an x-ray fluorescence microscopy study of cells and tissues from the olfactory system of Xenopus laevis. In this experiment we focus on sample preparation and experimental issues, and present first results of fluorescence maps of the elemental distribution of Cl, K, Ca, P, S and Na, both in individual isolated neural cells and in cross-sections of the same tissue.
Introduction
The sense of smell is concerned with the parallel and simultaneous detection of a multitude of molecular structures. Olfactory signal processing includes transport of odorants through the mucus to the transduction compartments, either stereocilia or microvilli, issuing from the dendrite of an olfactory sensory cell into the mucus [1]. As both stereocilia and microvilli have diameters between 100 and 200 nm, conventional optical microscopy is not sufficient to elucidate their fine structure. The olfactory signal pathway presents a tremendous analytical challenge regarding chemical and spatial resolution. The aim of the present investigations was to see to what extent x-ray fluorescence microscopy and x-ray spectromicroscopy are suitable, at the presently achievable resolution and sensitivity, to address this problem. In order to understand complex biomedical issues involving inorganic elements, the chemical speciation of the elements must be considered: oxidation state, coordination, and/or complex or molecular structure [2,3].
2. Sample preparation for x-ray microscopy experiments

X-ray fluorescence and x-ray spectromicroscopy are in principle well-suited methods to meet these high requirements for studying the biological structures of interest. However, a major challenge is given by the need to prevent highly mobile molecular species, such as ions, from rearranging during sample fixation and preparation. Preservation of biological samples from x-ray damage during the measurements is a second important issue, excluding high-resolution experiments in aqueous solution without fixation. While cryo microscopy is the best solution towards the goals of structural isomorphism and reduced radiation damage, it is also technically the most challenging one. Here we used a freeze-drying method after rapid cryogenic plunging, as well as embedding samples in a styrene-methacrylate mixture and cutting 1 µm thin slices for analysis. Two analytical procedures were thus used for cell preparation and x-ray fluorescence measurement at room temperature: after the freeze-drying protocol, cell and tissue samples were prepared in two different ways, either measured directly without any fixation, or prepared with the classical electron microscopy embedding procedure using a styrene-methacrylate mixture.
Cryo-preserving and freeze-drying method: The samples of olfactory receptor neurons (ORNs) were freshly isolated from the nasal epithelium of larvae of Xenopus laevis (stage 54-56) using mechanical trituration in divalent-free buffer solution and short incubation in papain [4,5]. Isolated cells were carefully transferred into a Ringer solution containing 0.05 M ammonium acetate buffer, pH 7.8 (1:1 w:w). The cells, suspended in 5 µl of solution, were placed on a 200 nm thick Si3N4 membrane, which had been coated previously with 0.1% laminin and 0.01% poly-L-lysine. All cells used in this study were clearly identified prior to the measurements as ORNs on the basis of their characteristic morphology, having a single thick dendrite with a knob-like swelling from which emanated 3-10 cilia. Cells naturally attached to the substrate after approximately 3 min. The samples were rapidly frozen in liquid ethane cooled in liquid nitrogen, and subsequently quickly transferred into liquid nitrogen until freeze-drying at -70 °C for two days. Samples were then very slowly warmed up to room temperature over 24 h and stored over silica gel in a desiccator.
Embedding the neural tissue for x-ray microscopy: Samples isolated from the nasal epithelium of larvae of Xenopus laevis were rapidly frozen in a mixture of propane:isopentane (2:1) cooled with liquid nitrogen to -196 °C in an aluminum mesh, freeze-dried at -70 °C for three days, and stored at room temperature over silica gel in a desiccator. Freeze-dried samples were infiltrated with ether in a vacuum-pressure chamber and embedded in styrene-methacrylate using a technique specifically developed for the analysis of diffusible elements. 1 µm thick sections were cut dry with glass knives, mounted on adhesive 100-mesh hexagonal Cu grids (G-100 hex, Canemco and Marivac Inc.), coated with carbon, and stored over silica gel.
X-ray fluorescence: The elemental distribution and elemental speciation of isolated neural receptor cells with intact cilia were measured by scanning x-ray microscopy, using the combined capabilities of the scanning transmission x-ray microscope (STXM) at the ESRF ID21 beamline, as well as the nanoprobe imaging system at the ID22NI beamline, in two separate beamtimes. The specifications of these beamlines are given elsewhere [6,7]. The following ions are of special importance in the neural receptor cells: Cl, K, Ca, P, S and Na, as well as the trace elements Zn, Fe and Cu. The best fluorescence results, without x-ray damage, were obtained at a photon energy of 4.1 keV at ID21. Typical fluorescence maps of individual cells (analyzed with the PyMca software) and cross-sections of the embedded olfactory epithelium are shown in [4]. The main result of this study was the localization of Cl, mainly in the soma of the receptor cell, but also in patches along the dendrite. We hypothesise that these patches of high chloride concentration may correspond to acidic intracellular compartments. This can be deduced in particular from the single-cell maps (Fig. 2). While Cl was expected to be located primarily in the cilia, it appears that an intracellular gradient, and possibly a transport mechanism, exists in this kind of receptor cell. This is consistent with recent immunocytochemical results indicating Na+-K+-2Cl- co-transporters to be located in the somata and dendrites [8].
Chloride was also observed to accumulate in globular structures close to the plasma membrane. The formation of spurious NaCl salt as an artefact of the preparation can be excluded from the simultaneously measured Na map. In our analysis, Na, K and Ca were distributed to variable extents in the different parts of the cell, their distributions showing some overlap (Fig. 2). These differences show that the distribution is not dominated by a single functional architecture, such as homogeneously expressed, membrane-associated transporters or co-transporters. The comparison of the two presented methods of sample preparation shows that individual ORNs are better suited for elemental mapping than tissue slices. On the other hand, thin tissue slices can potentially help to address questions such as the odorant transport within the mucus. Even if the preparation of dissociated ORNs is more invasive, it certainly gives reliable results as to the element distributions.
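Since the NaCl-artefact argument rests on comparing the simultaneously measured Cl and Na maps, a pixel-wise co-localization test makes the logic concrete. The following Python sketch is illustrative only: the map arrays and the cell mask are hypothetical inputs, and the published analysis used the PyMca software rather than this helper.

```python
import numpy as np

def colocalization(cl_map: np.ndarray, na_map: np.ndarray, mask: np.ndarray) -> float:
    """Pearson correlation between two elemental maps within a cell mask.
    A correlation near 1 inside the soma would hint at co-deposited NaCl;
    a low value supports independent Cl and Na distributions."""
    cl = cl_map[mask].astype(float)
    na = na_map[mask].astype(float)
    cl -= cl.mean()
    na -= na.mean()
    return float((cl * na).sum() / np.sqrt((cl**2).sum() * (na**2).sum()))

# Hypothetical 2D fluorescence maps (counts per pixel) and a boolean cell mask.
rng = np.random.default_rng(1)
cl_map = rng.poisson(50, (64, 64))
na_map = rng.poisson(30, (64, 64))
mask = np.ones((64, 64), dtype=bool)
print(colocalization(cl_map, na_map, mask))
```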
"year": 2009,
"sha1": "97e1c45da8f133384fab09aca893a29327b6915a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/186/1/012083",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "97e1c45da8f133384fab09aca893a29327b6915a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics"
]
} |
225553871 | pes2o/s2orc | v3-fos-license | Intraoperative image guidance for the surgical treatment of adult spinal deformity
Operative management of adult spinal deformity (ASD) has been increasing in recent years secondary to an aging society. The advance of intraoperative image guidance, such as the development of navigation and robotics systems, has contributed to the growth and safety of ASD surgery. Currently, intraoperative image guidance is mainly used for pedicle screw placement and the evaluation of alignment correction in ASD surgery. Though the use of navigation and robotics is expected to increase pedicle screw accuracy, as reported for other spine surgeries, there are no well-powered studies specifically focusing on ASD surgery. Currently, deformity correction relies heavily on preoperative planning; however, a few studies have shown the possibility that intraoperative image modalities may accurately predict postoperative spinopelvic parameters. Future developments of intraoperative image guidance are needed to overcome the remaining challenges in ASD surgery, such as radiation exposure to patient and surgeon. Novel imaging modalities may drive further evolution of ASD surgery. Overall, there is a paucity of literature focusing on intraoperative image guidance in ASD surgery; therefore, further studies are warranted to assess its efficacy. This narrative review sought to provide the current role and future perspectives of intraoperative image guidance focusing on ASD surgery.
Introduction
Adult spinal deformity (ASD) is a degenerative disease involving three-dimensional deformity in the alignment of the spinal column throughout the aging process (1,2). The clinical presentation of ASD varies greatly depending on the type and severity, from minimal or no symptoms to severe back and leg pain, with malalignment resulting in disability in standing, walking, eating, and sleeping (3). It can have a debilitating impact on overall health, often exceeding the disability of more recognized chronic diseases (4,5). The prevalence of spinal deformity is high, with reports of 65% in those older than 60 years (6). Along with an aging society, the demand for the treatment of ASD has been increasing.
The history of deformity surgery began in the early 20th century, and the challenge to correct deformity began in the late 20th century with Harrington rods and hooks used to distract and compress across the deformity (7,8). Initially, however, surgical treatment was not popular in ASD because it was associated with a relatively high morbidity rate, resulting from a large amount of blood loss, long operative duration, and complications. Improvements in anesthesia and critical care, surgical techniques, and instrumentation subsequently led to remarkable advances in ASD surgery, with patients able to undergo safer, faster, and more reproducible surgery (9). Around the same time, using radiographic parameters, the Scoliosis Research Society (SRS)-Schwab classification established a framework for characterizing spinopelvic malalignment and providing realignment targets for surgeons, which has led to an explosion in ASD volume (10).
The goal of ASD surgery is to correct the deformity based on appropriate spinopelvic parameters, as these parameters have been found to correlate with patient-reported outcomes (11,12). Current surgical techniques for ASD are based on posterior fusion with pedicle screws and rods, if necessary in conjunction with lateral and anterior fusion or osteotomy (9) (Figure 1). Intraoperative image guidance is essential in those procedures. Newer image modalities such as navigation and robotics are gaining traction in ASD surgery, as in other spine surgeries.
The aim of the present narrative review is to assess the current role and future perspectives of intraoperative image guidance focusing on ASD surgery.
Pedicle screw placement
Segmental pedicle screw fixation is the anchor used to correct deformity in ASD surgery. It enables deformity correction in multiple planes with rigid fixation (13). Intraoperative image guidance is particularly useful in ASD surgery, where three-dimensional deformity with dysmorphic pedicles makes visualizing the normal anatomical landmarks for pedicle screw insertion difficult. Pedicle screws are traditionally placed using a freehand technique, mostly with fluoroscopic guidance (14,15). Currently, navigation technology is widely used in various spine surgeries, and most studies have reported increased accuracy of pedicle screw placement (Table 1). Several studies have suggested that the utilization of navigation results in improved accuracy of pedicle screw placement. One study examined patients with kyphosis due to tuberculosis or Scheuermann's disease and showed that pedicle breaches were significantly fewer in the navigation group (2%) compared to the fluoroscopy group (23%) (17). While the navigated cohort in this study did include adult patients up to 52 years old, the average age of this cohort was 19.6 years, and the data were not granular enough to perform an analysis of the ASD cohort in isolation. Jin et al. reported that the accuracy rate of pedicle screw placement was significantly higher in patients using navigation (79% with breach less than 2 mm) compared to patients using fluoroscopy (67% with breach less than 2 mm) for 32 patients with dystrophic neurofibromatosis type 1-associated scoliosis (27). However, these are studies focusing primarily on pediatric deformity surgery, and not specifically on ASD surgery. This patient population is similar to an ASD cohort in terms of three-dimensional deformity and dysmorphic pedicles with tilt and rotation; however, there are differences in the diameter and yielding of the pedicle and in the presence of osteopenia/osteoporosis. To our knowledge, there is currently a lack of literature focusing specifically on isolated ASD surgery, though there are a couple of studies including ASD patients as a small portion of their subjects. Further investigations focusing on ASD surgery are warranted to clarify the benefits of navigation for the accuracy of pedicle screw placement. In addition to image guidance, robotic technology is also rapidly gaining traction (Figure 2). Some studies have reported that the use of robotics statistically increased pedicle screw accuracy in spine surgery, though the overall findings are still controversial (Table 1). Little literature has investigated pedicle screw accuracy in deformity surgery using robotics. Macke et al. retrospectively examined robotic-assisted pedicle screw placement in 50 patients with adolescent idiopathic scoliosis and reported that the proper use of image-guided, robot-assisted surgery can improve the accuracy and safety of thoracic pedicle screw placement, with only 7.2% of pedicle screws showing a breach greater than 2 mm (28). Further studies are needed to assess the advantages of robotics in ASD surgery, specifically related to pedicle screw placement accuracy.
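The 2 mm breach threshold quoted in these studies corresponds to the widely used Gertzbein-Robbins grading of pedicle screw accuracy (grade A: fully intrapedicular; B: breach < 2 mm; C: 2-4 mm; D: 4-6 mm; E: > 6 mm). As a small illustrative sketch, not code from any cited study, accuracy can be tabulated from measured breach distances as follows:

```python
from collections import Counter

def gertzbein_robbins(breach_mm: float) -> str:
    """Grade a pedicle screw by cortical breach distance (Gertzbein-Robbins)."""
    if breach_mm <= 0:
        return "A"   # fully intrapedicular
    if breach_mm < 2:
        return "B"
    if breach_mm < 4:
        return "C"
    if breach_mm < 6:
        return "D"
    return "E"

breaches = [0, 0, 1.2, 0, 2.5, 0, 0.5, 4.8, 0, 0]   # hypothetical measurements
grades = Counter(gertzbein_robbins(b) for b in breaches)
# Grades A and B are commonly pooled as "clinically acceptable" placement.
acceptable = (grades["A"] + grades["B"]) / len(breaches)
print(grades, f"accuracy (A+B): {acceptable:.0%}")
```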
Alignment correction
Alignment correction is the nucleus of ASD surgery. With pedicle screws connected to dual rods, the deformity is corrected using various techniques including translation, distraction-compression, rod de-rotation, direct vertebral de-rotation, cantilever techniques, in situ bending, or vertebral coplanar alignment (31). In one study, the intraoperative film was calibrated using a validated method based on pelvic incidence and the sacral to bicoxofemoral axis distance (32). The authors showed that intraoperative measurements of TPA, T4PA, and T9PA strongly correlated with postoperative global alignment. However, this process remains cumbersome, can take several minutes, and increases the risk of contamination of the sterile field with the introduction of numerous, nonsterile moving parts. There is still a paucity of literature on this topic; therefore, further studies are needed to show that postoperative alignment can be predicted accurately by intraoperative image guidance. The advancement of intraoperative image guidance may enable more precise and efficient assessment of alignment.
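Parameters such as TPA reduce to angles between landmark-defined lines on a calibrated sagittal film. The sketch below assumes the usual definition of the T1 pelvic angle (the angle, at the bicoxofemoral axis, between the line to the centre of the T1 vertebral body and the line to the midpoint of the S1 superior endplate, with T4PA and T9PA substituting T4 or T9); the landmark coordinates are hypothetical digitized values, not data from any cited study.

```python
import numpy as np

def vertebral_pelvic_angle(vert_centre, femoral_head_axis, s1_midpoint) -> float:
    """Angle (degrees) at the femoral-head (bicoxofemoral) axis between the
    line to a vertebral body centre (T1 -> TPA, T4 -> T4PA, T9 -> T9PA) and
    the line to the midpoint of the S1 superior endplate."""
    v1 = np.asarray(vert_centre, float) - np.asarray(femoral_head_axis, float)
    v2 = np.asarray(s1_midpoint, float) - np.asarray(femoral_head_axis, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical 2D landmark coordinates (mm) digitized from a lateral film.
t1 = (60.0, 480.0)
fh = (0.0, 0.0)
s1 = (35.0, 55.0)
print(f"TPA = {vertebral_pelvic_angle(t1, fh, s1):.1f} deg")
```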
Future perspectives of intraoperative image guidance in ASD surgery
In the past two decades, ASD surgery has made rapid progress in parallel with the advancement of intraoperative imaging, image guidance, and robotics. Surgical treatment previously had limited applicability to ASD patients; recently, however, ASD patients may benefit from less invasive treatments with outcomes that demonstrate an improved quality of life (33). Nevertheless, some challenging problems remain, notably that approximately 70% of patients experience at least one complication postoperatively, and approximately 30% require at least one revision procedure (34).
In terms of intraoperative image guidance, decreasing radiation exposure is of great importance for patients, surgeons, and operating room staff (35). In addition to radiation during surgery, ASD patients are often exposed to radiation in the form of pre- and post-operative full-length standing radiographs and CT scans to monitor alignment over time. On the other hand, orthopedic surgeons, and in particular spine surgeons, are routinely exposed to intraoperative radiation, resulting in a higher cancer risk compared to surgeons in other fields (36). Several studies demonstrated that the use of navigation significantly decreased radiation exposure compared to conventional fluoroscopy in spine surgery (Table 2). Usually, many more pedicle screws are inserted in ASD surgery compared to other spine surgeries. Therefore, the decrease in radiation exposure due to navigation use may be particularly advantageous in ASD surgery, though there is a paucity of literature focusing specifically on this subset of patients.
Although the development of intraoperative image guidance has progressed rapidly in recent years, further advancements are anticipated. Currently, there is little compatibility of navigated tools across companies. Additionally, adoption of image-guided and navigated operating room setups may be hindered by the initial purchase cost and the maintenance of the system (43). If these issues are solved, these newer modalities will spread even more widely. Moreover, novel intraoperative image guidance technologies, such as augmented reality (AR), are in development and may work in conjunction with future robotics technology. This may enable surgeons to obtain real-time navigation information without having to look away from the patient.
Conclusions
The advance of intraoperative image guidance and robotics, along with all advances associated with surgical treatment, has led to improved safety and effectiveness of ASD surgery in recent years. Navigation and robotic systems are expected to improve outcomes in ASD surgery, as reported in other spine surgeries; however, there is currently a paucity of literature focusing on intraoperative image guidance in this patient population. Further studies with a specific focus on ASD surgery are warranted to assess the efficacy of intraoperative image guidance.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/. | 2020-10-30T08:05:12.642Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "9bd88a0ba74dce2b057904c74a5788051e7277d4",
"oa_license": "CCBYNCND",
"oa_url": "https://atm.amegroups.com/article/viewFile/47315/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d47e2a61a05937289e22989c32be9c8ac283f933",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119346316 | pes2o/s2orc | v3-fos-license | Twonniers: Interaction-induced effects on Bose-Hubbard parameters
We study the effects of the repulsive on-site interactions on the broadening of the localized Wannier functions used for calculating the parameters that describe ultracold atoms in optical lattices. For this, we replace the common single-particle Wannier functions, which do not contain any information about the interactions, by two-particle Wannier functions ("Twonniers") obtained from an exact solution which takes the interactions into account. We then use these interaction-dependent basis functions to calculate the Bose-Hubbard model parameters, showing that, at both low and high lattice depths, they are substantially different from the ones calculated using single-particle Wannier functions. Our results suggest that density effects are not negligible for many parameter ranges and need to be taken into account in metrology experiments.
I. INTRODUCTION
Ultracold atoms in optical lattices have been a recent topic of significant interest, as they can be used to perform quantum simulations of fundamental models of many-body physics, which are often difficult to access using traditional condensed matter systems [1-3]. The perfect periodicity of optical lattices makes it possible to mimic the crystalline environments that electrons experience in solids, and unprecedented control over the kinetic properties of the atoms is possible by tuning the lattice depths. Furthermore, the interaction properties between the ultracold atoms can be changed using techniques like Feshbach resonances. This has opened up many new avenues of research, particularly in the fields of condensed matter and atomic physics, and made it possible to study quantum phases and quantum phase transitions over a wide range of parameters [1-4].
Theoretically, ultracold atoms in optical lattices can be described by a Bose-Hubbard model [5-8], which stems from a mapping of the continuous system to the lattice using site-localized single-particle Wannier functions. The static and dynamic properties of the gas are then described by two main parameters: the hopping term, which accounts for bosons tunneling between neighboring sites, and the on-site interaction term, which accounts for the repulsive energy when two particles sit at the same lattice site. The competition between these parameters (commonly determined by calculating overlap integrals using single-particle Wannier functions) characterizes the Mott-insulator/superfluid transition [1].
However, while mathematically convenient, single-particle Wannier functions neglect certain physical effects, such as the broadening of the localized wave functions due to repulsive on-site interactions when two or more bosons occupy the same lattice site. This can have significant effects when trying to make precision measurements [9] or when using optical lattices for metrology [10], as the energy scales that govern the behavior of the atoms are typically small.
Our presentation is organized as follows. In Sec. II we provide a brief review of the conventional way of calculating the Hubbard parameters using the single-particle Wannier function approach. Then, in Sec. III we introduce the two-particle wave functions that include the interaction effects by solving the two-particle Schrödinger equation with contact interaction. These wave functions are used in Sec. IV to calculate the parameters of the modified Bose-Hubbard Hamiltonian, which are interpreted in Sec. V in comparison to those obtained from single-particle Wannier functions. Finally, we discuss possible applications and conclude in Sec. VI.
II. THE BOSE-HUBBARD MODEL
The starting point for our analysis is the Hamiltonian for a Bose gas, given by

$$\hat{H} = \hat{H}_0 + \hat{H}_I, \qquad (1)$$

where the single-particle term includes the kinetic energy and the optical lattice potential,

$$\hat{H}_0 = \int d\mathbf{r}\, \hat{\Psi}^\dagger(\mathbf{r}) \left[ -\frac{\hbar^2}{2m}\nabla^2 + V_L(\mathbf{r}) \right] \hat{\Psi}(\mathbf{r}). \qquad (2)$$

Here m is the atomic mass. The term including the pointlike interactions is given by

$$\hat{H}_I = \frac{g}{2} \int d\mathbf{r}\, \hat{\Psi}^\dagger(\mathbf{r}) \hat{\Psi}^\dagger(\mathbf{r}) \hat{\Psi}(\mathbf{r}) \hat{\Psi}(\mathbf{r}), \qquad (3)$$

where $g = 4\pi\hbar^2 a_s/m$ is the interaction strength related to the s-wave scattering length, $a_s$. The bosonic field operators, $\hat{\Psi}$ and $\hat{\Psi}^\dagger$, can be expanded into a series of orthonormal functions, $f_i(\mathbf{r})$, and bosonic annihilation and creation operators, $\hat{a}_i$ and $\hat{a}_i^\dagger$, for each lattice site as

$$\hat{\Psi}(\mathbf{r}) = \sum_i f_i(\mathbf{r})\, \hat{a}_i, \qquad \hat{\Psi}^\dagger(\mathbf{r}) = \sum_i f_i^*(\mathbf{r})\, \hat{a}_i^\dagger. \qquad (4)$$
The hopping amplitude in the Bose-Hubbard model can then be calculated as where only the nearest-neighbor overlaps are taken into account, and the interaction part of the Hamiltonian leads to the onsite interaction amplitude
III. TWO-PARTICLE WAVE FUNCTIONS
The effect of the repulsive scattering interaction depends on both the interaction strength g and the density distribution of the wave function (see Eq. (3)). Therefore, it is important to choose the correct form for the orthonormal functions with which one performs the expansion: since the interactions are local and the functions are localized, the density distribution should take the interaction into account if two (or more) particles are at the same lattice site. We will therefore in the following replace terms of the form $f_i(\mathbf{r})f_i(\mathbf{r})$ by two-particle Wannier functions, but leave terms of the form $f_i(\mathbf{r})f_j(\mathbf{r})$ ($i \neq j$) to be described by single-particle Wannier functions.
To find the two-particle Wannier functions we solve the Schrödinger equation for two particles in a sinusoidal potential, $V_L(\mathbf{r})$, interacting via a point-like potential. The Hamiltonian is given by

$$\hat{H}_2 = \sum_{n=1,2} \left[ -\frac{\hbar^2}{2m}\nabla_{\mathbf{r}_n}^2 + V_L(\mathbf{r}_n) \right] + g\,\delta(\mathbf{r}_1 - \mathbf{r}_2), \qquad (9)$$

and its corresponding delocalized eigenfunctions $\Phi_j(\mathbf{r}_1, \mathbf{r}_2)$ can be used as a basis to construct the localized (two-particle) functions

$$W_i(\mathbf{r}_1, \mathbf{r}_2) = \sum_j c_j\, \Phi_j(\mathbf{r}_1, \mathbf{r}_2). \qquad (10)$$

Since the interactions raise the energies, we use the eigenfunctions of the two lowest bands. To determine the coefficients $c_j$, we assume that the particles are well localized at each lattice site, using as the criterion for localization the minimization of the second moment [18]

$$\sigma_i^2 = \int d\mathbf{r}_1 \int d\mathbf{r}_2 \left[ (\mathbf{r}_1 - \mathbf{r}_i^0)^2 + (\mathbf{r}_2 - \mathbf{r}_i^0)^2 \right] |W_i(\mathbf{r}_1, \mathbf{r}_2)|^2. \qquad (11)$$

This allows us to define the single-particle single-site densities from the two-particle wave functions as $|W_i(\mathbf{r}, \mathbf{r})|$.
In order to fulfill the orthogonality condition in Eq. (4) this density needs to be normalized as

$$\widetilde{W}_i(\mathbf{r}) = \frac{|W_i(\mathbf{r}, \mathbf{r})|}{\int d\mathbf{r}'\, |W_i(\mathbf{r}', \mathbf{r}')|}, \qquad (12)$$

which also assures the fulfilment of the particle statistics,

$$W_i(\mathbf{r}_1, \mathbf{r}_2) = W_i(\mathbf{r}_2, \mathbf{r}_1). \qquad (13)$$

Fig. 1 (caption): The red (solid) line corresponds to the two-particle single-site density obtained after numerically solving the Schrödinger equation with the Hamiltonian (9) for 9 traps, with lattice depth $V_0 = 1.5 E_r$ and scattering length $a_s = 100 a_0$, where $a_0$ is the Bohr radius. The blue (dashed) line corresponds to the square of the single-particle Wannier function for the same lattice parameters. The lattice depth is given in units of the recoil energy $E_r = \pi^2 \hbar^2 / 2ma^2$, where a is the lattice spacing of the sinusoidal optical lattice potential. The inset shows a zoom-in on the tails of the densities, clearly showing the broadening of the two-particle density compared to the density of the single-particle Wannier function.
To compare the single-particle and two-particle Wannier functions, we show in Fig. 1 their respective densities computed in a one-dimensional potential V_L(x) = V₀ sin²(πx/a). One can clearly see that, as expected, the repulsive interaction leads to a broadening of the two-particle Wannier function, which eventually results in a significant change in the Bose-Hubbard parameters. However, one can also see that the wings of the two-particle Wannier function at the position of the neighbouring lattice sites are suppressed, which is due to the orthogonality requirement between two of the modified Wannier functions.
In the next section, we use this two-particle wave function and density to construct the different terms in the Hamiltonian and compare them to the ones using only single-particle Wannier function solutions.
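A direct way to obtain the delocalized eigenfunctions Φ_j of Eq. (9) in one dimension is to discretize both coordinates on a grid and represent the contact interaction as a diagonal term of weight g/Δx on x₁ = x₂. The sketch below does this with sparse matrices; the grid, lattice depth and coupling are illustrative assumptions, and the localization step of Eqs. (10) and (11) is only indicated.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

# 1D sketch: two bosons in V(x) = V0 sin^2(pi x / a) with a contact interaction
# g1d * delta(x1 - x2), discretized on a grid. Units: lengths in the lattice
# spacing a, energies in Er = hbar^2 pi^2 / (2 m a^2). Values are illustrative.
V0, g1d = 1.5, 0.2            # lattice depth and 1D coupling (assumed values)
n_sites, nx = 9, 135          # 9 traps as in the paper's numerics
x = np.linspace(-n_sites / 2, n_sites / 2, nx, endpoint=False)
dx = x[1] - x[0]

# One-particle Hamiltonian H1 = -(1/pi^2) d^2/dx^2 + V(x) (finite differences)
lap = sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2
H1 = -lap / np.pi**2 + sparse.diags(V0 * np.sin(np.pi * x) ** 2)

# Two-particle Hamiltonian of Eq. (9); delta(x1 - x2) -> g1d/dx on x1 == x2
I = sparse.identity(nx)
H2 = sparse.kron(H1, I) + sparse.kron(I, H1)
contact = np.zeros(nx * nx)
contact[np.arange(nx) * (nx + 1)] = g1d / dx
H2 = (H2 + sparse.diags(contact)).tocsc()

# Delocalized eigenfunctions Phi_j(x1, x2); the localized two-particle Wannier
# functions would then be built as in Eq. (10) by choosing the coefficients c_j
# that minimize the second moment of Eq. (11).
E, Phi = eigsh(H2, k=2 * n_sites, which="SA")
psi0 = Phi[:, 0].reshape(nx, nx)
print("lowest two-particle energies (Er):", np.round(E[:4], 4))
print("peak of the single-site-type density |Psi(x, x)|:", np.abs(np.diag(psi0)).max())
```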
IV. MODIFIED BOSE-HUBBARD HAMILTONIAN
The effects of the interactions between the particles are fully contained in the interaction term Ĥ_I, which, after inserting the expansion of Eq. (4), takes the form

Ĥ_I = (1/2) Σ_{ijkl} U_{ijkl} â_i† â_j† â_k â_l,  with  U_{ijkl} = g ∫ dr f_i*(r) f_j*(r) f_k(r) f_l(r).  (14)

As we are only interested in the ground state, the Wannier functions and the two-particle wave functions based on Eq. (9) can be chosen to be real and we will therefore neglect the complex conjugates below. The parameters U_{ijkl} can then be calculated using the substitution

f_i(r) f_i(r) → W_i(r, r),  (15)

which should be compared to the standard way of calculating using single-particle Wannier functions,

f_i(r) → w_i(r).  (16)

Here we have introduced the labels W and w which will be used below to distinguish, respectively, terms calculated from the two-particle Wannier function density or from single-particle Wannier functions. The hopping term in the Bose-Hubbard model depends only on the single-particle Wannier functions, as it comes from the non-interacting part of the Hamiltonian (2), and is therefore not affected by these substitutions.
To explicitly identify the different physical processes that are summarized in the interaction term, we will in the following group the different terms into four categories. The first one is the one where two particles are at the same site and interact with each other. The associated terms include â_i† â_i† â_i â_i and their corresponding amplitude is given by

U_{iiii} = g ∫ dr f_i⁴(r),

which under the substitutions of Eqs. (15) and (16) becomes

U^W_{iiii} = g ∫ dr |W_i(r, r)|²  or  U^w_{iiii} = g ∫ dr |w_i(r)|⁴.

The second group corresponds to terms with operators â_i† â_i† â_j â_j, (i ≠ j), which describe the joint tunneling of two particles between two neighbouring lattice sites, i.e. the particles hop together from one lattice site to another. The coupling amplitudes associated with this process are given by

U_{iijj} = g ∫ dr f_i²(r) f_j²(r),

and become after substitution

U^W_{iijj} = g ∫ dr W_i(r, r) W_j(r, r)  or  U^w_{iijj} = g ∫ dr |w_i(r)|² |w_j(r)|².
The next effect is associated with terms including â_i† â_j† â_i â_j, and it can be interpreted as two indistinguishable processes: the interaction between particles at neighbouring sites or cross tunneling of particles. As these processes only involve a single particle at each site, one gets U^W_{ijij} = U^w_{ijij} = U^w_{iijj}. Finally, the last effect is associated with terms including â_i† â_i† â_i â_j, which describes single-particle tunneling between an empty and an already occupied neighbouring trap. The coupling amplitudes for this process are given by

U_{iiij} = g ∫ dr f_i³(r) f_j(r),

which, after the substitutions, become

U^W_{iiij} = g ∫ dr W_i(r, r) w_i(r) w_j(r)  or  U^w_{iiij} = g ∫ dr w_i³(r) w_j(r).

FIG. 2. Top row: Bose-Hubbard parameters as a function of the lattice depth for (a) a_s = 100 a₀ and (b) a_s = 400 a₀. The insets show the behavior for shallow lattices. For the numerical calculation 9 traps have been taken into account. Bottom row: Ratios of (c) U_{iiii}, (d) U_{iijj}, and (e) U_{iiij}, calculated with the two methods (single-particle and two-particle Wannier functions) for a_s = 100 a₀ (solid blue) and a_s = 400 a₀ (dashed red).
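Once the single-particle Wannier function and the normalized two-particle single-site density are available on a grid, all of the above amplitudes reduce to one-dimensional quadratures. The following sketch illustrates this for U_{iiii} and U_{iijj} using Gaussian stand-ins for the two densities; the widths and the coupling are arbitrary illustrative values, while the true inputs would come from the band-structure and two-particle calculations sketched above.

```python
import numpy as np

# Quadrature sketch for the overlap integrals after the substitutions of
# Eqs. (15) and (16). Gaussian profiles are used as illustrative stand-ins.
x = np.linspace(-2.0, 2.0, 4001)      # position in units of the lattice spacing a
dx = x[1] - x[0]
g1d = 1.0                             # 1D coupling constant (arbitrary units)

w = np.exp(-x**2 / (2 * 0.15**2))
w /= np.sqrt(np.sum(w**2) * dx)       # wavefunction normalization of w(x)

Wd = np.exp(-x**2 / (2 * 0.18**2))    # broadened two-particle density W(x, x)
Wd /= np.sum(Wd) * dx                 # density normalization, Eq. (12)

U_w_iiii = g1d * np.sum(w**4) * dx            # standard single-particle result
U_W_iiii = g1d * np.sum(Wd**2) * dx           # two-particle Wannier result

shift = int(round(1.0 / dx))                  # neighboring site, one spacing away
U_w_iijj = g1d * np.sum(w**2 * np.roll(w, shift)**2) * dx
U_W_iijj = g1d * np.sum(Wd * np.roll(Wd, shift)) * dx

print(f"U_iiii: w -> {U_w_iiii:.4f}, W -> {U_W_iiii:.4f}")
print(f"U_iijj: w -> {U_w_iijj:.3e}, W -> {U_W_iijj:.3e}")
```

With a broader two-particle density the onsite value U^W_{iiii} drops below U^w_{iiii} while the neighbor overlap grows, which is the qualitative trend discussed in the results section.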
V. RESULTS AND DISCUSSIONS
In the following we will numerically compute and compare the interaction parameters for the single-particle and the two-particle Wannier function approach. To avoid complications from the regularized delta function in three dimensions, all calculations are done in one dimension, assuming a tight harmonic confinement of the atoms in the transverse direction (of frequency ω_⊥). However, all calculations are conceptually straightforward to extend to higher dimensions. Adjusting the coupling constant g to one dimension can be done via g_{1D} = 2ℏω_⊥ a_s [19]. In the following we choose ω_⊥ = 2π × 10⁴ Hz. The results for two different values of the scattering length (a_s = 100 a₀ and a_s = 400 a₀) and as a function of the lattice depth are shown in Fig. 2. It can be seen that the overlap integrals U_{iiii}, which describe the on-site interaction, are generally in good agreement with each other for both approaches. The biggest deviations appear for shallow lattices (see Fig. 2(c)), where U^W_{iiii} is smaller than U^w_{iiii}. The difference stems from the fact that the repulsive interaction leads to a broadening of the two-particle density and consequently a reduction in its maximal amplitude, which directly translates into a smaller magnitude of the interaction coefficient for the two-particle Wannier approach. For deeper lattices, i.e. larger potential energies, the broadening is reduced and the two quantities have similar values. The crossing between U_{iiii} and J, which is visible in the inset of Fig. 2(a), corresponds to the parameter range where tunneling starts to dominate over the interaction effects. Since at the crossing point the two relevant values of U_{iiii} differ by about 10%, an effect on the Mott-transition point can be expected.
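For completeness, a small sketch of the one-dimensional reduction of the coupling constant used above; the tight-confinement expression g_{1D} ≈ 2ℏω_⊥ a_s is assumed, and the atomic mass printed for reference is that of 87Rb, which is an assumption not fixed by the text.

```python
from scipy.constants import hbar, pi

# Effective 1D coupling in the tight transverse confinement limit (assumption:
# g_1D ≈ 2*hbar*omega_perp*a_s), for the parameters quoted in the text.
a0 = 5.29177e-11                  # Bohr radius (m)
omega_perp = 2 * pi * 1e4         # transverse trap frequency (rad/s)
for a_s in (100 * a0, 400 * a0):
    g1d = 2 * hbar * omega_perp * a_s
    print(f"a_s = {a_s / a0:.0f} a0  ->  g_1D = {g1d:.3e} J*m")
```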
Similar differences between the two methods can also be noted for the overlap integrals for the correlated pair tunneling, U_{iijj}, where for shallow lattices the integral based on the two-particle Wannier functions is larger than the one based on the single-particle functions. Here the extended size of the localised functions due to the repulsive interactions leads directly to a larger overlap between neighboring sites. On the other hand, for deeper lattices, the pair-tunneling coupling calculated from the two-particle functions becomes an order of magnitude smaller than that from the single-particle functions. This is due to the fact that even at higher lattice depths the single-particle Wannier function density and the two-particle density have different behaviour in their tails, although their bulk densities become almost identical. In this regime, the magnitude of the tail of the single-particle Wannier density is higher than that of the two-particle density, leading to a larger overlap between neighboring densities, and thus to higher values of U_{iijj} (see also Fig. 2(d)). Finally, the density-dependent couplings U_{iiij} show a difference for shallow lattices, which can be explained in the same way as for the interaction terms above (see Fig. 2(e)).
These results are consistent with the situation where the interaction strength is changed while keeping the lattice depth constant (see Fig. 3). The on-site interaction and interaction-mediated tunneling terms, U_{iiii} and U_{iiij}, do not show much difference between the two methods, but the two-particle tunneling coupling, U_{iijj}, is much more severely affected. For a comparatively deep lattice (V₀ = 20 E_r, Fig. 3(b)) the two-particle tunneling amplitude calculated using the two-particle Wannier approach increases faster than the one based on the single-particle Wannier functions, and the two methods do not coincide anywhere in the plotted parameter regime. However, for a shallower lattice (V₀ = 10 E_r, Fig. 3(a)) a crossing can be seen, as the two curves associated with U_{iijj} are closer together. This leads to the conclusion that the effects of the interactions can have a significant influence on the parameters of the Bose-Hubbard model, and should be taken into account in particular in metrology experiments. It also provides justification for the use of extended Bose-Hubbard models [20,21], which take the two-particle tunnelling and the cross-tunnelling terms into account [22][23][24].
VI. POSSIBLE APPLICATIONS AND CONCLUSIONS
To summarize, we have calculated the parameters for the Bose-Hubbard model by consistently including onsite density effects. This was done by replacing the commonly used single-particle Wannier functions by two-particle Wannier functions, which account for the broadening of the density due to the repulsive interactions. Given the experimental control parameters of the optical lattice depth and the scattering length, we have shown that in certain regimes the Bose-Hubbard parameters show substantial deviations from the results using single-particle Wannier functions and that terms such as the correlated pair tunnelling can become important, even though they are usually neglected.
These results are hence of principal interest for current and future experiments in the field of ultracold atoms in optical lattices, especially to account for non-uniform shifts in atomic clock frequencies due to the collision of atoms. In a recent experiment by Campbell et al. [9], the atomic clock shift of 87Rb was measured and found to decrease with an increasing number of atoms per site. Other works have also shown that the clock frequency shift is directly proportional to the onsite interaction strength [25,26]. When calculated using single-particle Wannier functions, the onsite interaction term is independent of the occupancy of the lattice sites, and hence cannot explain the decrease of the clock shift with increasing occupancy. However, the presented technique takes the effect of repulsive interactions into account implicitly, and the resulting broadening of the two-particle single-site density and the decrease of the magnitude of the onsite interaction term U_{iiii} can explain the decrease of the clock shift. | 2017-07-26T10:22:30.000Z | 2017-07-26T00:00:00.000 | {
"year": 2017,
"sha1": "4888b20dcc418bf747f03292ed1c1560dcf7346e",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevA.96.063611",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "4888b20dcc418bf747f03292ed1c1560dcf7346e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
256276800 | pes2o/s2orc | v3-fos-license | Low-density lipoprotein balances T cell metabolism and enhances response to anti-PD-1 blockade in a HCT116 spheroid model
Introduction The discovery of immune checkpoints and the development of their specific inhibitors was acclaimed as a major breakthrough in cancer therapy. However, only a limited patient cohort shows sufficient response to therapy. Hence, there is a need for identifying new checkpoints and predictive biomarkers with the objective of overcoming immune escape and resistance to treatment. Having been associated with both treatment response and failure, LDL seems to be a double-edged sword in anti-PD-1 immunotherapy. Being embedded into complex metabolic conditions, the impact of LDL on distinct immune cells has not been sufficiently addressed. Revealing the effects of LDL on T cell performance in tumor immunity may enable individual treatment adjustments in order to enhance the response to routinely administered immunotherapies in different patient populations. The aim of this work was to investigate the effect of LDL on T cell activation and tumor immunity in-vitro. Methods Experiments were performed with different LDL dosages (LDLlow = 50 μg/ml and LDLhigh = 200 μg/ml) referring to medium control. T cell phenotype, cytokines and metabolism were analyzed. The functional relevance of our findings was studied in a HCT116 spheroid model in the context of anti-PD-1 blockade. Results The key points of our findings showed that LDLhigh skewed the CD4+ T cell subset into a central memory-like phenotype, enhanced the expression of the co-stimulatory marker CD154 (CD40L) and significantly reduced secretion of IL-10. The exhaustion markers PD-1 and LAG-3 were downregulated on both T cell subsets and phenotypical changes were associated with a balanced T cell metabolism, in particular with a significant decrease of reactive oxygen species (ROS). T cell transfer into a HCT116 spheroid model resulted in a significant reduction of the spheroid viability in the presence of an anti-PD-1 antibody combined with LDLhigh. Discussion Further research needs to be conducted to fully understand the impact of LDL on T cells in tumor immunity and, moreover, to unravel LDL effects on other lymphocytes and myeloid cells for improving anti-PD-1 immunotherapy. The reason for improved response might be a resilient, less exhausted phenotype with balanced ROS levels.
Introduction
The discovery of immune checkpoints and the development of their specific inhibitors was acclaimed as a major breakthrough in cancer therapy. Especially blocking the inhibitory receptor PD-1 on immune cells and its ligand PD-L1 on immune and tumor cells has been shown to be associated with an enhanced overall survival in metastatic disease of various tumor entities. However, only a limited patient cohort shows sufficient response to therapy (1). Hitherto, numerous biomarkers have been described that predict response to checkpoint inhibition (2). Recently, cholesterol has been newly identified as a biomarker for the efficacy of PD-1 inhibition (3)(4)(5). Consistent with our own results, Perrone et al., Galli et al. and Tong et al. retrospectively showed that baseline hypercholesterolemia was associated with better outcomes in patients treated with anti-PD-1 checkpoint therapy. In our preliminary exploratory approach (6), we also prospectively demonstrated a positive association.
However, cholesterol seems to be a double-edged sword in tumor immunity, and its role in the tumor environment is not fully understood, as other authors have discussed opposing effects. Ma et al. reported a cholesterol-induced exhaustion of CD8+ T cells in the tumor microenvironment and, furthermore, Khojandi et al. observed a promoted resistance to cancer immunotherapy by oxidized lipoproteins, amongst others mediated by suppression of T cell immunity (7,8). The reason for these seemingly paradoxical findings may lie in the embedding of cholesterol in different complex metabolic conditions. Cholesterol has been identified as a biomarker in cachexia and the metabolic syndrome (9,10). Furthermore, hypercholesterolemia has been associated with atherosclerosis, Alzheimer's disease and cancer, and may exacerbate autoimmune diseases by inducing hyper-activated T cells (11)(12)(13)(14)(15).
To date, mainly macrophages have been perceived as a link between cholesterol and different diseases; however, there is growing evidence for T cells also playing a crucial role (16). Although the details of cholesterol homeostasis have been investigated mainly in hepatocytes and macrophages, the mechanisms of cholesterol biosynthesis, uptake, esterification, and efflux also apply to T cells (11,12). Furthermore, it has been acknowledged that T cells express the LDL receptor; however, it is not clear whether cholesterol uptake is conducted exclusively via the LDL receptor (17). Cholesterol maintains quiescence in naïve T cells and also paradoxically regulates exit from quiescence by modulating TCR nanocluster formation besides affecting signaling molecules (18)(19)(20). T cell activation induces an increase of intracellular cholesterol for proliferation; however, self-regulation is secured by negative feedback pathways (21)(22)(23)(24). Moreover, cholesterol is also involved in the differentiation and stabilization of the different T cell subsets. While Th1, Th17, γδ T, and cytotoxic T cells require high cholesterol levels, Th2 cells do not (25)(26)(27)(28). Paradoxical effects have also been observed in Tregs and memory T cells (29,30). There are indications that CD8 memory T cells might require suppression of the cholesterol pathway, while, in contrast, CD4 memory T cells depend upon enhanced cholesterol levels (11,31,32).
So, being confronted with very paradoxical findings in complex environments, we aimed to straightforwardly investigate the effects of cholesterol on T cell subsets. We focused on LDL, since LDL emerged as the most significant serum lipid associated with response to immunotherapy in our and another preceding study (6,33).
The LDL dosages for treatment referring to medium control were chosen according to LDL serum levels and their estimated tissue levels in responders (LDL high ) and non-responders (LDL low ) to anti-PD-1 checkpoint therapy (6,34).
We analyzed the T cell phenotype, considering checkpoint markers, activation markers, co-stimulatory markers and effector versus memory markers. Furthermore, we investigated T cell metabolism including mitochondrial metabolism, cholesterol uptake, ROS accumulation, cell respiration and acidification. In order to further explore the functional relevance of our findings in the context of tumor immunity and PD-1 blockade, we established a co-culture model with T cells migrating into colorectal cancer HCT116 tumor spheroids.
Cell culture
Buffy coats from healthy donors were obtained from the Department of Transfusion Medicine (University Hospital Regensburg) in form of remnants from routine platelet donations. The donations were approved by the Institutional Ethics Committee of the University of Regensburg (vote number 13-101-0240; 13-101-0238) and are in accordance with the Declaration of Helsinki.
Single-cell metabolic assays
Cytosolic reactive oxygen species (ROS) were determined after surface marker staining by applying 10 µM 2′,7′-dichlorofluorescin diacetate (Sigma Aldrich, D6883) for 20 minutes in a cell culture incubator at 38°C in FACS wash buffer in air tight tubes. Cells were washed with 3 ml cold PBS, resuspended in FACS wash buffer and measured immediately.
Mitochondrial content was assessed by staining with MitoTracker Green FM (Thermo Fisher Scientific, M7514). Cells were incubated with 15 nM MitoTracker and 1.3 µM cyclosporine A in RPMI1640 supplemented with 2 mM L-Glutamine for 1 hour at 37°C in a cell culture incubator. Surface staining was performed afterwards.
Tetramethylrhodamine methyl ester (TMRM) (Thermo Fisher Scientific, T668) is a membrane-permeable, cationic, red-orange fluorescent dye that is enriched in active mitochondria. Cells were incubated with 10 µM TMRM and 1.3 µM cyclosporine A in RPMI1640 supplemented with 2 mM L-Glutamine for 30 minutes at 37°C in a cell culture incubator. Surface staining was performed afterwards.
Cholesterol was determined by Filipin staining after surface marker staining by adding 50 µg/ml Filipin in 500 µl PBS for 45 minutes at room temperature. Cells were washed with FACS wash buffer, resuspended, and measured immediately.
Monitoring of oxygen consumption and pH in-vitro
Cellular oxygen consumption and pH levels in culture medium were determined non-invasively by the PreSens technology (PreSens Precision Sensing GmbH). 0.8x10 6 T cells with anti-CD3/CD28 Dynabeads (with a cell to bead ratio of 1:1, Thermo Fisher Scientific) were seeded in 24-well OxoDish ® OD24 plates without fixation in 1 mL medium under cell culture conditions for the indicated period of time. Data were analyzed using PreSens SDR_v38 software.
Real-time live cell imaging
After 24h co-culture with pre-activated T cells, spheroids were washed and transferred to a fresh 96-Well Poly-HEMA plate with 200 µl RPMI 1640 (GIBCO, 31870-025), 10% fetal calf serum (Sigma, F7524), 2 mM glutamine (PAN Biotech, P04-80100)) and 25 IU/ml IL-2, and labelled with 20 µl/ml Cyto3D ™ Live-Dead Assay Kit dye (TheWell Bioscience, BM01). Plates were incubated in the Incucyte ZOOM live-cell imager (Essen Bioscience, Welwyn Garden City, UK) at 37°C and 5% CO2 and images were acquired (4x or 10x objective) at the indicated time points. Data were analyzed with the Incucyte ZOOM 2020B software (Essen Bioscience) by creating a thresholdbased mask for the calculation of the green object total area (GOTA) of viable cells (green = viable, red = dead).
Statistical analysis
Depending on normal or non-normal distribution, RM one-way ANOVA with Geisser-Greenhouse correction and Dunnett's multiple comparison test or the Friedman test with Dunn's multiple comparison test was performed. Significance was indicated as p < 0.05 *, p < 0.01 **, p < 0.001 *** referring to control. Data were corrected for multiple testing according to Benjamini and Hochberg as indicated (retrieved from https://statistikguru.de/rechner/adjustierung-des-alphaniveaus.html).
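A sketch of how this workflow could look in code is given below; the data are synthetic and purely illustrative, and a Wilcoxon signed-rank test is used as a stand-in for Dunn's post-hoc test in this paired design.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Paired comparison of LDL conditions against medium control (rows: donors,
# columns: control, LDL_low, LDL_high) with a Friedman test, post-hoc tests
# and Benjamini-Hochberg correction. Synthetic data for illustration only.
rng = np.random.default_rng(0)
data = rng.normal(loc=[10.0, 9.5, 8.0], scale=1.0, size=(8, 3))

stat, p_friedman = stats.friedmanchisquare(data[:, 0], data[:, 1], data[:, 2])
print(f"Friedman test: chi2 = {stat:.2f}, p = {p_friedman:.4f}")

# Post-hoc: signed-rank tests versus control, then BH adjustment
p_raw = [stats.wilcoxon(data[:, 0], data[:, c]).pvalue for c in (1, 2)]
reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="fdr_bh")
for name, p, r in zip(["LDL_low", "LDL_high"], p_adj, reject):
    print(f"{name} vs control: adjusted p = {p:.4f}, significant = {r}")
```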
Presence of LDL high balances the metabolic activity in CD4 + and CD8 + T cells
Kishton et al. showed that effector T cells exhibit an enhanced metabolic activity in-vitro, characterized by a high glycolytic activity and reactive oxygen species (ROS) production, resulting in a strong proliferation and cytokine production during expansion. However, upon in-vivo transfer, these cells showed a poor persistence and anti-tumor activity.
On the contrary, T cells exhibiting a balanced metabolic activity and a memory phenotype in-vitro were associated with a high antitumor activity and increased persistence in-vivo (37,38). Furthermore, Gicobi et al. recently demonstrated that resilient T cells, which were resistant in a harsh tumor microenvironment and responsive to immunotherapy, compensated for excessive ROS to maintain metabolic fitness and preserve high cytotoxic capacity (39).
Therefore, we were intrigued to see whether the presence of LDL high was linked to a balanced T cell metabolism.
T cell oxygen consumption was significantly reduced in the presence of LDL high , indicating a reduced turnover by oxidative phosphorylation. Furthermore, the CD4 + T cell subset showed less acidification by trend, however glycolytic activity was preserved ( Figures 1A-H).
T cell proliferation was strongly impaired by LDL high after 96 h; the cell counts (×10⁵/ml) hardly differed from the pre-proliferation cell counts after 48 h (Figures 1I, J).
ROS accumulation was significantly reduced at both time points in both subsets ( Figures 1K, L). However, we did not find any significant differences in the entire CD4 + and CD8 + T cell subset, respectively, concerning the mitochondrial mass, the mitochondrial membrane potential and intracellular cholesterol after 48 h and 96 h (Supplemental Data 3 and 6).
The synopsis of the findings provided strong indications for a balanced metabolism in the presence of LDL high in both T cell subsets. As a balanced metabolic activity has been associated with a central memory phenotype, we investigated for a phenotypical shift in the T cell subsets (37).
LDL high induces a central memory phenotype in the CD4 + T cell subset
Central memory T cells (T CM ) have been shown to exhibit a superior persistence and anti-tumor immunity compared to effector memory T cells (T EM ) (40). T CM were associated with a favorable prognosis in oral squamous cell carcinoma and gastric cancer. Furthermore, a predominance of T CM predicted response to checkpoint therapy in Merkel cell carcinoma (41-43).
CD4 + and CD8 + T cells were stimulated with activating beads and IL-2 in the presence of medium control, LDL 50µg/ml (LDL low ) or LDL 200µg/ml (LDL high ), respectively. Memory markers were analyzed after 48 h and 96 h.
CD4 + T cells shifted toward a CD45RO + CD62L + central memory phenotype in the presence of LDL high after 96 h (Figure 2A).
We also observed a trend towards a T CM phenotype in the CD8 + subset after 96 h, however the data were not significant. No effects were seen after 48 h.
A preliminary experiment also revealed the up-regulation of CD45RO + CCR7 + cells in the CD4 + T cell subset by LDL (Figure 2C); however, there was only a limited fraction of CD62L/CCR7 double-positive cells (Figure 2D), which may be traced back to a mixed memory phenotype or, alternatively, to shedding of CD62L (44, 45).
Furthermore, completing the phenotype, the expression of CD44 was significantly enhanced by LDL high in the CD4 + subset after 96 h ( Figure 2E). Besides its function as an activation marker and high expression on memory cells, CD44 can promote survival and memory cell development in Th1 cells (46). No effects were seen regarding the expression of CD27, CD28 and FOXP3. T cells were mostly positive for CD27 and CD28, and FOXP3 was barely expressed under all conditions (Supplemental Data S1-5: FACS gating, statistics and data points).
In a further step, we investigated the impact of LDL on memory cell-modulating cytokines. IL-21 has been associated with the induction of a central memory phenotype (47,48) and IL-7 and IL-15 have been linked to maintenance of a long-term memory survival (49). Autocrine production of IL-7 and IL-15 has been reported (50,51). Moreover, IL-10 has been linked to suppression of memory development and memory cell responses (52,53). We could not find any evidence for memory cell induction and maintenance by IL-21, IL-7 and IL-15. Secretion of IL-21 was downregulated and IL-7 and IL-15 could not be detected in the presence of LDL high .
However, secretion of IL-10 was significantly impaired by LDL high , possibly thereby enabling memory formation ( Figure 2F, Supplemental Data S6, 7).
Besides the downregulation of ROS and the shift towards a central memory phenotype, the mitochondrial membrane potential has been acknowledged to identify cells with a balanced metabolism and an enhanced stemness for cellular therapy (54). Subtyping for the T CM phenotype after 96 h revealed a significant reduction of the mitochondrial membrane potential in the CD8 + subset (Figure 2B), perhaps indicating resilient T cells with a high cytotoxic capacity (39).
In a further step we also investigated phenotypical changes concerning exhaustion, activation, and co-stimulatory surface markers (Table 1).
Presence of LDL high is associated with a less exhausted phenotype in CD4 + and CD8 + T cells besides upregulation of the co-stimulatory marker CD154 (CD40L) in the CD4 + T cell subset

CD4 + and CD8 + T cells were stimulated with activating beads and IL-2 in the presence of medium control, LDL 50 µg/ml (LDL low ) or LDL 200 µg/ml (LDL high ), respectively. Surface checkpoint markers, co-stimulatory markers and activation markers were analyzed after 48 h and 96 h.
In the group of checkpoint markers, LDL high induced a significant down-regulation of the fraction (%) of PD-1 positive cells in both T cell subsets after 48 h and in the CD4 + subset also after 96 h ( Figure 3A).
The expression (MFI) of PD-1 was significantly reduced in the CD4 + subset after 48 h and in both subsets after 96 h (Figure 3B). The fraction (%) of LAG-3 + T cells was significantly reduced in both T cell subsets after 96 h (Figure 3C); the expression (MFI) of LAG-3 was reduced temporarily in the CD4 + subset after 48 h. High expression of PD-1 and LAG-3 has been shown to be associated with a loss of T cell function (55)(56)(57)(58). Downregulation of these exhaustion and suppression markers, especially LAG-3, may enhance the efficacy of PD-1 blockade (59)(60)(61). Intriguingly, in the presence of LDL low , but not LDL high , PD-L1 was significantly up-regulated in the CD4 + subset after 48 h and 96 h. Although the expression of PD-L1 on CD4 + T cells was associated with an improved PFS in NSCLC patients (62), PD-L1 signaling on human memory CD4 + T cells induced a regulatory phenotype (63). The expression of all other checkpoint markers was not significantly affected by LDL (Supplemental Data S1-5).
In the group of co-stimulatory markers the fraction and expression of CD154 (CD40L) was significantly up-regulated in the CD4 + T cell subset after 96 h (Figures 3D, E). Interaction of CD154 with CD40 has been demonstrated to mediate anti-tumoral immune responses by enhancing the immunogenic cell death of tumor cells, activation of antigen presenting cells, production of proinflammatory factors, co-stimulation of CD4 + and CD8 + T cells, and the tumor cell susceptibility to T cell lysis (64,65).
After 48 h, OX40 + T cells were temporarily reduced in the CD4 + subset under both conditions containing LDL; however, no differences were seen after 96 h. The expression of all other co-stimulatory markers was not significantly affected by LDL. Furthermore, in the group of activation markers and other markers, the expression of CD25 and the LDL receptor was temporarily impaired and CD95 was temporarily up-regulated under both conditions containing LDL, referring to control, after 48 h. No differences were seen after 96 h (Supplemental Data S1-5).
As the T cells exhibited a less exhausted phenotype, we were also intrigued to investigate cytokine secretion. In monoculture, we could not reveal any significant differences for IFNg, TNFa, granzyme B, IL-17 and IL-4, however the production of perforin was enhanced in both T cell subsets by trend (Supplemental Data S3, 6, 7).
To complete our understanding of the phenotypical und functional alterations induced by LDL, we investigated in a further step functional properties in a spheroid model.
LDL high augments checkpoint blockade in a tumor spheroid co-culture model
Spheroids were generated for 4 days. In parallel, T cells were preactivated with anti-CD3/CD28 beads and IL-2. After 2 days of stimulation with anti-PD-1, LDL high or LDL high + anti-PD-1 versus medium control, T cells were added to the spheroids and allowed to infiltrate for 24 h. Subsequently, the co-cultured spheroids were stained with a viability dye (red = dead, green = viable). Fluorescence was monitored for further 48 h (Figure 4, Supplemental Data S10, Video S1).
We did not find any significant differences concerning viability subsequent to the sole addition of an anti-PD-1 antibody or LDL high in comparison to medium control. However, the combination of LDL 200 µg/ml with an anti-PD-1 antibody induced a significant reduction of the normalized spheroid green object total area (GOTA).

FIGURE 4. Referring to control, viable cells were significantly reduced in the HCT116 spheroid tumor model in the presence of LDL high in combination with an anti-PD-1 antibody. Tumor spheroids were generated for 4 days with HCT116 colon carcinoma cells. T cells were freshly isolated and stimulated with anti-CD3/CD28 beads, IL-2 and treated with anti-PD-1, LDL high or LDL high + anti-PD-1 versus medium control.
Discussion
Recently, a debate has been launched on the impact of cholesterol in anti-PD-1 immunotherapy. As the tumor environment is mostly acidic, hypoxic, and glucose-deficient, lipids remain an important source of energy for tumor cells and immune cells. However, lipid metabolism exhibits contradictory roles in the tumor immune response and, besides other lipids, cholesterol emerges as a double-edged sword in tumor immunity (66). In the context of cholesterol and immunotherapy, an association with response (3-6) to therapy versus treatment failure (7,8) was delineated. Other authors interpreted the chain of causation differently and discussed chronic inflammation in the first place, inducing T cell exhaustion, thus leading to cancer and hypercholesteremia as part of the metabolic syndrome, the latter again enhancing T cell exhaustion in the sense of a vicious circle (67).
So, being confronted with very paradoxical findings in complex environments, we aimed to straightforwardly investigate the effects of LDL on T cell subsets. We focused on LDL, since LDL emerged as the most significant serum lipid associated with response to immunotherapy in our and another preceding study (6,33).
The LDL dosages for treatment referring to medium control were chosen based on LDL serum levels and their estimated tissue levels in responders (LDL high ) and non-responders (LDL low ) to anti-PD-1 checkpoint therapy (6,34). In-vitro, we observed the enhancement of a central memory phenotype, downregulation of IL-10 secretion and up-regulation of CD40L in the CD4 + T cell subset. A balanced metabolism, indicated by lowered ROS levels, a preserved glycolytic flux, and a less exhausted phenotype under T cell activation were observed in both T cell subsets; however, a significant downregulation of the fraction of both PD-1 and LAG-3 after 96 h was only observed in the CD4 + subset. All potentially beneficial effects were only significant (or more pronounced) in the presence of LDL high . T cell transfer into a colorectal cancer HCT116 spheroid model revealed a significant reduction of the spheroid viability in the presence of LDL high plus anti-PD-1.
Zuzao et al. have shown that functioning CD4 immunity is essential for response to anti-PD-1 checkpoint therapy. Patients with a high proportion of CD4 + T cells with a central memory phenotype and a low PD-1/LAG-3 co-expression, were responsive to immunotherapy and moreover, a functional CD4 immunity supported the recovery of CD8 immunity, by, amongst others, secreting IFNg and priming dendritic cells via CD40L (59, 68). These findings have been confirmed by further studies, also considering tumor infiltrating T CM and T CM related genes (41-43). The phenotype described by Zuzao et al. is nearly identical to the effects we have seen, however under our conditions (presence of LDL and stimulation) the T cells were mostly positive for CD27 and CD28 as also described by Liu et al. (40). Furthermore, Zuzao et al. did not describe the expression of CCR7 or CD44.
Besides IL-7 and IL-21, IL-15 is one of the commonly known memory-inducing cytokines. Interestingly, during CAR T cell development, the addition of IL-15 alone induced similar beneficial effects, amongst others a reduction of exhaustion, the preservation of a less differentiated memory cell phenotype and a superior anti-tumor response in-vivo (69). The analysis of memory-inducing cytokines was negative in our experimental setting; however, further investigation of the LDL-induced signaling cascade might be expedient and, moreover, the strongly impaired secretion of IL-10 may enable memory phenotype formation.
Moderate levels of ROS, generated from mitochondria and NADPH oxidases, were shown to be crucial for T cell signaling; however, excess amounts of ROS result in mutation and cell damage and were furthermore associated with T cell exhaustion and immunosuppression in the tumor milieu. Cellular anti-oxidants have been reported to be essential for maintaining anti-tumor immunity. T CM express higher anti-oxidant levels than T EM , enabling an enhanced control of tumor growth (70)(71)(72). In the presence of LDL high we observed significantly reduced ROS levels, though possibly also due to the moderated oxygen consumption and presumably decreased oxidative phosphorylation (OXPHOS). A more resilient, less exhausted phenotype and the cytotoxic capacity of T cells have been shown to be determined by balancing ROS (39).
However, observing further metabolic features of central memory induction, we were not able to detect a significantly enhanced mitochondrial mass or a lower mitochondrial membrane potential in the treated T cell populations, maybe due to incubation time or alternatively to culture conditions. Merely subtyping CD8+ T cells for a CM-like phenotype revealed a significantly reduced mitochondrial membrane potential after 96 h. Further research should be conducted, to define the spare respiratory capacity and the role of OXPHOS and fatty acid oxidation (FAO) in LDL treated T cells (37,73). Nevertheless, the state of the LDL high treated T cells induced a superior anti-tumor effect in the HCT116 spheroid model.
Mechanisms, how CD4 + T cells can contribute to anti-tumor immunity have been described. Growth arrest of cancer cells can be achieved by inducing senescence through cytokines like IFNg. Furthermore, CD4 + T cells can induce direct cytotoxicity in MHC II expressing tumor cells (74). Similarly, CD40L can develop cytotoxic effects via CD40 (75). Cytotoxicity via CD40 could be an imaginable mechanism in this model, as HCT116 has been shown to express CD40 (76) and the CD4 + subset significantly up-regulated CD40L.
Upregulation of CD154 has already been acknowledged on platelets in hypercholesteremia (77). Familiar functions associated with CD154 are of anti-tumorigenic nature, ranging from stimulation of antigen presenting cells, activation of immune effector cells, favorable modulation of the tumor environment, enhancement of the immunogenicity of malignant cells, besides the already mentioned direct action against tumor cells by inducing their apoptosis. Furthermore, the CD40-CD40 ligand pathway plays a critical role during rescue of exhausted CD8 T cells (78). Stimulation of this pathway is under consideration for immunotherapy (79). However, as a ligand to newly identified integrins, CD154 may also play a role in cancer pathogenesis, which may be one of the reasons for paradox effects seen with high cholesterol levels in immunotherapy (80).
Furthermore, concerning the paradox effects of cholesterol in literature, LDL also induces potentially inhibitory markers on the T cell subsets and depending upon ligand or receptor expression in the tumor milieu they may have an immunosuppressive effect, or be of no consequence. A significant up-regulation of PD-L1 was identified on the CD4 + subset under the LDL low condition, however also by trend in the presence of LDL high . As already mentioned, PD-L1 signaling on CD4 + memory cells by cross-linking was demonstrated to evoke highly suppressive cells (63), however the expression of PD-L1 on immune cells can on the other hand be predictive of response in some tumor entities (81). For instance, patients with a higher proportion of PD-L1 + T cells at baseline had an improved objective response to PD-1 inhibitor therapy in melanoma and lung cancer (82).
Also, TIGIT was up-regulated in both T cell subsets in the presence of LDL high by trend. Although the upregulation of TIGIT can exert immunosuppressive features in tumor immunity (83, 84), literature revealed TIGIT + CD8 + subsets with cytotoxic properties (85).
Conclusions
In this study we showed that LDL skewed human CD4 + T cells into a memory phenotype, balanced T cell metabolism and reduced exhaustion marker expression in both subsets besides inducing the up-regulation of the co-stimulatory marker CD40L in the CD4 + subset. The changes resulted in an enhanced anti-tumor response in a HCT116 spheroid model under combination therapy with LDL high and an anti-PD-1 antibody.
Further research should be conducted to achieve more understanding regarding changes in T cell metabolism and cell signaling by LDL. Moreover, also the effect of LDL on other lymphocyte populations and myeloid cells needs to be unraveled, in order to sufficiently optimize immunotherapy and adoptive cell transfer. Finally, also the effect of HDL on T cell function and metabolism in immunotherapy has not been understood and needs to be investigated.
Institutional Review Board Statement
Buffy coats from healthy donors were obtained from the Department of Transfusion Medicine (University Hospital Regensburg) in form of remnants from routine platelet donations. The donations were approved by the Institutional Ethics Committee of the University of Regensburg (vote number 13-101-0240; 13-101-0238) and are in accordance with the Declaration of Helsinki.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 2023-01-27T14:30:36.855Z | 2023-01-27T00:00:00.000 | {
"year": 2023,
"sha1": "8c0049c07b1de51008c8752d6d526c5948d07fdf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "8c0049c07b1de51008c8752d6d526c5948d07fdf",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
115356745 | pes2o/s2orc | v3-fos-license | Mode and climatic factors effect on energy losses in transient heat modes of transmission lines
Electrical energy losses are increasing in modern grids. The losses are connected with an increase in consumption. Existing models for estimating electric power losses that consider climatic factors do not allow estimating the cable temperature in real time. Considering weather and mode factors in real time makes it possible to meet the consumer's needs effectively and safely, to minimize energy losses during transmission, and to use electric power equipment efficiently. These factors increase the interest in the evaluation of the dynamic thermal mode of overhead transmission line conductors. The article discusses an approximate analytic solution of the heat balance equation in the transient operation mode of overhead lines based on the least squares method. The accuracy of the results obtained is comparable with the results of solving the heat balance equation of the transient thermal mode with the Runge-Kutta method. An analysis of the effect of mode and climatic factors on the cable temperature in a dynamic thermal mode is presented. The maximum permissible current is calculated for varying weather conditions. The average electric energy losses during the transient process are calculated for changes in wind, air temperature and solar radiation. The parameters having the greatest effect on the transmission capacity are identified.
Introduction
Electricity consumption is constantly growing in the world [1,2]. New generating capacity is systematically introduced to reduce the electricity deficit. In economically developed areas there is a problem of insufficient transmission line capacity during the hours of maximum load consumption. As the current increases, the energy losses increase as well. To increase the transmission capacity and to reduce the losses, new lines are constructed. This measure is financially costly and time-consuming to implement. One solution to the problem is the use of dynamic thermal mode assessment of transmission lines. This method makes it possible to increase the capacity of electric power system equipment [3][4][5][6].
When operating the power system, static thermal models were developed to prevent malfunctions associated with the overheating of power lines when the transmitted power increases [7][8][9]. The models are used to estimate the conductor temperature and the maximum transmission capacity of power lines in design and operation. The heat absorbed by a conductor includes the heat generated by the current flow and the solar heating of the conductor surface (Figure 1). Heat transfer from the conductor into the environment occurs by convection (Qc) and thermal radiation (Qr) [9,10]. Static models of the thermal mode of overhead line conductors evaluate conductor heating and the maximum current based on the worst-case cooling conditions [11][12].
When calculating the permissible load of lines in the stationary mode, the permissible temperature of an aluminum conductor with a steel core is assumed equal to 70 °C. Ambient temperatures can vary from 20 °C to 40 °C. As a general rule, the average temperature of the hottest month is taken for the calculations. The wind speed is 0.2 m/s (calm). The solar radiation power is taken as 1000 W/m², which corresponds most closely to radiation at noon. Real weather conditions during the operation of power lines differ significantly from the worst-case cooling conditions most of the year. Taking this fact into account makes it possible to increase the line capacity [1,13,14]. According to the regulatory documents, a consumer should receive high-quality electricity in full. This requirement shall not be violated despite emerging emergency situations in the system. In case of accidents, some of the lines are disconnected. Line disconnection occurs due to excessive heating of the conductor. This fact makes it necessary to use the dynamic thermal state of the conductors of overhead power lines. This method will allow increasing the capacity of existing lines without modernization, in compliance with all safety requirements [15,16]. The conductor temperature is monitored with direct or indirect temperature control devices. Indirect control devices record in real time the ambient temperature, the wind speed and direction, the solar heating, and the conductor sag and tension [16]. The findings are transferred to the calculation systems.
Mathematical Simulation
When there is no possibility of installing devices for direct conductor temperature measurement, one must use mathematical models that take the mode and climatic factors into account and calculate the conductor temperature in real time. The mathematical model should be based on the heat balance equation of the conductor in the transient temperature mode [9]:

C dΘ/dt = ΔP₀(1 + αΘ) + A_s q_sol d_cn − α_forc π d_cn (Θ − Θ_amb) − ε_п C₀ π d_cn (T⁴ − T_amb⁴),  (1)

which can be written in the compact form

C dΘ/dt = ΔP₀(1 + αΘ) + A_s q_sol d_cn − A_c (Θ − Θ_amb)^k − A_L (T⁴ − T_amb⁴),  (2)

where T and T_amb are the absolute temperatures of the conductor and the ambient air; A_c and A_L are constant coefficients; k is an exponent that depends on the convection conditions; α_forc is the heat transfer coefficient of forced convection; ε_п is the conductor surface emissivity for infrared radiation; C₀ = 5.67·10⁻⁸ W/(m²·K⁴) is the blackbody radiation constant; Θ and Θ_amb are the temperatures of the conductor and the ambient air in °C (T and T_amb are the same in K); A_s is the absorptivity of the conductor surface for solar radiation; q_sol is the solar radiation flux density on the conductor; d_cn is the conductor diameter; ΔP₀ = I²r₀ are the active power losses in the conductor per unit length at Θ = 0 °C; I is the current in the conductor; r₀ is the per-unit-length resistance of the conductor at Θ = 0 °C; α is the temperature coefficient of resistance. The heat capacity per unit length C is given by [9,15]

C = C_sp.Al M_Al + C_sp.st M_st,  (3)

while the convection heat transfer coefficient α_forc is defined by expression (4) of [9,15] as a function of the coefficient k_v characterizing the wind attack angle, the atmospheric pressure P and the wind speed v. Here C_sp.Al and C_sp.st are the specific mass heat capacities of aluminum and steel, and M_Al and M_st are the masses of the aluminum and steel portions of the conductor per unit length.
On the basis of the method of least squares, equation (2) can be transformed to the form

C dΘ/dt = A₁ + A₂Θ + A₃Θ².  (5)

Coefficients A₁, A₂, A₃ are given in [9,15]. Equation (5) can have different solutions depending on the type of the roots of the equation

A₁ + A₂Θ + A₃Θ² = 0.  (6)

The case of real roots of equation (6) is of practical interest:

Θ₁,₂ = (−A₂ ∓ √(A₂² − 4A₁A₃)) / (2A₃).  (7)

On the basis of (7), the solution of (5) can be written as

Θ(t) = (Θ₁ − Θ₂ D e^(−t/T_n)) / (1 − D e^(−t/T_n)),  D = (Θ₀ − Θ₁)/(Θ₀ − Θ₂),  T_n = C / (A₃(Θ₂ − Θ₁)),  (8)

where Θ₀ is the conductor temperature at time t = 0 (initial condition).
The real roots (7) of equation (6) satisfy Θ₁ > Θ₂. Solution (8) is valid only when Θ₀ > Θ₂. Calculations show that the temperature Θ₂ has strongly negative values not exceeding the ambient temperature.
The parameter T_n determines the time scale (inertia) of the process. It is analogous to the time constant of a standard exponential function, although its quantitative meaning is more involved.
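To illustrate the comparison stated in the abstract, the following sketch evaluates the analytic solution (8) against a Runge-Kutta integration of the quadratic balance (5). The coefficients A₁, A₂, A₃ and the heat capacity are illustrative placeholders, chosen so that equation (6) has real roots Θ₁ > Θ₂ with a strongly negative Θ₂; real values follow from [9,15].

```python
import numpy as np
from scipy.integrate import solve_ivp

C = 1300.0                           # heat capacity per unit length, J/(m*K) (assumed)
A1, A2, A3 = 2000.0, -20.0, -0.1     # W/m, W/(m*K), W/(m*K^2) (assumed placeholders)
theta0 = 15.0                        # initial conductor temperature, deg C

disc = np.sqrt(A2**2 - 4 * A1 * A3)
th1, th2 = (-A2 - disc) / (2 * A3), (-A2 + disc) / (2 * A3)  # th1 > th2 for A3 < 0
Tn = C / (A3 * (th2 - th1))
D = (theta0 - th1) / (theta0 - th2)

def theta_analytic(t):
    """Analytic transient temperature, equation (8)."""
    e = D * np.exp(-t / Tn)
    return (th1 - th2 * e) / (1 - e)

# Runge-Kutta reference solution of C d(theta)/dt = A1 + A2*theta + A3*theta^2
sol = solve_ivp(lambda t, th: (A1 + A2 * th + A3 * th**2) / C,
                (0.0, 3600.0), [theta0], method="RK45", dense_output=True)

print("t, s    analytic   Runge-Kutta")
for ti in np.linspace(0.0, 3600.0, 7):
    print(f"{ti:6.0f}  {theta_analytic(ti):9.3f}  {sol.sol(ti)[0]:9.3f}")
```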
The average temperature Θ_av and the electric energy losses ΔW in a three-phase line of length l during the time T_p are determined with the equations

Θ_av = (1/T_p) ∫₀^T_p Θ(t) dt,  (9)

ΔW = 3 I² r₀ (1 + α Θ_av) l T_p.  (10)
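The averaged quantities (9) and (10) then follow by numerical integration of the analytic temperature curve, as in this standalone sketch; the conductor and line data are rough assumptions rather than the exact Lynx 175 mm² parameters of Table 1.

```python
import numpy as np
from scipy.integrate import quad

# Average temperature and three-phase losses over T_p; all values are assumed.
A1, A2, A3, C = 2000.0, -20.0, -0.1, 1300.0
theta0, Tp = 15.0, 3600.0            # initial temperature (deg C), period (s)
I, r0, length = 400.0, 1.7e-4, 50e3  # current (A), ohm/m at 0 deg C, line length (m)
alpha = 4.3e-3                       # temperature coefficient of resistance, 1/K

disc = np.sqrt(A2**2 - 4 * A1 * A3)
th1, th2 = (-A2 - disc) / (2 * A3), (-A2 + disc) / (2 * A3)
Tn = C / (A3 * (th2 - th1))
D = (theta0 - th1) / (theta0 - th2)
theta = lambda t: (th1 - th2 * D * np.exp(-t / Tn)) / (1 - D * np.exp(-t / Tn))

theta_av = quad(theta, 0.0, Tp)[0] / Tp                              # equation (9)
dW = 3 * I**2 * r0 * (1 + alpha * theta_av) * length * Tp / 3.6e6    # equation (10), kWh

print(f"average temperature = {theta_av:.1f} deg C, losses = {dW:.0f} kWh")
```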
Results of numerical simulation
The developed approach for the analysis of transient thermal modes of overhead transmission lines is implemented in the form of an algorithm and a calculation program. Numerical simulation was conducted for the conductor ACSR Lynx 175 mm². The conductor parameters and the conditions of the numerical experiment are presented in Table 1. The data given in Table 2 were obtained by carrying out a numerical experiment. The most informative results are the curves showing how the stationary mode is reached when any parameter changes. The numerical experiment shows that the parameter T_n remains constant when one of the parameters affecting the conductor temperature is changed, provided that the remaining parameters stay fixed. Besides determining the final temperature of the transient thermal mode, the proposed model allows deriving the average temperature during the conductor cooling or heating. Figure 2 shows the conductor heating and cooling curves for a time interval of 60 min and a wind speed of 0.2 m/s. The current changes instantaneously from 0 A to 200 A, from 200 A to 500 A and from 500 A to 0 A. In practice, the current changes constantly, and this change can occur over a wide range; this fact demonstrates the need for a dynamic thermal model of the conductors. Figures 3-8 show the effect of weather conditions on the maximum line current and the average energy losses during the transient process. We consider the effect of climatic factors on the permissible current in detail, starting with the ambient temperature, which can vary over a wide range during the day.
The effect of the ambient temperature (Figure 3) on the load current of the conductors is considered under the conditions that the conductor temperature does not exceed 70 °C and the wind speed is 0.2 m/s. The solar radiation intensity is set equal to 0 W/m².
Figure 3 shows that the current must be reduced as the ambient temperature rises if the other parameters remain unchanged. From the obtained dependence of the current on the air temperature, it can be seen that at a temperature of −40 °C the current exceeds the current at 20 °C by a factor of 1.5. The temperature of 20 °C is accepted as the base case. At 40 °C the current is 95 A less than the base value. Thus, an improvement in the capacity of transmission lines and their reliability is possible when the actual ambient temperature is considered. Figure 4 shows the dependence of the electric energy losses on the ambient temperature. In the initial time period the conductor temperature was equal to the ambient temperature of 15 °C. Then the current was changed to the maximum value, chosen so that the conductor temperature does not exceed 70 °C. The estimated time was 60 min, and the length of the line was assumed to be 50 km. The wind speed is 0.2 m/s; solar radiation is not considered. Figure 4 shows that the losses are reduced with an increase in the ambient temperature. This is due to the fact that as the ambient temperature increases, the current flowing through the conductor must be reduced so that the conductor temperature does not exceed 70 °C. At −40 °C the current is significantly higher than at +40 °C, and since the average energy losses depend on the square of the current, the graph has the same form as that of the maximum permissible current.
To calculate the maximum current load, the Electrical Installation Code (EIC) proposes coefficients that correspond to certain temperatures. The temperatures indicated in the EIC range from −5 °C to +50 °C (Table 3), sampled in increments of 5 °C. The disadvantage of this approach is that it is not possible to assess the permissible current in real time, because the actual ambient temperature is not restricted to these discrete values. The minimum EIC temperature is limited to −5 °C, although in the Siberian, Ural and Far Eastern federal regions the average temperature of the winter months is −9 °C and below. This fact also argues against using the proposed EIC coefficients when calculating the permissible current and the conductor temperature. At a temperature of −40 °C the difference between the currents is 103.3 A, or 17.76%, i.e., the lines are underloaded. This is due to the fact that as the temperature decreases, the cooling conditions improve. Within the temperature range from −5 °C to +40 °C the differences are no more than 4%. At +50 °C the difference is 25 A, or 9.59%. The analysis showed that the proposed calculation method is preferable for determining the permissible current in real time, since the input temperature may take any value; it also makes it possible to obtain solutions at low temperatures over a wide range. An important factor affecting the capacity of lines is solar radiation: by absorbing solar radiation, the conductor is heated additionally. The permissible current is determined under the conditions that the conductor temperature does not exceed 70 °C, the wind speed is not more than 0.2 m/s, and the ambient temperature is 15 °C. The relation between the solar radiation intensity and the current-carrying capacity of the conductors is shown in Figure 5. When the solar radiation changes by 500 W/m², the current changes by 43 A. This feature makes it possible to increase the load on the line in winter during the evening maximum of consumption. As the conductor heating increases with solar radiation, the maximum current is reduced, which in turn reduces the energy losses (Figure 6), proportional to the square of the current. Another important environmental parameter for calculating the permissible current and the conductor temperature is the wind speed, since wind at ambient temperature significantly contributes to conductor cooling. The dependence of the current load on the wind speed is shown in Figure 7. The minimum wind speed in the calculation was assumed equal to 0.2 m/s, since there is always some movement of air masses near the conductors; this wind speed is classified as calm. The obtained dependence shows that an increase in the wind speed leads to an increase in the permissible current. This is explained by the fact that increasing wind speed improves the cooling conditions, which in turn makes it possible to load the overhead transmission line additionally. Figure 7 shows the wind changing from 0.2 m/s (calm) to 17.1 m/s (strong wind), as in most cases the wind speed varies within this range during the year. With an increase in the wind speed from 0.2 m/s to 17.1 m/s the current increases by a factor of 3.14. The current increase leads to an increase in the electric power losses (Figure 8).
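The dependencies in Figures 3-8 can be reproduced in principle by solving the steady-state form of equation (1) for the current at the permissible temperature. The sketch below does this for a sweep of ambient temperatures; all conductor data and the convection coefficient are rough placeholder values, not the exact parameters of Tables 1-3.

```python
import numpy as np
from scipy.optimize import brentq

# Permissible current from the steady-state heat balance (d(theta)/dt = 0 in
# equation (1)), no solar radiation. Conductor data are ACSR-type placeholders.
d_cn = 19.6e-3        # conductor diameter, m (assumed)
r0 = 1.7e-4           # resistance per meter at 0 deg C, ohm/m (assumed)
alpha = 4.3e-3        # temperature coefficient of resistance, 1/K
eps, C0 = 0.6, 5.67e-8
A_s, q_sol = 0.6, 0.0 # absorptivity and solar flux (no sun in this scenario)
h_c = 10.0            # convection coefficient for calm (0.2 m/s), W/(m^2*K) (assumed)
theta_max = 70.0      # permissible conductor temperature, deg C

def balance(I, theta_amb):
    """Heat gain minus heat loss at the permissible temperature."""
    T, Ta = theta_max + 273.0, theta_amb + 273.0
    gain = I**2 * r0 * (1 + alpha * theta_max) + A_s * q_sol * d_cn
    loss = (h_c * np.pi * d_cn * (theta_max - theta_amb)
            + eps * C0 * np.pi * d_cn * (T**4 - Ta**4))
    return gain - loss

for theta_amb in (-40.0, -20.0, 0.0, 20.0, 40.0):
    I_max = brentq(lambda I: balance(I, theta_amb), 1.0, 5000.0)
    print(f"theta_amb = {theta_amb:6.1f} deg C  ->  I_max = {I_max:6.1f} A")
```

With these placeholder values the permissible current at −40 °C comes out roughly 1.4-1.5 times that at +20 °C, in line with the trend reported above.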
Conclusions
Thermal calculations of power lines are important. Electric power losses, the transmission capacity of electrical networks and the sag depend on the conductor temperature. In turn, the conductor temperature depends on the current load, the ambient temperature, the wind speed and the solar radiation. Since the active resistance of the conductor is temperature dependent, the conductor is a nonlinear element, and its resistance must be calculated on the basis of the heat balance equations.
Mode and climatic factors change continuously over time, which makes the calculation of the dynamic thermal mode of the lines relevant. Considering transient thermal modes is necessary for an accurate determination of the maximum temperature of the conductors, which makes it possible to predict the possible maximum current load more precisely. The proposed method, an analytical solution of the heat balance equation of the conductors in the transient mode, allows determining both the current and the electric power losses during the transient thermal process.
The difference between the maximum current value with no solar radiation and at 500 W/m² is 8.99%, and the difference in the average electric energy losses is 17.02%. When the ambient temperature changes from −40 °C to +40 °C, the difference between the currents is 46.8% and the difference in the average losses is 68.9%. When the wind changes from calm to strong, the current difference is 68.2% and the difference in the average losses is 90.3%. The presented data show that solar radiation affects the transmission capacity to the least extent; the next most important parameter is the ambient temperature, and the wind has the biggest effect. Thus, in determining the ability to transmit more power through existing lines, the ambient temperature and the wind should be considered. | 2019-04-16T13:27:37.188Z | 2017-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "f3a67a5b0842a5245429ac580724a4a7d7edfb78",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/944/1/012016",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3ebb3c209f57a8bee308fd0a69991ba7c918add4",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
204835347 | pes2o/s2orc | v3-fos-license | Histone Deacetylase Expressions in Hepatocellular Carcinoma and Functional Effects of Histone Deacetylase Inhibitors on Liver Cancer Cells In Vitro
Hepatocellular carcinoma (HCC) is a leading cause of death worldwide. Histone deacetylase (HDAC) inhibition (HDACi) is emerging as a promising therapeutic strategy. However, most pharmacological HDACi unselectively block different HDAC classes and their molecular mechanisms of action are only incompletely understood. The aim of this study was to systematically analyze expressions of different HDAC classes in HCC cells and tissues and to functionally analyze the effect of the HDACi suberanilohydroxamic acid (SAHA) and trichostatin A (TSA) on the tumorigenicity of HCC cells. The gene expression of all HDAC classes was significantly increased in human HCC cell lines (Hep3B, HepG2, PLC, HuH7) compared to primary human hepatocytes (PHH). The analysis of HCC patient data showed the increased expression of several HDACs in HCC tissues compared to non-tumorous liver. However, there was no unified picture of regulation in three different HCC patient datasets and we observed a strong variation in the gene expression of different HDACs in tumorous as well as non-tumorous liver. Still, there was a strong correlation in the expression of HDAC class IIa (HDAC4, 5, 7, 9) as well as HDAC2 and 8 (class I) and HDAC10 (class IIb) and HDAC11 (class IV) in HCC tissues of individual patients. This might indicate a common mechanism of the regulation of these HDACs in HCC. The Cancer Genome Atlas (TCGA) dataset analysis revealed that the expression of HDAC4, HDAC7 and HDAC9 as well as of the HDAC class I members HDAC1 and HDAC2 is significantly correlated with patient survival. Furthermore, we observed that SAHA and TSA reduced the proliferation, clonogenicity and migratory potential of HCC cells. SAHA but not TSA induced features of senescence in HCC cells. Additionally, HDACi enhanced the efficacy of sorafenib in killing sorafenib-susceptible cells. Moreover, HDACi reestablished sorafenib sensitivity in resistant HCC cells. In summary, HDACs are significantly but differently increased in HCC, which may be exploited to develop more targeted therapeutic approaches. HDACi affect different facets of the tumorigenicity of HCC cells and appears to be a promising therapeutic approach alone or in combination with sorafenib.
Introduction
Hepatocellular carcinoma (HCC) is the fourth leading cause of cancer-related death worldwide and has a rising incidence [1]. Despite the burden HCC causes, knowledge on the molecular mechanisms of the development and progression of this disease is still limited and treatment options are not optimal. In most cases, HCC is diagnosed in already advanced stages with limited therapeutic options [2]. Sorafenib is the first-line treatment for advanced-stage HCC patients [3]. Although treatment with this multi-target tyrosine kinase inhibitor is associated with overall survival benefits in this group of HCC patients, the response rate is not satisfactory. Therefore, new therapeutic targets as well as an understanding of the molecular mechanisms associated with sorafenib resistance are highly needed to improve treatment options for HCC patients.
It is increasingly recognized that cancer development and progression is significantly affected by epigenetic mechanisms. Among these, histone deacetylases (HDACs) have been shown to play a key role in different hallmarks of cancer including resistance to apoptosis and chemotherapy resistance [4,5].
Previous studies have shown the overexpression of individual HDACs in HCC (subtypes) and their impact on HCC progression. For example, Ler et al. found that HDAC1 and HDAC2 were upregulated in the majority of HCC tissues, and that this upregulation was associated with cancer-specific mortality [6]. Quint et al. described an increased expression of HDACs 1-3 in HCC and a high concordance of expression levels with each other, but only HDAC2 expression had an impact on patient survival [7]. Furthermore, the targeted inhibition of defined HDACs such as HDAC4, HDAC5 or HDAC6 has been shown to inhibit HCC growth and metastasis [8][9][10]. Therefore, the application of HDAC inhibitors (HDACi) is an emerging approach with promising results in preclinical settings [11].
Suberoylanilide hydroxamic acid (SAHA) is an irreversible pan-HDACi, which was approved for the treatment of cutaneous T-cell lymphoma [12]. SAHA effectively inhibits class I and II HDACs, with a higher IC50 for HDAC 4, 7 and 9 [13]. Trichostatin A (TSA) is a reversible pan-HDACi with higher affinity for, and thus inhibition of, class I, HDAC5 and class IIb HDACs, and less effectiveness against HDAC 4, 7, 9 and 11 [14,15]. In HCC, SAHA and TSA have been shown to induce different cell death molecular cascades [11,16-19]. However, studies on further anti-tumorigenic effects as well as effects on chemotherapy resistance are very sparse.
The aim of this study was to systematically analyze the expression of all 11 classical (zinc-dependent) HDACs in HCC cells and tissues. Furthermore, we evaluated the effects of TSA and SAHA on different facets of the tumorigenicity of human HCC cells in functional assays, while also examining the combined effects of HDACi with sorafenib in wildtype as well as sorafenib-resistant HCC cells.
HDAC Expression in HCC Cells and Tissues
First, we analyzed the mRNA expression of all HDAC class I (HDAC 1, 2, 3 and 8), class IIa (HDAC 4, 5, 7 and 9), class IIb (HDAC 6 and 10) and class IV (HDAC 11) members in four different human HCC cell lines (Hep3B, HepG2, Huh7 and PLC) and primary human hepatocytes (PHH) with quantitative PCR. The expression levels of all 11 HDACs were significantly higher in all HCC cell lines compared to PHH (Figure 1A). Only in HepG2 cells, HDAC9 was not significantly increased compared with PHH (Figure 1A). Similarly, the expression of HDAC classes I and II was significantly increased in the 2 murine HCC cell lines Hepa1-6 and Hepa129 compared with primary murine hepatocytes (Figure 1B). Only HDAC11 levels were not increased or even lower, respectively, in murine HCC cells compared with hepatocytes (Figure 1B). The expression levels of different HDACs in HCC patients were analyzed using the Oncomine™ human cancer microarray database [20]. In one dataset comprising 445 HCC patients (Roessler Liver 2 [21]), HDAC 1 and 2 (class I) and HDAC 4, 5 and 9 (class IIa), but not HDAC 3, 6 and 11, were found to be significantly upregulated in HCC as compared to non-HCC liver tissues (Figure 2A). (HDAC 8, 7 and 10 expression data were not available in this dataset.) In a second dataset comprising 75 HCC patients (Wurmbach Liver [22]), HDAC 1 and 2, HDAC 4 and 5 and HDAC11 were significantly upregulated, whereas HDAC 3, 8, 9, 6 and 10 were not altered compared to non-neoplastic liver tissue (Figure 2B). (No data were available for HDAC7 in the Wurmbach Liver dataset.) Next, a third dataset with 185 HCC patient mRNA expression data was analyzed for HDAC expression in HCC (Guichard Liver [23]). Here, HDAC3 (class I) and HDAC 5 and 7 (class IIa) were significantly upregulated in patient HCC tissues, whereas HDAC 1, 2, 9, 10 and 11 expression levels did not significantly differ from non-tumorous liver tissues (Figure 2C). (HDAC 8, 4 and 6 expression data were not available in this dataset.) There was a strong variation in HDAC expression in the non-tumorous liver tissues. Previous studies revealed a pathological imbalance between the acetylation and deacetylation of histones in liver fibrosis [24].
Therefore, we used the University of California Santa Cruz (UCSC) Xena platform with a dataset of 50 non-tumorous liver tissue samples [25] to analyze the correlation between the RNA expression levels of the different HDACs and collagen I (alpha 1), the most abundant extracellular matrix protein in liver fibrosis [25,26]. This analysis revealed a significant correlation of the expression of HDAC 1 (class I), HDAC 4, 7 and 9 (class IIa), as well as HDAC 6 (class IIb) and HDAC 11 (class IV), with the expression of collagen I in non-tumorous liver tissue (Table 1). These data indicate that HDAC expression is increased in fibrotic liver tissue and, thus, could be an explanation for the high variation in HDAC levels in the non-tumorous liver tissues of HCC patients. Furthermore, this finding could explain why the different HDACs are not consistently upregulated in HCC compared to non-tumorous liver tissues in the different datasets of patients. In addition to the non-tumorous liver tissues, there was also a high variation in the expression levels of all HDAC classes within the HCC tissues. We wanted to analyze whether the variation in the different HDACs occurs independently from each other or whether there is a correlation between the expression levels of the different HDACs in HCC tissues. Therefore, we applied RT-qPCR analysis to determine the mRNA expression levels of the different HDACs in eleven human HCC tissue samples. Similar to the in silico analysis, the mRNA expression levels of all HDACs showed a high variation in the HCC tissues of the different patients (Figure S1). Interestingly, we found a significant correlation between the expression levels of all four HDAC class IIa members (HDAC 4, 5, 7 and 9) (Table 2). Moreover, there was a significant correlation of HDAC 2 (class I), HDAC 10 (class IIb) and HDAC 11 (class IV) with most other HDACs. In contrast, the expression levels of HDAC 1 and HDAC 3 (class I) or HDAC 6 (class IIb) did not correlate with the expression of other HDACs (Table 2). The Pearson correlation coefficients are listed in the boxes; the blue boxes indicate correlations that are statistically significant (p < 0.05).
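For readers who wish to reproduce a Table 2-style pairwise analysis, the sketch below computes two-sided Pearson coefficients and p-values for all HDAC pairs in Python. The file name and column layout are hypothetical stand-ins, not files from the study.

```python
import pandas as pd
from scipy import stats

# expr: one row per patient, one column per HDAC (relative mRNA level);
# the file name and layout are hypothetical.
expr = pd.read_csv("hcc_hdac_expression.csv", index_col=0)

genes = list(expr.columns)
for i, g1 in enumerate(genes):
    for g2 in genes[i + 1:]:
        r, p = stats.pearsonr(expr[g1], expr[g2])   # two-sided Pearson test
        mark = "*" if p < 0.05 else ""
        print(f"{g1} vs {g2}: r = {r:+.2f}, p = {p:.3f} {mark}")
```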
Next, we used GEO/GSE datasets (https://www.ncbi.nlm.nih.gov/gds) to gain insights into the gene expression of HDACs during HCC development. First, a precancerous dataset comparing heterozygous and homozygous Mdr2 knockout mice was analyzed. Previous studies had shown that the Mdr2-KO mouse is a valid model for human HCC development [27]. In this dataset, only HDAC 5 mRNA expression was slightly increased in the liver of homozygous as compared to heterozygous knockout mice (Figure 3A). The expression levels of the other HDACs did not show significant differences (Figure 3A). Moreover, HDAC expression levels were analyzed in a GEO/GSE dataset containing data on Trim24-deficient HCC samples and non-tumorous control liver tissues; Trim24-deficient mice spontaneously develop HCC [28]. Also in this model, the expression levels of most HDACs did not differ significantly between HCC and non-tumorous wild-type liver tissue (Figure 3B). Only HDAC 7 expression was slightly higher in tumorous as compared to non-tumorous mouse livers (Figure 3B). Together, these data indicate that the upregulation of HDACs does not occur during HCC development but rather in advanced liver cancer.
Correlation of HDAC Expression with Clinical Prognosis of HCC Patients
Next, we wanted to assess the correlation between tumorous HDAC expression and survival of HCC patients using the "SurvExpress" Biomarker validation for the cancer gene expression database [29]. Based on the prognostic index, also known as the risk score, patients were stratified into "low-risk" and "high-risk" groups [29]. Computational stratification revealed a significant overexpression of HDAC 1 and 2 (class I), as well as HDAC 4, 7 and HDAC 9 (class IIa), in the high- compared to low-risk groups in the LIHC-TCGA HCC dataset (n = 361) (Figure 4A-E). Furthermore, the analysis of this dataset revealed a reduced overall survival in HCC patients with high HDAC 1 and 2 or HDAC 4, 7 and 9 expression (Figure 4A-E).
Furthermore, in the TCGA Liver Cancer dataset (n = 422), computational stratification revealed the significant overexpression of HDAC 1 and 2 as well as HDAC 7 and HDAC 9 in the high-risk group and a correlation of this overexpression with reduced overall survival (Figure S2).
Moreover, the analysis of the Hoshida Golub Liver GSE10143 dataset (n = 162) showed the higher expression of HDAC 1, HDAC 2, and HDAC 9 in the high-risk group, and that this overexpression correlated with poor survival (Figure S2). In summary, these results indicate that enhanced HDAC expression in HCC cells is associated with a poor prognosis.
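For illustration, a risk-score stratification of this kind can be sketched in Python with the lifelines package. Note that SurvExpress derives its prognostic index from a Cox model; the simple z-score sum, the column names and the file name below are stand-in assumptions.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# One row per patient: HDAC expression columns plus 'time' and 'event' (1 = death).
df = pd.read_csv("tcga_lihc_expression_survival.csv")  # hypothetical file name

# Simplified risk score: sum of z-scored expression of the prognostic HDACs.
genes = ["HDAC1", "HDAC2", "HDAC4", "HDAC7", "HDAC9"]
z = (df[genes] - df[genes].mean()) / df[genes].std()
df["risk"] = z.sum(axis=1)
high = df["risk"] > df["risk"].median()        # median split into high/low risk

kmf = KaplanMeierFitter()
for label, grp in [("high risk", df[high]), ("low risk", df[~high])]:
    kmf.fit(grp["time"], event_observed=grp["event"], label=label)
    kmf.plot_survival_function()               # Kaplan-Meier curves per group

res = logrank_test(df[high]["time"], df[~high]["time"],
                   event_observed_A=df[high]["event"],
                   event_observed_B=df[~high]["event"])
print("log-rank p =", res.p_value)
```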
Effect of HDAC Inhibition on the Viability of HCC Cell Lines and Primary Hepatocytes
Next, we wanted to systematically analyze the effects of HDAC inhibition on the tumorigenicity of different HCC cell lines. To dissect the cytotoxic and functional effects, we first determined the dose range of toxicity of the HDAC inhibitors SAHA and TSA. The analysis of LDH release into the supernatant and microscopical analysis revealed that 72 h incubation with SAHA doses up to 1 µM and TSA doses up to 0.25 µM, respectively, did not induce toxic effects in HCC cells (Figure 5A,B and Figure S3A,B). Fluorescence-activated cell sorting (FACS) analysis (propidium iodide/annexin) showed a dose-dependent induction of necrosis and apoptosis in HCC cells after treatment with SAHA (Figure 5C) or TSA (Figure 5D) for 24 h. Notably, treatment with 4- to 10-fold higher doses of both HDACi for 72 h did not cause toxic effects in primary human hepatocytes (Figure 5E).
Functional Effects of HDAC Inhibition on HCC Cell Lines
Next, we analyzed the effect of HDACi on HCC cells in functional assays using subtoxic SAHA and TSA concentrations. A TSA dose of 0.1 µM significantly impaired the growth of HepG2 and Hep3B cells and, at a dose of 0.25 µM, TSA abrogated the proliferation of both HCC cell lines (Figure 6A). The maximal SAHA dose of 1 µM reduced the growth of Hep3B cells to approximately 60% of the control cells, while the same SAHA dose only slightly affected the growth of HepG2 cells (Figure 6A). Next, we analyzed the impact of HDACi on the migration of HCC cells in transwell Boyden chamber assays and observed that SAHA and TSA significantly reduced the directed migration of HCC cells (Figure 6B). Furthermore, SAHA and TSA treatment dose-dependently reduced the number and size of colonies formed by HCC cells in colony formation assays (Figure 6C; Figure S3C,D).
Next, we analyzed the effect of HDACi on features of cellular senescence. We found that SAHA treatment caused a dose-dependent induction of p21 and promyelocytic leukemia protein (PML) expression in HCC cells (Figure 6D). Furthermore, we found a significant increase in ß-galactosidase (ß-Gal)-positive cells after SAHA treatment (Figure 6E). In contrast, TSA treatment did not change p21 and PML expression levels in HCC cells and did not significantly affect the number of ß-Gal-positive HCC cells (Figure 6F,G). In summary, these data also indicate that subtoxic HDACi concentrations exhibit a significant inhibitory effect on the growth and migratory activity of HCC cells.
Effects of HDAC Inhibition on HCC Cell Lines in Combination with Sorafenib
Next, we wanted to analyze the effect of HDAC inhibition on HCC cells in combination with sorafenib, which is currently the only clinically established pharmacological therapy for HCC. Initially, we analyzed the effect of sorafenib on HDAC expression in HCC cells. For this analysis, we wanted to apply non-toxic sorafenib doses to avoid unspecific effects. In line with previous studies, the analysis of LDH release into the supernatant and microscopical analysis showed that sorafenib doses up to 1 µM did not affect the viability of HCC cells for up to 48 h (Figure S4A). At this subtoxic dose, sorafenib did not significantly alter HDAC expression in HCC cells (Figure 7A). However, the combined treatment of HCC cells with 1 µM sorafenib and the HDAC inhibitor SAHA significantly enhanced LDH release into the supernatant as compared with treatment using the same SAHA or sorafenib doses alone (Figure 7B; Figure S4B). In combination with higher sorafenib doses (1 µM and 2 µM), the synergistic effect with SAHA was even more prominent (Figure 7B). FACS analysis confirmed that the combined treatment of HCC cells with sorafenib and SAHA synergistically induced cell death and apoptosis (Figure 7C,D; Figure S4C,D). These data indicate that HDACi can enhance the anti-tumorigenic efficacy of sorafenib.
Next, we wanted to analyze whether HDACi can also affect the efficacy of sorafenib treatment in sorafenib-resistant HCC cells. For this, we used sorafenib-resistant (SR) Hep3B cells (Hep3B-SR) that we had established in a previous study and that proliferate in the presence of up to 10 µM sorafenib [30]. FACS analysis revealed that a sorafenib dose of 4 µM did not significantly affect the number of annexin-positive Hep3B-SR cells (Figure 7E). SAHA had similar effects in Hep3B-SR as in non-resistant Hep3B cells (Figure 7F). However, the combination of SAHA and sorafenib treatment increased the number of annexin-positive cells significantly more than the same SAHA and sorafenib doses alone (Figure 7G). Microscopical analysis confirmed the increased combined toxic effect of SAHA and sorafenib on Hep3B-SR cells (Figure 7H).
Together these data indicate that HDAC inhibition could be a potential therapeutic strategy to enhance the anti-tumorigenic efficacy of sorafenib in HCC cells.
In the search for the underlying mechanisms by which HDACi enhance the efficacy of sorafenib in HCC cells, we assessed the effect of HDACi on Kirsten rat sarcoma (KRAS) expression, because we recently showed that wild-type KRAS is dysregulated in HCC and promotes sorafenib resistance [30]. However, KRAS mRNA expression was not significantly altered in HDACi-treated HCC cells (Figure S5).
The sensitivity of human HCC cells to sorafenib has also been found to be associated with reactive oxygen species production and oxidative stress [31]. Cytochrome P450 2E1 (CYP2E1) is a critical mediator of oxidative stress in hepatocytes and HCC cells [32][33][34], and it has been shown that histone modification is involved in CYP2E1 gene expression in HCC cells [35]. Here, we found a dose-dependent induction of CYP2E1 expression by SAHA in HCC cells (Figure 8A). In contrast, a sorafenib dose of 2 µM had no significant effects on CYP2E1 expression levels (Figure 8A). However, combined treatment with sorafenib and SAHA had a significantly higher inducing effect on CYP2E1 expression than treatment with the same doses of sorafenib and SAHA alone (Figure 8A). Furthermore, the expression of p47phox, an established marker for oxidative stress [36], was significantly enhanced in HCC cells by combined treatment with sorafenib and SAHA compared to the effect of the two drugs alone (Figure 8B). Potentially, the inducing effects on CYP2E1 and oxidative stress, respectively, contribute to the impact of HDACi on the sensitivity of HCC cells to sorafenib.
Discussion
The first aim of this study was to systematically assess the expression levels of all classical HDACs in HCC cells and tissues compared to hepatocytes and non-tumorous liver tissues, respectively. We observed a significant upregulation of all members of the HDAC classes I, II and IV in HCC cell lines. The highest upregulation was observed for HDAC 2 (class I) as well as HDAC 7 and HDAC 9 (class II), with mRNA levels in part 100-fold higher than in hepatocytes. Furthermore, in three different HCC patient datasets, comprising a total of 705 patients, the expression of some HDACs was significantly enhanced in HCC compared with non-tumorous liver tissue. However, with the exception of HDAC 4 and HDAC 5, there was no unified picture of upregulation of the different HDACs in HCC compared to non-tumorous tissues in the three datasets. As one potential mechanism, we observed a significant correlation of several HDACs with collagen expression in the non-tumorous liver tissues of HCC patients. This finding is in line with previous observations of pathologically altered histone acetylation and deacetylation regulated by HDACs in liver fibrosis [24]. Since HCC develops in most cases in a fibrotic liver, already increased HDAC levels in non-tumorous (fibrotic) liver tissue might explain why there was no unified picture of upregulation of HDACs in the different HCC patient datasets. Furthermore, it has to be noted that we did not observe a significant upregulation of HDACs in mouse models of early or pre-cancerous HCC stages. This indicates that the upregulation of HDAC expression occurs in most cases in advanced HCC stages.
In addition to the non-tumorous liver tissues, we also observed a strong variation in the expression of all HDACs in the HCC tissues of different patients. Currently, we can only speculate whether factors such as the etiology of the underlying liver disease or other molecular mechanisms are causing these differences. One limitation of our study is that we focused our systematic analysis of the expression of the 11 classical HDAC members in HCC on the mRNA level. Still, previous studies assessing single HDACs have shown an upregulation of the protein expression of different HDACs, such as HDAC1 [6], HDAC2 [7], HDAC3 [37], HDAC4 [10], HDAC5 [8], HDAC9 [38], HDAC6 [39] and HDAC11 [40], in human HCC tissues compared with non-tumorous liver tissues. Furthermore, the analysis of the protein atlas (https://www.proteinatlas.org/, 09/2019) showed the strong expression of HDAC 1, 2 and 3 (class I), as well as HDAC 4 and 5 (class IIa) and HDAC 10 (class IIb), in most HCC tissues (Figure S6; for HDAC 7 and HDAC 11 no data were available in the protein atlas). Together, these findings confirm the upregulation of different HDACs in HCC also on the protein level.
Interestingly, we observed a significant correlation of the expression levels of all class IIa members (HDAC 4, 5, 7 and 9) as well as of HDAC 10 and HDAC 11 in individual patients. This might indicate that there are high and low HDAC expressers and, potentially, patients in whom HDACs particularly affect disease progression or who particularly respond to therapy, respectively. However, these findings need to be confirmed in further studies with larger patient cohorts, and it also has to be noted that HDACs and their activity are not only regulated at the transcriptional level [41]. Despite the concordance in the upregulation of most HDACs in HCC, it further has to be noted that it is likely that individual HDACs affect HCC progression differently. Here, we observed that in three large datasets of HCC patients, a high expression of the HDAC class IIa members HDAC 4, HDAC 7 and HDAC 9 as well as of the HDAC class I members HDAC 1 and HDAC 2 significantly correlated with poor patient survival. Similarly, previous studies suggested high HDAC 1 [6,42] and HDAC 2 [6] expression as biomarkers for poor prognosis of HCC patients. A recent study by Wang et al., based on immunohistochemical analysis, found higher HDAC 4 expression in advanced HCC in 49 tissue samples examined [43]. Another recent study performed HDAC 9 immunohistochemistry on 37 HCC tissue samples and found that patients with higher HDAC 9 expression had a poorer prognosis [38].
In summary, these data indicate the high expression of some class I and class IIa HDAC members as predictors of poor outcome in HCC patients.
The second aim of this study was to analyze the effect of two different pharmacological HDAC inhibitors (HDACi) on HCC cell lines in functional in vitro analyses. Most previous studies assessing HDACi in HCC focused on (different modes of) cell killing. Here, we wanted to assess the functional effects of HDACi also in the subtoxic range to gain further insights into the role of HDACs in HCC. Both HDACi, TSA and SAHA, significantly reduced the proliferation, colony formation and migratory activity of HCC cells at subtoxic concentrations. Still, the effects of the two HDACi differed quantitatively, which may be related to their dissimilar effectiveness against different HDAC classes [13,14]. Moreover, we observed that SAHA but not TSA promoted features of cellular senescence in HCC cells. In some tumors such as rhabdomyosarcoma [44] and urothelial carcinoma [45], HDAC inhibition has been shown to affect senescence. Fan et al. described that the knockdown of HDAC5 led to an up-regulation of p21 in HCC [8]. However, to the best of our knowledge, no further studies have assessed the impact of individual HDACs on cellular senescence, and the effect of systemic HDACi on senescence has not yet been demonstrated in HCC cells. As comprehensively reviewed by Ramakrishna et al., senescence appears to act as a double-edged sword in HCC development and progression [46]. Further studies are necessary to elucidate whether this is also the case for SAHA effects on cellular senescence in HCC cells.
Finally, we analyzed the combined effect of HDACi and sorafenib on HCC cells. A previous study has shown that the combination of sorafenib and the HDACi resminostat improved overall survival compared to sorafenib monotherapy [47]. Moreover, another study found that the combination of sorafenib and resminostat helped to overcome sorafenib resistance [48]. Still, the molecular mechanism of these combined treatments is only incompletely understood. Yuan et al. proposed that HDACi could sensitize cancer cells to sorafenib treatment by regulating the acetylation level of Beclin-1 and herewith enhancing apoptosis [49]. He et al. showed that the combination of HDACi and sorafenib prevented HCC cell proliferation via G0/G1 cell cycle arrest through the upregulation of p21 expression as well as the downregulation of certain cyclin-dependent kinases (CDK) and cyclins [50]. Lachenmayer et al. identified autophagy as a potential mechanism to make HCC cells susceptible to sorafenib treatment after resistance [51]. Soukupova et al. observed that resminostat shifted cancer cells to a more epithelial phenotype, which might sensitize resistant cells to sorafenib [52]. Here, we observed that HDACi significantly enhanced the sensitivity of HCC cells to cell death induction by sorafenib. Moreover, we assessed HDACi effects in sorafenib-resistant HCC cells [30] and found that HDACi can overcome sorafenib resistance.
Cells and Cell Culture
The HCC cell lines HepG2 (ATCC HB-8065), PLC (ATCC CRL-8024), Hep3B (ATCC HB-8064) and Huh7 were cultured as described [53]. Primary human hepatocytes were isolated by the Biobank of the Department of General, Visceral and Transplant Surgery at the Ludwig-Maximilians University using a two-step collagenase perfusion technique with modifications [54].
The murine Hepa129 cell line originates from a C3H/HeN mouse and was obtained from the NCI-Frederick Cancer Research and Development Center (DCT Tumor Repository). The murine Hepa1-6 cell line (ATCC CRL-1830) was also used. Isolation and culture of primary murine hepatocytes (PMH) were performed as described [55].
For stimulation experiments, cells were treated with trichostatin A (TSA) (Cayman Chemical, Ann Arbor, MI, USA), suberoylanilide hydroxamic acid (SAHA) (Cayman Chemical) and sorafenib (Biovision, Milpitas, CA, USA) at the concentrations and for the durations indicated.
Human Liver Tissues
Paired human HCC tissues and corresponding non-tumorous liver tissues were obtained from patients after partial hepatectomy. Double-coded human liver tissue used in this study was provided by the same Biobank as above. This Biobank operates under the administration of the Human Tissue and Cell Research (HTCR) Foundation. The framework of the HTCR Foundation [56], which includes obtaining written informed consent from all donors, has been approved by the ethics commission of the Faculty of Medicine at the LMU (approval number 025-12) as well as the Bavarian State Medical Association (approval number 11142) in Germany.
In Silico Analysis
The "SurvExpress-Biomarker validation for cancer gene expression" database (http://bioinformatica. mty.itesm.mx:8080/Biomatec/SurvivaX.jsp) was used for the analysis of hepatocellular carcinoma LIHC-TCGA HCC, TCGA Liver Cancer and Hoshida Golub Liver GSE10143 datasets as described [29]. Kaplan Meier curves describe the overall survival of cancer patients with high HDACs expression as compared to low HDACs expression.
The University of California Santa Cruz (UCSC) Xena platform (available from: https://xenabrowser.net/) was used to calculate the correlation between HDAC and collagen I expression in non-tumorous liver tissue using a dataset from The Cancer Genome Atlas (TCGA) (n = 50).
Furthermore, GEO datasets (GEO profiles) of (pre-)cancerous mouse models were used to analyze RNA expression levels of HDACs. First, the murine Mdr2 knockout HCC model was used, with both heterozygous (hetero, n = 6) and homozygous (homo, n = 6) knockouts. The Mdr2-KO mouse serves as a model for the beta-catenin-negative subgroup of human HCCs characterized by down-regulation of multiple tumor-suppressor genes [27]. Moreover, the Trim24-KO murine HCC model was used to determine gene expression in another GEO dataset in wild-type as compared to Trim24 knockout mice. Trim24 knockout mice spontaneously develop HCCs [28].
Immunohistochemical protein expression levels from HDACs in HCC tissue were depicted from the Protein Atlas Consortium website (https://www.proteinatlas.org/).
Analysis of mRNA Expression by Quantitative RT-PCR
RNA isolation from cells and tissues and subsequent reverse transcription were performed as described [57]. Quantitative real-time PCR was performed by applying LightCycler technology (Roche) as described [57], using specific sets of primers as listed in Table 3. Furthermore, for the detection of human p47phox and murine HDAC9, QuantiTect Primer Assays (Qiagen, Hilden, Germany) were used. Amplification of cDNA derived from 18S rRNA was used for normalization.
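For readers unfamiliar with qPCR normalization, the sketch below illustrates relative quantification by the 2^(−ΔΔCt) method against 18S rRNA. The Ct values are made up, and the paper does not state that this exact quantification model was used by the LightCycler software.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative mRNA level by the 2^(-ddCt) method, normalized to a
    reference gene (here 18S rRNA) and to a calibrator sample (e.g. PHH).
    Assumes ~100% amplification efficiency for both assays."""
    d_ct_sample = ct_target - ct_ref            # dCt of the sample
    d_ct_calib  = ct_target_cal - ct_ref_cal    # dCt of the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_calib)   # fold change vs. calibrator

# Example: HDAC2 in an HCC line vs. primary hepatocytes (made-up Ct values)
print(relative_expression(ct_target=22.1, ct_ref=9.8,
                          ct_target_cal=27.4, ct_ref_cal=9.9))
```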
Clonogenic Assay
Clonogenic assays were performed to analyze stem cell behavior and attachment-dependent colony formation and the growth of cancer cells as described previously [30]. Briefly, cells were seeded at low density (1000 cells/well in a 6-well plate) in triplicate, treated with HDACi at the indicated concentrations and incubated for 10 days at 37 °C. Subsequently, cells were rinsed 2 times with PBS, simultaneously fixed for 30 min with 6% v/v glutaraldehyde and stained with 0.5% crystal violet, washed with tap water and dried before microscopical analysis. The number and size of colonies were calculated with CellSens Dimension Software (Olympus Soft Imaging Solutions GmbH, Münster, Germany).
Analysis of Cell Proliferation
Cell proliferation was measured using the xCELLigence system (Roche) according to the manufacturer's instructions. CyQUANT®NF Cell Proliferation Assay Kit was also used for proliferation analysis following the manufacturer's instructions (Invitrogen, Carlsbad, CA, USA). Cells were seeded at a density of 1000 or 2000 cells/well in a 96-well plate.
Analysis of Cell Migration
The migratory activity of HCC cells was quantified using the Cultrex 96 Well Cell Migration assay (Trevigen, Gaithersburg, MD, USA) after treatment with HDAC inhibitors for 4 h, as described [58].
Analysis of Cellular Senescence
Cells were stained with the Senescence β-Galactosidase Staining Kit (Cell Signaling, Danvers, MA, USA) following the manufacturer's instructions. In brief, after 72 h treatment, cells were washed twice with PBS, fixed with the supplied fixation solution, washed twice again and incubated for 16 h with staining solution at 37 °C, washed again with PBS and water and subsequently dried overnight.
Statistical Analysis
Values are presented as the mean ± SEM. Comparisons between groups were made using Student's unpaired t-test or one-way ANOVA, respectively. Correlation analysis was performed using the univariate Pearson's correlation coefficient (two-sided). A p value < 0.05 was considered statistically significant. All calculations were performed using the statistical computer package GraphPad Prism version 6.00 for Windows (GraphPad Software, San Diego, CA, USA).
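The same tests can be reproduced outside Prism; a minimal Python sketch using scipy.stats, with made-up replicate values, is:

```python
import numpy as np
from scipy import stats

# Made-up fold-change replicates (n = 3 biological replicates per group)
control = np.array([1.00, 1.08, 0.93])
saha    = np.array([0.41, 0.37, 0.52])
tsa     = np.array([0.25, 0.31, 0.22])

t, p = stats.ttest_ind(control, saha)        # Student's unpaired t-test
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

f, p = stats.f_oneway(control, saha, tsa)    # one-way ANOVA over 3 groups
print(f"ANOVA:  F = {f:.2f}, p = {p:.4f}")
```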
Conclusions
Our study further confirms the potential beneficial use of HDACi in the treatment of HCC, alone or in combination with sorafenib. Importantly, in addition to promoting cell killing, HDACi appear to impair different facets of the tumorigenicity of HCC cells even at subtoxic doses. Generally, HDAC expression is increased in HCC but there appear to be differences between individual patients and HDAC classes. Furthermore, HDACi showed qualitatively and quantitatively different inhibitory effects on HCC cells in vitro. Together, the reason for these differences as well as their impact on HCC development and progression need to be further elucidated in future studies and could be exploited to develop more targeted therapeutic approaches.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6694/11/10/1587/s1, Figure S1: HDAC expression in tumorous liver tissue samples of HCC patients, Figure S2: HDAC expression levels and prognosis of HCC patients, Figure S3: Effects of HDAC inhibition on the viability of HCC cells, Figure S4: Effects of sorafenib alone and in combination with SAHA on the viability of HCC cells, Figure S5: Kras expression in HCC cells after SAHA and TSA treatment, Figure S6: Immunohistochemical staining of HDACs in HCC tissues from Human Protein Atlas. | 2019-10-23T13:06:37.873Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "501297ea164e2cef2262fe7523d3d8386f261c36",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/11/10/1587/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d2bc07ad86a501627bfa9ddbca34bffc082bef07",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
7080799 | pes2o/s2orc | v3-fos-license | A new approach to unbiased estimation for SDE's
In this paper, we introduce a new approach to constructing unbiased estimators when computing expectations of path functionals associated with stochastic differential equations (SDEs). Our randomization idea is closely related to multi-level Monte Carlo and provides a simple mechanism for constructing a finite variance unbiased estimator with "square root convergence rate" whenever one has available a scheme that produces strong error of order greater than 1/2 for the path functional under consideration.
INTRODUCTION
We have recently developed a general approach to constructing unbiased estimators, given a family of biased estimators. It turns out that the conditions guaranteeing its validity are closely related to those associated with multi-level Monte Carlo methods; see Rhee and Glynn (2012) for details and a more complete discussion of the theory. In this paper, we briefly describe the idea in the setting of computing solutions of stochastic differential equations and provide an initial numerical exploration intended to shed light on the method's potential effectiveness. As we will see below, the conditions under which our estimator produces an algorithm with "square root convergence rate" essentially coincide with the conditions required by multi-level Monte Carlo to converge at the same rate.
In particular, suppose that we wish to compute an expectation of the form α = Ek(X), where X = (X(t) : t ≥ 0) is the solution to the SDE

dX(t) = µ(X(t))dt + σ(X(t))dB(t), (1)

B = (B(t) : t ≥ 0) is m-dimensional standard Brownian motion, k : C[0, ∞) → R, and C[0, ∞) is the space of continuous functions mapping [0, ∞) into R^d. In general, the random variable (rv) k(X) can not be simulated exactly, because the underlying infinite-dimensional object X can not be generated exactly. Instead, one typically approximates X via a discrete-time approximation X_h(·). For example, the simplest such approximation is the Euler time-stepping algorithm given by

X_h((j + 1)h) = X_h(jh) + µ(X_h(jh))h + σ(X_h(jh))(B((j + 1)h) − B(jh)) (2)

that defines X_h at the time points 0, h, 2h, ..., with X_h defined at intermediate values via (for example) linear interpolation. Because (2) is only an approximation to the dynamics represented by (1), the rv k(X_h) is only an approximation to k(X), and consequently k(X_h) is a biased estimator for the purpose of computing α. The traditional means of dealing with this is to intelligently select the step size h and number of independent replications R as a function of the computational budget c, so as to maximize the rate of convergence. However, as pointed out by Duffie and Glynn (1995), such biased numerical schemes inevitably lead to Monte Carlo estimators for α that exhibit slower convergence rates than the "canonical" order c^{−1/2} rate associated with Monte Carlo in the presence of unbiased finite variance estimators. However, several years ago, Giles (2008) introduced an intriguing multi-level idea to deal with such biased settings that can dramatically improve the rate of convergence and can even, in some settings, achieve the canonical "square root" convergence rate associated with unbiased Monte Carlo. His approach does not construct an unbiased estimator, however. Rather, the idea is to construct a family of estimators (indexed by the desired error tolerance ε) that has controlled bias. In this paper, we show how it is possible, in a similar computational setting, to go one step further and to produce (exactly) unbiased estimators. The remainder of this paper is organized as follows: We discuss the idea in Section 2 of this paper, while Section 3 is devoted to an initial computational exploration of this approach.
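A minimal sketch of the Euler scheme (2) for a scalar SDE, written in Python for illustration (the function names are ours, and only the one-dimensional case is shown):

```python
import numpy as np

def euler_path(mu, sigma, x0, h, T, rng):
    """Euler scheme (2): X_h on the grid 0, h, 2h, ..., T (scalar case)."""
    n = int(round(T / h))
    x = np.empty(n + 1)
    x[0] = x0
    for j in range(n):
        dB = rng.normal(0.0, np.sqrt(h))        # increment B((j+1)h) - B(jh)
        x[j + 1] = x[j] + mu(x[j]) * h + sigma(x[j]) * dB
    return x

# Example: geometric Brownian motion dX = 0.05 X dt + 0.2 X dB, X(0) = 1
rng = np.random.default_rng(0)
print(euler_path(lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, 2.0 ** -6, 1.0, rng)[-1])
```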
THE BASIC IDEA
We consider here a sequence (X_{h_n} : n ≥ 0) of discrete-time time-stepping approximations to X that are all constructed on a common probability space in such a way that:

i. Ek(X_{h_n}) → Ek(X) as n → ∞;
ii. E(k(X_{h_n}) − k(X))² = O(h_n^{2r}) for some r > 0,

where O(f(h_n)) represents a function which is bounded by some constant multiple of f(·) as h_n → 0. Assuming, as is often the case for such discretization schemes, that the scheme generates normal rv's that are intended to mirror the Brownian increments of the process B driving the SDE (as in the Euler scheme (2) above), the easiest way to algorithmically obtain an approximating sequence X_{h_n} to X in which the X_{h_n}'s are jointly defined on the same probability space is by successive binary refinement, so that h_n = 2^{−n}. In this setting, the new Brownian motion values (B(j2^{−(n+1)}) : j odd) required at discretization 2^{−(n+1)} can be obtained from the existing values (B(j2^{−n}) : j ≥ 0) by generating B((2k + 1)2^{−(n+1)}) from its conditional distribution given B(k2^{−n}) and B((k + 1)2^{−n}). On the other hand, one's ability to obtain i and ii depends both on the path functional k and on one's choice of discretization scheme.
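The binary refinement step can be sketched as follows; the midpoint law used here is the standard Brownian-bridge conditional distribution, and the code is illustrative rather than from the paper.

```python
import numpy as np

def refine(B, h, rng):
    """Given B on the grid {0, h, 2h, ...}, return B on {0, h/2, h, ...}
    by sampling each midpoint from its Brownian-bridge conditional law:
    B(t + h/2) | B(t), B(t + h)  ~  N((B(t) + B(t + h)) / 2, h / 4)."""
    n = len(B) - 1
    fine = np.empty(2 * n + 1)
    fine[::2] = B                                   # keep existing grid values
    mid_mean = 0.5 * (B[:-1] + B[1:])
    fine[1::2] = rng.normal(mid_mean, np.sqrt(h / 4.0))
    return fine

# Example: refine a Brownian path on [0, 1] from step 1/4 to step 1/8
rng = np.random.default_rng(1)
h = 0.25
B = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(h), 4))])
print(refine(B, h, rng))
```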
In particular, suppose that one has established that the discretization X_h exhibits strong order r. This implies that

E sup_{0≤t≤1} |X_h(t) − X(t)|² = O(h^{2r}).

Thus, if k is (for example) a "Lipschitz final value" expectation so that k(x) = g(x(1)) for some Lipschitz function g : R^d → R, ii is satisfied. In addition, if k is further assumed to be smooth with |k(X)| integrable, then i is satisfied whenever the discretization X_h is known to be of weak order 1 or higher. It should be noted that these conditions are (very) closely related to those that appear in the literature on multi-level Monte Carlo for SDEs.
Note that each of the k(X_{2^{−n}})'s is a biased estimator for α = Ek(X). To obtain an unbiased estimator, observe that ii) implies the existence of p > 0 such that

E(k(X_{2^{−n}}) − k(X))² = O(2^{−pn})

as n → ∞, and hence (in view of ii),

E(k(X_{2^{−n}}) − k(X_{2^{−(n−1)}}))² = O(2^{−pn}).

We now introduce a rv N, independent of B, that takes values in the positive integers and has a distribution with unbounded support (so that P(N > n) > 0 for n ≥ 1). For such a rv N, set

Z = Σ_{n=0}^{N} Δ_n / P(N ≥ n), where Δ_0 = k(X_1) and Δ_n = k(X_{2^{−n}}) − k(X_{2^{−(n−1)}}) for n ≥ 1.

Note that Z is an unbiased estimator for α. This suggests computing α by generating iid replicates of the rv Z. Of course, the "square root" convergence rate of such an estimator is not guaranteed. Given the role that finiteness of the variance plays in obtaining such convergence rates, we next study this issue. Choosing P(N ≥ n) proportional to 2^{−γn} for some γ > 0, the bound above shows that varZ < ∞ whenever γ < p. Finally, Glynn and Whitt (1992) prove that "square root convergence rate" ensues if varZ < ∞ and if the expected computational effort required per replication of Z is finite. The expected computational "work" required for each Z is (roughly) given by

E Σ_{i=0}^{N} t_i,

where t_i is the incremental effort required to compute k(X_{2^{−i}}) (given k(X_1), . . . , k(X_{2^{−(i−1)}})), and hence can be expressed as

Σ_{i≥0} t_i P(N ≥ i). (3)

An approximation to t_i is t_i = 2^{i−1} (the number of additional Gaussian rv's needed to generate X_{2^{−i}}). In order that (3) be finite, we require that γ > 1. Consequently, a square root convergence rate is ensured when 2r > 1 (in which case we can, for example, choose γ = (1 + 2r)/2).
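To make the construction concrete, the following Python sketch draws one replicate of Z for a scalar SDE, using the Milstein scheme (strong order r = 1, as in Section 3) and P(N ≥ n) = 2^{−γn}. Levels are coupled by simulating the Brownian increments once on the finest grid and summing them to coarser grids, which has the same law as successive refinement. The function names and the geometric-Brownian-motion example are our own illustrative choices.

```python
import numpy as np

def single_Z(mu, sigma, dsigma, x0, g, T, gamma, rng):
    """One replicate of the unbiased estimator Z for alpha = E g(X(T)).
    Levels use step sizes T * 2^-n, coupled through one Brownian path,
    with randomization P(N >= n) = 2^(-gamma * n)."""
    N = int(np.log2(1.0 / rng.uniform()) / gamma)      # gives P(N >= n) = 2^(-gamma*n)
    dB = rng.normal(0.0, np.sqrt(T / 2 ** N), 2 ** N)  # finest-grid increments

    def level(n):
        """Milstein endpoint approximation with 2^n steps (strong order 1)."""
        inc = dB.reshape(2 ** n, -1).sum(axis=1)       # coarsen fine increments
        h, x = T / 2 ** n, x0
        for db in inc:
            x += mu(x) * h + sigma(x) * db + 0.5 * sigma(x) * dsigma(x) * (db * db - h)
        return g(x)

    Z, prev = 0.0, 0.0
    for n in range(N + 1):
        cur = level(n)
        Z += (cur - prev) * 2.0 ** (gamma * n)         # Delta_n / P(N >= n)
        prev = cur
    return Z

# Example: dX = 0.05 X dt + 0.2 X dB, so alpha = E X(1) = exp(0.05) ~ 1.0513
rng = np.random.default_rng(0)
zs = [single_Z(lambda x: 0.05 * x, lambda x: 0.2 * x, lambda x: 0.2,
               1.0, lambda x: x, T=1.0, gamma=1.5, rng=rng) for _ in range(20000)]
print(np.mean(zs))
```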
A PRELIMINARY COMPUTATIONAL INVESTIGATION
In this section, we implement our method and compare it to the multilevel Monte Carlo algorithm suggested in Giles (2008). We consider two examples:
The numerical scheme used to solve each of the above SDE's was the Milstein scheme; see Kloeden and Platen (1992). For the above problems, we expect r = 1. For the purpose of this paper, we do not try to optimize the distribution of N, and instead choose N so that P(N ≥ i) = 2^{−3i/2} for i ≥ 1. (In other words, we choose γ as the midpoint between 1 and 2r, although any choice in (1, 2) would provide "square root convergence rate".) To compare our method to the multi-level Monte Carlo (MLMC) method, we take the view (as in Giles, 2008) that the root mean square error (RMSE) ε to be achieved by the algorithm is given. Giles (2008) provides a complete description of how to construct a MLMC estimator achieving approximate RMSE ε; we have implemented that version of MLMC here. For our unbiased estimator, we generate independent and identically distributed (iid) replications of the rv Z until such time as the approximate RMSE is less than or equal to ε. In other words, our estimator for α is

(1/N(ε)) Σ_{i=1}^{N(ε)} Z_i, (4)

where the Z_i's are iid replicates of Z, and

N(ε) = inf{n : (sample variance of Z_1, ..., Z_n)/n ≤ ε²}

is the first time that the sample RMSE of the sample mean drops below ε. We use the stopping rule N(ε) in order to permit easy computation for our estimator, although its use is somewhat unnatural for our estimator (since its use induces bias in our estimator). For each of our two examples, we provide two tables. The first table for each example concerns our new estimator (4); IRE stands for "intended relative error", and k% then corresponds to setting ε = (k/100)|α|. The 90% confidence interval is then obtained by taking 100 replications of (4) for a given ε, computing the sample mean and sample standard deviation of the 100 observations, and constructing a confidence interval based on the normal approximation. The column corresponding to RMSE is the square root of the average, over the 100 observations, of the square of (4) minus EZ. (Thus, RMSE is reporting the actual root mean square error of the estimator, rather than the intended RMSE that the estimator has been designed to attain asymptotically.) The final column, denoted "work", reports a 90% confidence interval for the expected number of normal rv's generated to construct (4), based on our 100 samples. The second table for each example provides a corresponding set of values for the MLMC estimator.
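A sketch of this stopping rule, reusing the single_Z function from the sketch above (the minimum sample size is an assumption added for numerical stability, not part of the paper's specification):

```python
import numpy as np

def estimate(sample_Z, eps, min_n=100):
    """Estimator (4): average iid replicates of Z until the sample RMSE
    of the sample mean falls below eps."""
    zs = [sample_Z() for _ in range(min_n)]          # warm-up draws
    while np.std(zs, ddof=1) / np.sqrt(len(zs)) > eps:
        zs.append(sample_Z())
    return np.mean(zs), len(zs)

# Usage with the single_Z sketch above; eps set to ~2% of alpha = exp(0.05)
rng = np.random.default_rng(1)
draw = lambda: single_Z(lambda x: 0.05 * x, lambda x: 0.2 * x, lambda x: 0.2,
                        1.0, lambda x: x, T=1.0, gamma=1.5, rng=rng)
alpha_hat, n_used = estimate(draw, eps=0.02 * np.exp(0.05))
print(alpha_hat, n_used)
```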
Our results are reasonably comparable to those associated with MLMC, despite the fact that we have done essentially no tuning to optimize the distribution of N. In addition, our estimator is (arguably) easier to implement than MLMC, since (in its current form) there are no algorithmic parameters that are estimated "on the fly" within the algorithm (in contrast to MLMC). Thus, the unbiased estimators introduced here offer a promising computational alternative to MLMC in the presence of SDE numerical schemes having a strong order greater than 1/2. | 2012-07-10T12:48:34.000Z | 2012-07-10T00:00:00.000 | {
"year": 2012,
"sha1": "8bd43d192af918c827b823043927f5406c65c460",
"oa_license": null,
"oa_url": "http://www.informs-sim.org/wsc12papers/includes/files/con549.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fcc823e445edafcf48032680f6ca491d3c2dba90",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics",
"Economics"
]
} |
118836884 | pes2o/s2orc | v3-fos-license | Gravity in 2T-Physics
The field theoretic action for gravitational interactions in d+2 dimensions is constructed in the formalism of 2T-physics. General Relativity in d dimensions emerges as a shadow of this theory with one less time and one less space dimensions. The gravitational constant turns out to be a shadow of a dilaton field in d+2 dimensions that appears as a constant to observers stuck in d dimensions. If elementary scalar fields play a role in the fundamental theory (such as Higgs fields in the Standard Model coupled to gravity), then their shadows in d dimensions must necessarily be conformal scalars. This has the physical consequence that the gravitational constant changes at each phase transition (inflation, grand unification, electro-weak, etc.), implying interesting new scenarios in cosmological applications. The fundamental action for pure gravity, which includes the spacetime metric, the dilaton and an additional auxiliary scalar field all in d+2 dimensions with two times, has a mix of gauge symmetries to produce appropriate constraints that remove all ghosts or redundant degrees of freedom. The action produces on-shell classical field equations of motion in d+2 dimensions, with enough constraints for the theory to be in agreement with classical General Relativity in d dimensions. Therefore this action describes the correct classical gravitational physics directly in d+2 dimensions. Taken together with previous similar work on the Standard Model of particles and forces, the present paper shows that 2T-physics is a general consistent framework for a physical theory.
I. GRAVITATIONAL BACKGROUND FIELDS IN 2T-PHYSICS
Previous discussions on gravitational interactions in the context of 2T-physics appeared in [1][2][3]. There it was shown how to formulate the motion of a particle in background fields (including gravity, electromagnetism, high spin fields) with a target spacetime in d+2 dimensions with two times. The previous approach was a worldline formalism in which consistency with an Sp(2,R) gauge symmetry produced some constraints on the backgrounds. Those restrictions should be regarded as gauge symmetry kinematical constraints on the background fields, which can be used to eliminate ghosts and redundant degrees of freedom by choosing a unitary gauge if one wishes to do so. Consistent with the notion of backgrounds, the Sp(2,R) constraints by themselves did not impose any conditions on the dynamics of the physical background fields that survive after choosing a unitary gauge.
In the present paper we construct the off-shell field theoretic action for Gravity in d + 2 dimensions, that not only reproduces the correct Sp(2, R) gauge symmetry kinematical constraints mentioned above when the fields are on-shell, but also yields the on-shell or off-shell dynamics of gravitational interactions. This d + 2 formulation of gravity is in full agreement with classical General Relativity in (d − 1) + 1 dimensions with one time as described in the Abstract.
We will use the brief notation GR_d to refer to the emergent form of General Relativity, which is usual GR with some additional constraints that are explained below, while the notation GR_{d+2} is reserved for the parent theory from which GR_d is derived by solving the kinematic constraints. So GR_d can be regarded as a lower dimensional holographic shadow of GR_{d+2} that captures the gauge invariant physical sector that satisfies the Sp(2,R) kinematic constraints. There are however other holographic shadows of the same GR_{d+2} that need not look like GR_d but are related to it by duality transformations. These shadows, and the relations among them, provide additional information about the nature of gravity that is not captured by the usual one-time formulation of physics.
The key element of 2T-physics is a worldline Sp(2,R) gauge symmetry which acts in phase space and makes position and momentum X^M(τ), P_M(τ) indistinguishable at any worldline instant τ [3]. This Sp(2,R) gauge symmetry is an upgrade of worldline τ reparametrization to a higher gauge symmetry. It cannot be realized if the target spacetime has only one time dimension. It yields nontrivial physical content only if the target spacetime X^M includes two time dimensions. Simultaneously, this larger worldline gauge symmetry plays a crucial role to remove all unphysical degrees of freedom in a 2T spacetime, just as worldline reparametrization removes unphysical degrees of freedom in a 1T spacetime. Furthermore, more than two times cannot be permitted because the Sp(2,R) gauge symmetry cannot remove the ghosts of more than 2 timelike dimensions.
We could discuss the field theory for Gravity directly, but it is useful to recall some aspects of the worldline Sp(2,R) formalism that motivates this construction. The general 2T-physics worldline action for a spin zero particle moving in any background field is given by [1]

S = ∫ dτ ( ∂_τX^M P_M(τ) − (1/2) A^{ij}(τ) Q_{ij}(X(τ), P(τ)) ). (1.1)

This action has local Sp(2,R) symmetry on the worldline [1]. The 3 generators of Sp(2,R) are described by the symmetric tensor Q_{ij} = Q_{ji} with i = 1, 2, and the gauge field is A^{ij}(τ). The background fields as functions of spacetime X^M are the coefficients in the expansion of Q_{ij}(X, P) in powers of momentum,

Q_{ij}(X, P) = Q^0_{ij}(X) + Q^M_{ij}(X) P_M + Q^{MN}_{ij}(X) P_M P_N + ··· .
In the current paper we wish to describe only the gravitational background. Therefore, specializing to a simplified version of [1], we take just the following form of Q_{ij}(X, P):

Q_{11} = W(X), Q_{12} = V^M(X) P_M, Q_{22} = G^{MN}(X) P_M P_N. (1.2)

Closure of these generators under the Sp(2,R) algebra restricts the background fields W(X), V^M(X), G_{MN}(X) through the kinematic constraints (1.3-1.9); these include V_M = (1/2) ∂_M W (1.3) and the homogeneity condition V^M ∂_M W = 2W (1.4), together with the equivalent pair of conditions on the metric

£_V G_{MN} = 2 G_{MN}, (1.6)
∇_M V_N + ∇_N V_M = 2 G_{MN}. (1.7)

Here £_V G_{MN} is the Lie derivative of the metric, which is a general coordinate transformation of the metric using the vector V^M(X) as the parameter of transformation,

£_V G_{MN} = V^K ∂_K G_{MN} + G_{KN} ∂_M V^K + G_{MK} ∂_N V^K.

The equivalence of the expressions in (1.6,1.7) is seen by replacing every derivative in (1.6) by covariant derivatives using the Christoffel connection Γ^P_{MN}, such as ∇_P V^N = ∂_P V^N + Γ^N_{PQ} V^Q, and recalling that the covariant derivative of the metric vanishes, ∇_K G_{MN} = 0. We can deduce that the above relations imply that G_{MN} can be written as

G_{MN} = (1/2) ∇_M ∂_N W.

This is proven by inserting the expression for the Christoffel connection, Γ^P_{MN} = (1/2) G^{PQ} (∂_M G_{QN} + ∂_N G_{QM} − ∂_Q G_{MN}), in ∇_M ∂_N W = ∂_M ∂_N W − Γ^K_{MN} ∂_K W. There are an infinite number of solutions [1] that satisfy (1.3-1.9). An example is flat spacetime,

G_{MN} = η_{MN}, V^M = X^M, W = X·X ≡ η_{MN} X^M X^N. (1.10)

This satisfies the Sp(2,R) relations (1.3-1.9). In this case the Sp(2,R) generators are simply

Q_{11} = X·X, Q_{12} = X·P, Q_{22} = P·P. (1.11)

This flat background has an SO(d,2) global symmetry (Killing vectors of the flat metric η_{MN}) whose generators L_{MN} = X_M P_N − X_N P_M commute with the dot products in (1.11).
The phase space X^M, P_M and the background fields W(X), V^M(X), G_{MN}(X) are restricted by the Sp(2,R) relations (1.3-1.9) as well as by the requirement of Sp(2,R) gauge invariance, Q_{ij}(X, P) = 0, in the physical subspace. The latter is derived from the action (1.1) as the equation of motion for the gauge field A^{ij}. This combination of constraints is just the right amount to remove ghosts from a 2T spacetime and end up with a shadow sub-phase-space (x^µ, p_µ) with a 1T spacetime which describes the gauge fixed physical sector. There are no nontrivial solutions if the higher spacetime has fewer than 2 timelike dimensions. This is easy to verify for the flat example (1.10). Furthermore, if the higher spacetime has more than 2 timelike dimensions there are always ghosts. Hence the Sp(2,R) gauge symmetry demands precisely 2 timelike dimensions, no less and no more 2 .
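As a quick symbolic sanity check (ours, not from the paper), one can verify with sympy that the flat generators (1.11) close under the canonical Poisson bracket into the Sp(2,R) algebra with the structure constants used below:

```python
import sympy as sp

d = 2                                   # any d works for this check
dim = d + 2
eta = sp.diag(*([-1, -1] + [1] * d))    # flat metric, signature (d, 2)

X = sp.Matrix(sp.symbols(f"X0:{dim}"))  # contravariant X^M
P = sp.Matrix(sp.symbols(f"P0:{dim}"))  # covariant P_M, conjugate to X^M

def pb(A, B):
    """Poisson bracket {A, B} = sum_M (dA/dX^M dB/dP_M - dA/dP_M dB/dX^M)."""
    return sp.expand(sum(sp.diff(A, X[m]) * sp.diff(B, P[m])
                         - sp.diff(A, P[m]) * sp.diff(B, X[m])
                         for m in range(dim)))

Q11 = (X.T * eta * X)[0, 0]             # X.X
Q12 = (X.T * P)[0, 0]                   # X.P
Q22 = (P.T * eta * P)[0, 0]             # P.P (eta is its own inverse here)

# Sp(2,R): {Q11,Q12} = 2 Q11, {Q11,Q22} = 4 Q12, {Q12,Q22} = 2 Q22
assert sp.simplify(pb(Q11, Q12) - 2 * Q11) == 0
assert sp.simplify(pb(Q11, Q22) - 4 * Q12) == 0
assert sp.simplify(pb(Q12, Q22) - 2 * Q22) == 0
print("flat background closes the Sp(2,R) algebra")
```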
The solution of (1.3-1.9) at the classical level was obtained in [1][2], where it was shown that the worldline action (1.1) reduces (as one of the shadows) to the well known 1-time worldline action of a particle moving in an arbitrary gravitational background field g_µν(x^µ) in d dimensions,

S = ∫ dτ ( ẋ^µ p_µ − (1/2) e(τ) g^{µν}(x) p_µ p_ν ),

where e(τ) is the remaining worldline gauge field. This 1T action has enough well known gauge symmetry to remove ghosts in 1T-physics. This remaining gauge symmetry is part of the original Sp(2,R).
This fixing of gauges to a unitary gauge demonstrates that the Sp(2,R) relations (1.3-1.9) have the right amount of gauge symmetry to remove ghosts. Hence the 2T-physics approach provides a physical theory for gravity formulated directly in the higher spacetime X^M in d + 2 dimensions with two times, in the form of the action (1.1), as long as the background fields W(X), V^M(X), G_{MN}(X) satisfy the Sp(2,R) kinematic constraints (1.3-1.9) that are also formulated directly in d + 2 dimensions.
2 A more general argument that applies to all backgrounds is the following. By canonical transformations that do not change the signature, the first two constraints Q_{11}, Q_{12} can always be brought to the flat form, while Q_{22} carries the backgrounds (second reference in [1]). Then non-trivial solutions require 2 times. Another point is that the signature of the Sp(2,R) parameters, which is the same as SO(1,2) with 1 space and two times, determines the signature of the constraints and of the removable degrees of freedom from X^M, P_M.
Note however that the Sp(2,R) constraints are not enough to give the dynamical equations that the gravitational metric g_µν(x) in (d − 1) + 1 dimensions should satisfy. To do this we must build a field theoretic action in d + 2 dimensions that not only gives correctly the Sp(2,R) kinematic constraints (1.3-1.9), but also gives dynamical equations in d + 2 dimensions for the metric G_{MN}(X) and the auxiliary fields W(X), V^M(X), which in turn correctly reproduce the equations of General Relativity for the metric g_µν(x). This is what we will present in the rest of this paper.
II. GRAVITATIONAL ACTION
The first kinematic equation (1.3) will be imposed from the start, so the auxiliary field V M (X) will not be included as a fundamental one in the action, but instead will be replaced by V M = 1 2 ∂ M W consistent with (1.3). Recall that Q 11 = W (X) = 0 is one of the Sp(2, R) constraints of the worldline theory. To implement this constraint covariantly in d + 2 dimensions we follow the methods that were successful in flat space [4][5], namely we include a delta function as part of the volume element δ (W (X)) d d+2 X in the definition of the action of 2T field theory 3 . The field W will appear in other parts of the action as well. In flat space W (X) was a fixed background W flat (X) = X · X, but in the present case it is a field that will be allowed to vary as any other. In addition to W (X) and G M N (X) we will also need the dilaton field Ω (X) in order to impose consistency with the kinematic constraints (1.3-1.9) required by the underlying Sp(2, R) . The dilaton plays a similar role even in flat 2T field theory, especially when d = 4 [5]. Our proposed action for the 2T gravity triplet G M N , Ω, W is given in Eqs. (2.1-2.4). 3 Some studies for conformal gravity in 4+2 dimensions using Dirac's approach to conformal symmetry [8]- [19] also use fields in 4+2 dimensions and include a delta function [17][19] (see also [13]). Their focus is conformal gravity, aiming for and constructing a totally different action. While we have some overlap of methods with [17][19], we have important differences right from the start. They impose kinematic constraints as additional conditions that do not follow from the action, as we did also in our older work [2]. These are related to the conceptually more general Sp(2,R) constraints in 2T-physics. The new progress in 2T field theory since [4][5] is to derive the constraints as well as the dynamics from the action, without imposing them externally. In our present work the unusual piece of the action S W , with W a field varied like any other, is the new crucial ingredient in curved space that allows us to derive all Sp(2, R) constraints from the action, and leads to the new physical consequences.
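The explicit action of Eqs. (2.1-2.4) was lost in extraction, so only its structure can be indicated here. Consistent with the text, it splits into three pieces: S G and S Ω carry δ(W) in the volume element, while S W carries δ′(W); the detailed integrands (denoted schematically by L below) must be taken from the source:

```latex
S = S_{G} + S_{\Omega} + S_{W},\qquad
S_{G},\, S_{\Omega} \sim \gamma \int d^{d+2}X \,\sqrt{G}\;
   \delta(W)\, \mathcal{L}\bigl(G_{MN},\Omega\bigr),\qquad
S_{W} \sim \gamma \int d^{d+2}X \,\sqrt{G}\;
   \delta'(W)\,\Omega^{2}\,(\cdots).
```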
Note that the last term in the action S W contains δ ′ (W ) rather than δ (W ) . The overall constant γ is a volume renormalization constant that also appears in flat 2T field theory ([5][6][7]), and is specified after Eq. (7.19). Demanding consistency with the Sp(2, R) kinematic constraints (1.3-1.9) will fix the constant a uniquely to a = (d − 2)/(8(d − 1)), as given in Eq. (2.5). As will be explained below, for this special value of a, the "conformal shadow" in d dimensions has an accidental local Weyl symmetry (even though the d + 2 theory does not have it).
The action above is a no-scale theory. The dimensionful gravitational constant will develop spontaneously from a nonzero vacuum expectation value of the dilaton Ω. The corresponding Goldstone boson, as seen by observers in d dimensions, is gauge freedom removable by the accidental Weyl gauge symmetry.
The various factors in the action involving powers of Ω are determined as follows. We assign engineering dimensions for X M , G M N , Ω, W, which are consistent with their flat counterparts in (1.10), as given in Eq. (2.6). Accordingly, powers of the dilaton Ω are inserted as shown to ensure that the action is dimensionless, dim (S) = 0. The underlying reason for this is a gauge symmetry that we called the 2T gauge symmetry in field theory [5], which becomes valid when the factors of Ω are included. The dimensions (2.6) will appear in the Sp(2, R) kinematic equations that follow from the action, and coincide precisely with the kinematic constraints (1.4,1.5) that are required by the worldline Sp(2, R) gauge symmetry. These turn into homogeneity constraints in flat space, when V M flat = X M and X · ∂W flat = 2W flat and X · ∂G M N flat = 0, which are consistent with dim (W ) = 2, dim (G M N ) = 0 respectively as given in (2.6). The consistency of the kinematic equations with each other (equivalently the gauge symmetry) restricts the form of the self-interactions of the scalar to V (Ω) = λ Ω 2d/(d−2) , where the arbitrary constant λ is dimensionless.
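Reconstructing Eq. (2.6) from the statements in the text (dim W = 2 and dim G M N = 0 here, dim Ω = −(d − 2)/2 in Section VIII), with dim X M = 1 as the natural assignment implied by W flat = X · X, the dimensional bookkeeping and the check of the potential read:

```latex
\dim X^{M} = 1,\qquad \dim W = 2,\qquad \dim G_{MN} = 0,\qquad
\dim \Omega = -\tfrac{d-2}{2}.
% Dimensional check of the self-interaction V(\Omega)=\lambda\,\Omega^{2d/(d-2)}:
\dim\Bigl(\lambda\,\Omega^{\frac{2d}{d-2}}\Bigr)
  = -\tfrac{d-2}{2}\cdot\tfrac{2d}{d-2} = -d,
```

which is the correct length dimension (−d) for the potential energy with λ dimensionless.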
III. EQUATIONS OF MOTION FOR G M N
We first concentrate on S G . Using the variational formulas and doing integration by parts as needed, we obtain the variation of S G with respect to the metric. The last term will generate terms proportional to δ (W ) , δ ′ (W ) , δ ′′ (W ) . Additional terms in the action are needed to modify the expressions proportional to δ ′ (W ) , δ ′′ (W ) because requiring δ G (S G ) to vanish on its own would put severe and inconsistent constraints on G M N and Ω that are incompatible with the Sp(2, R) kinematic conditions in (1.3-1.9). This is the first reason for introducing the additional term S W , which miraculously produces just the right structure of variational terms that make the Sp(2, R) constraints (1.3-1.9) compatible with the equations of motion derived from the action. Actually S W performs a few more miracles involving the variations of Ω and W as well, as we will see below.
Thus let us study the variation of S W with respect to δG M N . After an integration by parts this gives the result below. We will also need the variation of S Ω with respect to δG M N , but this contains only δ (W ) . The vanishing of the total variation δ G (S G + S Ω + S W ) yields Eq. (3.9), where a vanishing expression has been added as shown. Next, taking into account the remarks in footnote 4 , we refine the three equations of motion implied by Eq.(3.9). Each field is expanded in powers of W (X) . For this, imagine parametrizing X M in terms of some convenient set of coordinates such that w ≡ W (X) is one of the independent coordinates. Denoting the remaining d + 1 coordinates collectively as u, schematically we can write G M N (X) = G M N (u, w) , Ω (X) = Ω (u, w) and W (X) = w. Then we may expand G M N (u, w) = G M N (u, 0) + · · · , and similarly Ω (u, w) = Ω (u, 0) + · · · . In 2T field theory in flat space, the zeroth order terms analogous to G M N (u, 0) and Ω (u, 0) were the physical part of the field, while the rest, which we called the "remainder", was gauge freedom, and could be set to zero. In this paper we will assume that there is a similar justification for setting the remainders to zero (or some other convenient gauge choice) after the variation of the action has been performed as in (3.9-3.12).
A procedure for dealing with the remainders in this fashion could be justified in the case of 2T field theory in flat space 5 . In any case, setting all the remainders to zero is a legitimate solution of the classical equations of interest in this paper. Proceeding under this assumption, we keep only the zeroth order terms in the expansions (3.13). Then, in view of footnote (4), we obtain the three classical equations of motion implied by Eq.(3.9). We see immediately from Eq.(3.12) the equation of motion proportional to δ ′′ (W ). If we can show that (−6 + ∇ 2 W + ∂W · ∂ ln Ω 2 ) = 2, then (3.16) reproduces the third Sp(2, R) constraint (1.5-1.9). This is proven as follows. The variation of the action with respect to Ω produces on-shell conditions for Ω; among these is Eq.(4.6), ∂W · ∂ ln Ω 2 = 8a (6 − ∇ 2 W ). We insert this in (3.16) and then contract Eq.(3.16) with G M N to obtain an equation for only ∇ 2 W, whose solution is a constant. These lead to the on-shell value (−6 + ∇ 2 W + ∂W · ∂ ln Ω 2 ) = 6 (8a − 1) [(8a − 1) (d + 2) + 1] −1 , which takes the desired value of 2 provided a = (d − 2)/(8(d − 1)) as given by Eq. (2.5). With this unique a we obtain the on-shell values ∇ 2 W = 2 (d + 2) and G M N = 1 2 ∇ M ∂ N W (3.17), which is precisely the third Sp(2, R) kinematic constraint (1.5-1.9).
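The algebra behind the quoted value can be verified directly. With a = (d − 2)/(8(d − 1)) one has 8a − 1 = −1/(d − 1), so

```latex
\frac{6\,(8a-1)}{(8a-1)(d+2)+1}
  = \frac{-6/(d-1)}{-(d+2)/(d-1)+1}
  = \frac{-6/(d-1)}{-3/(d-1)} = 2,
```

confirming that (−6 + ∇ 2 W + ∂W · ∂ ln Ω 2 ) = 2 precisely for this value of a.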
Hence, we have constructed an action consistent with the Sp(2, R) conditions (1.3-1.9), and the condition Q 11 = W (X) = 0. These were the necessary kinematic constraints to remove all the ghosts in the two-time theory for gravity. They produce a shadow that describes gravity in (d − 1) + 1 dimensions as in Eq.(1.12) in the worldline formalism, and also in the field theory formalism as discussed before [2], and as will be further explained below.
The remaining field equation, V (0) M N = 0, is of the Einstein type, with an energy-momentum source T M N (Ω, G) provided by the dilaton field. The unique value of the constant a (2.5) will also be required by additional Sp(2, R) relations, as will be seen below. Under the assumption that the dilaton field Ω is invertible (certainly so if it has a nonzero vacuum expectation value), we have divided by the field Ω to extract T M N . Once all the kinematic constraints obtained above and below are taken into account, this correctly reduces to General Relativity in d dimensions as a shadow (see below). So, S = S G + S Ω + S W is a consistent action that produces the correct gravitational classical field equations directly in d + 2 dimensions.
IV. EQUATIONS OF MOTION FOR Ω
We now turn to the variation of the action with respect to the dilaton Ω to extract its equations of motion. After integrations by parts that produce δ ′ (W ) , δ ′′ (W ) terms, we obtain the result below, where we have added the vanishing expression Ω [16δ ′ (W ) + 8W δ ′′ (W )] = 0 to obtain a convenient form. Including δ Ω (S G ) , which contains only δ (W ) , we obtain the total variation. As in the discussion before, we seek a solution when the remainders of the fields vanish. Then the three on-shell equations are F (2) = F (1) = F (0) = 0, the first of which amounts to the Sp(2, R) kinematic constraints (1.3-1.4). The condition F (1) = 0 produces a kinematic constraint ∂W · ∂ ln Ω 2 = 8a (6 − ∇ 2 W ) for the field Ω, as used in the derivation of Eq. (3.17). After inserting the on-shell value ∇ 2 W = 2 (d + 2) from Eq.(3.17) for the special value of a, the constraint becomes ∂W · ∂ ln Ω 2 = −2 (d − 2). In the flat limit of Eq.(1.10) this reduces to [X · ∂ + (d − 2)/2] Ω = 0, which is a homogeneity constraint on Ω consistent with the assigned dimension of the field Ω in Eq.(2.6).
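The flat-limit statement can be made explicit. With W flat = X · X one has ∂ M W = 2X M and ∇ 2 W = 2(d + 2), so the constraint ∂W · ∂ ln Ω 2 = 8a (6 − ∇ 2 W ) with the special a reduces to

```latex
2\, X\cdot\partial \ln \Omega^{2}
  = 8a\,\bigl(6 - 2(d+2)\bigr) = -2(d-2)
\;\;\Longrightarrow\;\;
\Bigl( X\cdot\partial + \tfrac{d-2}{2} \Bigr)\,\Omega = 0,
```

which is exactly the homogeneity condition matching dim Ω = −(d − 2)/2 in Eq. (2.6).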
Therefore, this is another consistency condition that requires the value of a in Eq.(2.5). We will see below, when we study variations with respect to the field W, that there is a stronger and independent gauge-symmetry argument that uniquely fixes the same value of a.
The dynamical equation for Ω is now determined by setting F (0) = 0 with the special a. Here there is an interesting point to be emphasized. The precise coefficient of ΩR (which is 2a) is the one that would normally appear for the conformal scalar in d dimensions, but note that the Laplacian and the curvature R (G) in our case are in d + 2 dimensions, not in d dimensions.
If the coefficient had been the one appropriate for d + 2 dimensions, namely −d/(4(d + 1)), then there would have been a local Weyl symmetry that could eliminate Ω (X) from the theory by a local Weyl rescaling. However, this is not the case presently. Nevertheless, we will identify later an accidental local Weyl symmetry for the "conformal shadow" in d dimensions (that is, not Weyl in the full d + 2 dimensions). This partially local "accidental" Weyl symmetry will indeed eliminate the fluctuations of Ω (X) in the shadow subspace, while still keeping some dependence of Ω on the extra dimensions. In this way, the special value of a will allow us to eliminate the massless Goldstone boson that arises due to the spontaneous breakdown of scale invariance in the shadow subspace.
V. EQUATIONS OF MOTION FOR W
The part of the action S G + S Ω contains W only in the delta function, so its variation is proportional to δ ′ (W ) . Varying W in S W produces terms proportional to δ ′ (W ) , δ ′′ (W ) and δ ′′′ (W ) , where we have added the vanishing expression Ω 2 [12δ ′′ (W ) + 4W δ ′′′ (W )] = 0 to obtain a convenient form. Thus the δ W variation of the total action has the form δ W (S G + S Ω + S W ) = γ ∫ d d+2 X √ G δW Z (X) , which leads to the equation of motion Z (X) = 0. It is remarkable that, if we use the on-shell kinematic equations of motion for W and Ω, then Z (X) vanishes identically. Therefore minimizing the action with respect to W does not produce any new kinematic or dynamical on-shell conditions for the fields. Hence, the on-shell value of W (X) is arbitrary, indicating the presence of a gauge symmetry only for the special value of a = (d − 2)/(8(d − 1)) .
VI. OFF-SHELL GAUGE SYMMETRY
Let us now prove that indeed there is an off-shell gauge symmetry without using any of the kinematic or the dynamical equations of motion. A gauge transformation of the total action has the form of the field transformations (6.1), with local functions α (X) , β (X) that will be determined below in terms of Λ (X). We collect the coefficients of δ (W ) , δ ′ (W ) , δ ′′ (W ) in the gauge transformation δ Λ S after using the delta function identities wδ ′ (w) = −δ (w) , wδ ′′ (w) = −2δ ′ (w) and wδ ′′′ (w) = −3δ ′′ (w) . We first analyze the term proportional to δ ′′ (W ) . After inserting the off-shell quantities V (2) M N , F (2) , Z (3) of Eqs.(3.12, 4.7, 5.7) we see that the δ ′′ (W ) term can be written as a total divergence 6 plus a term proportional to δ ′ (W ) . The total divergence can be dropped in the integral. Therefore, in the gauge transformation (6.2) the part proportional to δ ′′ (W ) can be eliminated at the expense of adding U (1) δ ′ (W ) to the part proportional to δ ′ (W ) . Now we have 3 functions (α, β, Λ) at our disposal to fix to zero the 2 remaining terms of the gauge transformation (6.2). Clearly there is freedom to fix α, β in terms of an arbitrary Λ to ensure the off-shell gauge symmetry of the action δ Λ S = 0.
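The three delta-function identities used above are instances of a single distributional relation, w δ (n) (w) = −n δ (n−1) (w), which follows by integrating against a smooth test function f:

```latex
\int dw\; w\,\delta^{(n)}(w)\, f(w)
  = (-1)^{n} \frac{d^{n}}{dw^{n}} \bigl[\, w\, f(w) \,\bigr]\Big|_{w=0}
  = (-1)^{n}\, n\, f^{(n-1)}(0)
  = \int dw\; \bigl[ -n\,\delta^{(n-1)}(w) \bigr]\, f(w).
```

Setting n = 1, 2, 3 reproduces wδ′ = −δ, wδ′′ = −2δ′ and wδ′′′ = −3δ′′.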
The analysis of the equations of motion in the previous section had indicated that W (X) was arbitrary on-shell. The discussion in this section shows that this freedom extends also off-shell, since according to (6.1), we can use the gauge freedom Λ (X) to choose W (X) arbitrarily as a function of X.
VII. GENERAL RELATIVITY AS A SHADOW
From the gauge transformations (6.1) we see that W (X) is pure gauge freedom, so it can be chosen arbitrarily as a function of X M before restricting spacetime by the condition W (X) = 0 in d + 2 dimensions. This freedom is related to the production of multiple d dimensional shadows of the same d + 2 dimensional system.
Our action is also manifestly invariant under general coordinate transformations in d + 2 dimensions, which can be used to fix components of the metric G M N (X) . This freedom will also be used in the production of shadows.
To proceed to generate a shadow of our theory in d dimensions it is useful to choose a parametrization of the coordinates X M in d + 2 dimensions in such a way as to embed a d dimensional subspace x µ in the higher space X M . There are many ways of doing this, to create various shadows with different meanings of "time" as perceived by observers that live in the fixed shadow x µ . This was discussed in the past for the particle level of 2T-physics and recently for the field theory level [6] [7]. A particular parametrization which is useful to explain massless particles and conformal symmetry in flat space [8]- [10] as a shadow of Lorentz symmetry in flat (d + 2) dimensions was commonly used in our past work. We will call this the "conformal shadow". The parametrization in this section, which should be understood to correspond to one particular shadow, is a generalization of the conformal shadow to curved space.
We choose a parametrization of X M in terms of d + 2 coordinates named (w, u, x µ ). In the new curved space (w, u, x µ ) , where the basis is specified by ∂ M = (∂ w , ∂ u , ∂ µ ) , we use general coordinate transformations to gauge fix d + 2 functions among the G M N (w, u, x µ ) , namely G wu = 1, G uu = G wµ = 0, so that the metric takes the form (7.1). In this basis we make a choice for W (X) which specifies the conformal shadow, namely we take W (X) = w as one of the coordinates (7.2). We compute ∂ M W (X) in this basis; this determines G ww (w, u, x µ ) = 4w. (7.5) Next we apply the Sp(2, R) kinematical constraint (1.9) which was also derived in field theory in Eq.(3.17). We will use the equivalent form in (1.6). Then we get V M ∂ M = 2w∂ w − 1 2 ∂ u , and the kinematic constraint (1.6) takes the corresponding form. We check that G ww = 4w, G wu = 1, G uu = G wµ = 0 all satisfy these kinematical conditions automatically, while the remaining components, G µu , G µν , must depend on u, x and w only in the following specific form: G µν (w, u, x µ ) = e 4u ĝ µν (x, e 4u w) , (7.8) and G µu (w, u, x µ ) = e 4u γ µ (x, e 4u w) . (7.9) As explained following Eq.(3.13), in an expansion in powers of w only the zeroth order term is kept in our solution. So, for our purposes here G µν (w, u, x µ ) = e 4u g µν (x) and G µu (w, u, x µ ) = e 4u γ µ (x) are independent of w. Even though we have already used up all of the gauge freedom of general coordinate transformations to fix d + 2 functions of (w, u, x µ ) as in Eq.(7.1), there still remains general coordinate symmetry to reparameterize arbitrarily the subspace (u, x µ ) in such a way that the form of the metric in Eq.(7.1) remains unchanged. This allows us to fix d functions of (u, x µ ) arbitrarily as gauge choices. Therefore, for the w independent components of the metric at w = 0 we can make the gauge choice γ µ (x) = 0 (7.10). We remain only with the degrees of freedom of the metric g µν (x) in d dimensions (7.11). There still remains gauge symmetry for general coordinate transformations in the x µ subspace. In this form it is easy to compute the determinant of G M N given in (7.1); a reconstruction is sketched below. As a final check we compute that ∇ 2 W = 2 (d + 2) is also satisfied, as required by Eq.(3.17). The metric G M N (X) given in Eqs. (7.1,7.5,7.10,7.11) shows that, after imposing the kinematic constraints at the classical level, the conformal shadow is described only in terms of the degrees of freedom g µν (x) in d dimensions.
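Collecting the components quoted in the text (G wu = 1, G uu = G wµ = 0, G ww = 4w, and, at w = 0 with the gauge choice γ µ = 0, G µν = e 4u g µν (x)), the gauge-fixed metric of Eq. (7.1) and its determinant can be reconstructed as follows (a sketch, not a verbatim quote of the source equations):

```latex
G_{MN} =
\begin{pmatrix}
  4w & 1 & 0 \\
  1  & 0 & 0 \\
  0  & 0 & e^{4u}\, g_{\mu\nu}(x)
\end{pmatrix},
\qquad
\det G = \det\!\begin{pmatrix} 4w & 1 \\ 1 & 0 \end{pmatrix}
         \cdot \det\!\bigl( e^{4u} g_{\mu\nu} \bigr)
       = -\, e^{4du}\, \det g_{\mu\nu},
```

so that |det G| 1/2 = e 2du |det g| 1/2 , consistent with the overall factor e 2du noted in the next paragraph.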
We now go through similar arguments to impose the kinematic constraint (4.8) for Ω. The solution is Ω (w, u, x) = e (d−2)u φ̃ (x, e 4u w) , in which the zeroth order term in the expansion in powers of w is identified as the physical field φ (x) in d dimensions. After solving the kinematic constraints we have arrived at the conformal shadow with only the degrees of freedom g µν (x) , φ (x) . We can now evaluate the full action for the shadow. The volume element becomes proportional to e 2du ; every term in the Lagrangian density is now independent of w and has the same overall factor e 2du as the only possible dependence on u. Specifically Ω 2 is proportional to e 2(d−2)u and R (G) is proportional to e 4u , so Ω 2 R (G) is proportional to e 2du , etc. Both the w and u dependences are explicit. So the action in d + 2 dimensions produces the shadow action in d dimensions, where the overall renormalization constant γ is chosen so that γ ∫ du = 1. The factor of γ can be interpreted as a renormalization of Planck's constant ℏ, since in the path integral the action appears only in the form S/ℏ.
The shadow Lagrangian in d dimensions, L d (x), takes the form given in (7.22). Recall that the special value of a was required to generate consistently all of the Sp(2, R) kinematic constraints. Then φ (x) is the conformal scalar in d dimensions. As discussed earlier following Eq.(4.9), this action has an accidental local Weyl symmetry, S (g̃, φ̃) = S (g, φ), under the gauge transformation sketched below. This gauge freedom can be used to gauge fix φ (x) except for an overall constant that absorbs dimensions. Assuming φ (x) has a non-zero vacuum expectation value φ 0 , we may write φ 2 (x) = φ 2 0 e (d−2)σ(x) and gauge fix the fluctuation σ (x) = 0. Note that σ (x) would have been the Goldstone boson for dilatations, but in the present theory it is not a physical degree of freedom.
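The lost transformation rule should be the standard local Weyl rescaling under which a conformally coupled scalar-tensor action is invariant; in the conventions used here it would read (a reconstruction, up to the sign convention for the gauge parameter λ(x)):

```latex
\tilde{g}_{\mu\nu}(x) = e^{2\lambda(x)}\, g_{\mu\nu}(x),
\qquad
\tilde{\phi}(x) = e^{-\frac{d-2}{2}\,\lambda(x)}\, \phi(x).
```

Gauge fixing λ(x) then removes the fluctuation σ(x) described in the text.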
We can try to trace back the origin of this accidental Weyl symmetry. It is related to the gauge symmetry discussed in section (VI). That symmetry was already used to gauge fix W (X) = w. There remains leftover gauge symmetry that does not change w, but can change the w independent parts of the fields Ω, G M N which describe the shadow. So, the conformal shadow ends up having the accidental Weyl symmetry.
It is important to emphasize that the action in d + 2 dimensions does not have a Weyl symmetry, therefore Ω could not be removed locally. In fact, as seen from (7.17), even after gauge fixing φ (x) , as well as putting the theory on shell, the original field becomes Ω (w, u, x) = e (d−2)u φ̃ (x, e 4u w) = e (d−2)u [φ 0 + O (w)] , so even on-shell it still depends on the spacetime coordinate u in d + 2 dimensions (also on w before setting w = 0). Thus, the full Ω is not a trivial pure gauge freedom in our theory.
The shadow that emerged with a constant φ 0 has exactly the form of General Relativity with a possible cosmological constant contributed by φ 0 −2 V (φ 0 ) , if this quantity is non-vanishing. What is left behind from φ (x) in the shadow is only the constant φ 0 , of mass scale M (d−2)/2 . This constant cannot be determined within the theory we have outlined so far. With our potential V (φ) in Eq.(2.7), minimizing the action with respect to φ (x) , and then gauge fixing to φ (x) = φ 0 , does not produce a new equation for φ 0 other than the one obtained by minimizing the action with respect to the metric g µν , namely the trace equation that fixes R (g) in terms of φ 0 −2 V (φ 0 ) . An effective potential V (φ) with a non-trivial minimum could determine φ 0 . We assume that a non-trivial minimum arises self-consistently from either quantum fluctuations (dimensional transmutation [20]), or from the completion of our theory into string theory or M-theory (with 2 times). Although we could not determine φ 0 ∼ M (d−2)/2 within the classical considerations here, this φ 0 , which appears as a constant shadow of Ω (X) to observers in x-space, is evidently related to Newton's constant G d , the gravitational coupling κ d , or the Planck scale l p in d dimensions.
VIII. GRAVITATIONAL NON-CONSTANT, NEW COSMOLOGY?
We now outline the coupling of our gravity triplet W, Ω, G M N to matter fields of the type Klein-Gordon (S i (X)), Dirac (Ψ (X)) and Yang-Mills (A M (X)). In flat 2T field theory these must have the engineering dimensions given in [5]. The general 2T field theory of these fields in flat space in d + 2 dimensions was given in [5]. The matter part of the theory in curved space follows from the flat theory in [5] by making the substitutions indicated in Table-1. The dilaton Ω couples to Yang-Mills fields and fermions only as shown in Eq. (8.3); the dilaton disappears in these expressions when d + 2 = 6. In addition, even when d + 2 = 6, the dilaton can also couple to other scalars S i (X) in the potential energy V (Ω, S) , with the only condition that V (Ω, S i ) has length dimension (−d) when dim (Ω) = dim (S i ) = − (d − 2) /2. This is the only place the extra field Φ appeared in flat space in the Standard Model [5], so that field may or may not be the dilaton (Φ = Ω?) 7 .
We now emphasize an important property of the scalars S i (including the Higgs field in the Standard Model). It turns out that, for consistency with the Sp(2, R) conditions (1.3-1.9), the quadratic part of the Lagrangian for any real scalar S i (X) must have exactly the same structure as the one for the dilaton field Ω. So, the quadratic part of the action for any scalar must have the form of the dilaton action S (Ω) = S G (Ω) + S Ω (Ω) + S W (Ω) in Eqs.(2.1-2.4), except for substituting Ω → S i , and except for an overall normalization constant 8 . This structure has been indicated in the table above, where the piece symbolically written as L(W, S 2 i ) or L(W, Ω 2 ) is the piece that contributes to the action S W in Eq.(2.4), which appears with a δ ′ (W ) rather than δ (W ) . Furthermore, the same special a = (d − 2)/(8 (d − 1)) must appear in the action of any scalar.
This last requirement is related to the underlying Sp(2, R), and is most directly understood by analyzing the consistency of the equations of motion for the fields G M N , S i and W in the same footsteps as sections (III-V). The Sp(2, R) constraint is that we must always obtain the same kinematic equations of motion, in particular G M N = 1 2 ∇ M ∂ N W in Eqs.(3.17, 1.9), independent of the field content in the action. This is a strong condition that demands the stated structure for the Lagrangian for any scalar field S i . Of course, in flat space this is immaterial since R (G) is zero, but it has an important physical effect on the meaning of the gravitational constant, as perceived by observers in the shadow worlds in d dimensions, as we will see below.
There remains however the freedom of an overall normalization which, for physical reasons, must be taken as specified in the table above. Namely, for the dilaton, the sign of the term 7 An important additional field that was required when d + 2 = 6 even in flat space was a "dilaton", which was named Φ in [5] and had dimension dim (Φ) = − d−2 2 like any other scalar field Ω, S i . A natural as well as economical assumption (although not necessary) is to identify the scalar field Φ that appeared in the 4 + 2 dimensional Standard Model with the dilaton field Ω = Φ that now appears as part of the gravity triplet W, Ω, G MN . 8 A complex scalar would be constructed from two real scalars ϕ = (S 1 + iS 2 ) / √ 2.
Ω 2 R (G) must be positive since this is required by the positivity condition of gravitational energy in the conformal shadow as seen from Eq. (7.22). Since the dilaton is gauge freedom in the conformal shadow, the sign or normalization of the term 1 2a G M N ∂ M Ω∂ N Ω was not crucial. However, for the remaining scalar fields the sign and normalization of the kinetic term − 1 2 G M N ∂ M S i ∂ N S i must be fixed by the requirements of unitarity (no negative norm fluctuations) and conventional definition of norm.
It is interesting that there is a physical consequence. We consider again the conformal shadow and try to interpret the physical structure for observers in the smaller d dimensional space. The conformal shadow is obtained by the same steps as before by taking W (X) = w. We concentrate only on the scalars and the metric; these fields have the shadows given below. The action in the conformal shadow at w = 0 is then as shown 9 . Due to the special value of a there is one overall local Weyl symmetry which can be used to fix the gauge φ (x) = φ 0 (8.8). 9 We must be careful that the equations of motion derived from this action are consistent with the original equations of motion in d + 2 dimensions. In fact, this is not trivial. The shadow extends into the (w, u) space through first and second order terms in the expansion in powers of w. The Riemann tensor R MN P Q (G) constructed from G MN (w, u, x) contains the first and second order w-modes of g µν even after setting w = 0, because there are derivatives with respect to w. Thus, we emphasize that R µνλσ (G) at w = 0 depends on g µν and its first and second order w-modes, so it is not the same as R µνλσ (g) , and similarly for other components. Consistency with the full set of equations of motion given above requires also the corresponding w-modes of φ and s i . However, all extra modes get determined in terms of only g µν , φ, s i self-consistently through the full set of equations of motion in d + 2 dimensions. The self-consistent dynamics in the shadow space x µ is then determined only by g µν (x) , and the interactions among fields involve only φ (x) and s i (x) . Their consistent interactions, as derived from the original equations of motion, are then described by the shadow action given here. These technical details will be given in a separate paper.
as discussed above. So, φ (x) disappears, while the remaining scalar fields s i (x) are correctly normalized and are physical. The modified Einstein equation that follows from this action is Eq. (8.9), with the energy-momentum tensor given in Eq. (8.10). After using the equations of motion ∇ 2 s i = ∂V /∂s i + 2as i R, the special value of a, and the homogeneity of the potential, φ ∂V /∂φ + Σ i s i ∂V /∂s i = (2d/(d − 2)) V, we compare the trace of this energy-momentum tensor to the trace of Eq.(8.9), (1 − d/2) R = g µν T µν , and solve for R. This is the same result as starting with the equation of motion for φ (x) and then choosing the gauge φ (x) → φ 0 . Therefore the φ equation of motion is recovered from the equations of motion of the other fields, showing consistency.
When the s i are small fluctuations, φ −2 0 approximates the overall factor in T µν . Then the gravitational constant is determined approximately by φ 0 , as specified in Eq.(7.23).
However, if V (φ 0 , s i ) has non-trivial minima that lead to non-trivial vacuum expectation values for some of the s i = v i , then in that vacuum the gravitational constant is determined by (φ 0 2 − Σ i v i 2 ) −1 rather than only φ 0 −2 . The massless Goldstone boson, which is removed by the Weyl symmetry, is then a combination of φ and the scalars s i that developed vacuum expectation values.
Such phase transitions of the vacuum can occur in the history of the universe as it expands and cools down. This is represented by an effective V (φ, s i ) that changes with temperature. So, the various v i may turn on as a function of temperature v i (T ) or, equivalently, as a function of time. Among the phase transitions to be considered are inflation, possible grand unification symmetry breaking, electroweak symmetry breaking, as well as some possible others in the context of string theory, to determine how we end up in 4 dimensions with a string vacuum state compatible with the Standard Model.
It would be interesting to pursue the possibility of a changing effective gravitational constant, as above, since this cosmological scenario is now well motivated by 2T-physics. This scenario may not have been investigated before.
IX. COMMENTS
As expected naively, extra timelike dimensions potentially introduce ghosts (negative probabilities) as well as the possibility of causality violation, leading to interpretational problems. However, 2T-physics overcomes these problems by introducing the right set of gauge symmetries, thus correctly describing the physical world, including the physics of the Standard Model of particles and forces [5] [21], and now General Relativity.
At the same time 2T-physics also gives additional physical information which is not encoded in 1T-physics. This is because according to 2T-physics there is a larger spacetime in d + 2 dimensions X M where the fundamental rules of physics are encoded. These rules include a complete symmetry of position-momentum X M , P M according to the principles of a local Sp(2, R) with generators Q ij (X, P ). This leads effectively to gauge symmetries in d + 2 dimensions that can remove degrees of freedom and create a holographic shadow of the d + 2 universe in d dimensions x µ . There are many such shadows, and since observers in different shadows use different definitions of time, they interpret their observations as different 1T dynamics. However, the shadows are related since they represent the same higher dimensional universe. These predicted relations would be interpreted as dualities by observers that live in the lower dimension x µ that use 1T-physics rules. With hard work, observers in the smaller x µ space could discover enough of these dualities among the shadows to reconstruct the d + 2 dimensional highly symmetric universe. 2T-physics provides a road map for this reconstruction by predicting the properties of the shadows.
Examples of some simple dualities in d dimensions, that arise from flat d + 2 dimensional spacetime, in the context of field theory such as the Standard Model, were discussed in ( [6], [7]). In the flat case, each shadow has SO(d, 2) global symmetry as hidden symmetry, where this SO(d, 2) is the shadow of the global Lorentz symmetry in d + 2 dimensions as identified in Eq.(1.11). So clues of the higher spacetime can also appear within each shadow in the form of hidden symmetries. Examples of these in field theory were also discussed in ( [6], [7]).
In curved spacetime, the details of the shadow as seen by observers stuck in the smaller spacetime x µ depend partially on the choice of W as a function of (w, u, x µ ) . In this paper we discussed the "conformal shadow" defined by W (w, u, x µ ) = w in Eq.(7.2) and the gauge fixed form of the metric (7.1). Together, these define the timeline in the shadow space x µ as some curve embedded in the 2-time spacetime in d + 2 dimensions. A different choice of gauges leads to a different shadow space with a different timeline. The same dynamics in d + 2 dimensions X M tracked as a function of one timeline can appear to be quite different 1-time dynamics relative to another timeline. Evidently, there are many choices that correspond to many embeddings of d dimensional spacetime x µ (with 1 time) into d + 2 dimensional spacetime X M (with 2 times), and these are expected to lead to dualities that relate the different looking 1-time dynamics. Depending on the nature of the higher curved space X M , there could be hidden symmetries that would be seen in each smaller x µ space as clues of the extra space and time.
The kinds of predictions above can be used to generate multiple tests of 2T-physics. This line of investigation is in its infancy and is worth pursuing vigorously.
In addition to the above, the emergent 1T-physics conformal shadow seems to come with certain natural constraints, which remarkably are not in contradiction with known phenomenology so far. On the contrary, they lead to some new guidance for phenomenology: • The Standard Model is correctly reproduced as a shadow 10 , but in addition, the Higgs sector is required to interact with an additional scalar Φ that induces the electroweak phase transition as discussed in [5] (Φ could be the dilaton Ω, but not necessarily, see footnote 7). This leads to interesting physics scenarios at LHC energy scales (an additional new neutral scalar) or cosmological scales (inflaton candidate, dark matter candidate) as suggested in [5] 11 . The supersymmetric 12 version [21] of this 2T-physics feature with extra required scalars leads to richer phenomenologically interesting possibilities.
• The gravitational constant could be time dependent as described in the previous section. This is because according to 2T-physics, if there are any fundamental scalars s i (x) at all, they all must be conformal scalars coupled to the curvature term R with the special coefficient (−a) as in the last line of the table above. It would be interesting to study the effects of this scenario in the context of cosmology.
There are many open questions. In particular quantization in the path integral formalism is still awaiting clarification of the gauge symmetries so that Faddeev-Popov techniques can be correctly applied. Other issues include the question of whether there might be some physical role, either at the classical or quantum levels, for the "remainders" in the expansion of the fields in powers of W, as in Eq.(3.13).
Having accomplished a formulation of gravity as well as supersymmetry in 2T field theory [21] it is natural to next try supergravity. In particular the 2T generalization of 11-dimensional supergravity is quite intriguing and worth a few speculative comments. If constructed, such a theory will provide a low energy 2T-physics corner of M-theory. This would be a theory in 11+2 dimensions whose global supersymmetry can only be OSp(1|64), so it should be related to S-theory [30]. We remind the reader that S-theory gives an algebraic BPS-type setting based 10 The theta term θF * F can be reproduced as a shadow in 3+1 dimensions from 2T field theory in 4+2 dimensions (to appear). So a previous claim of the resolution of strong CP violation without an axion [5] is retracted. 11 Scenarios that include such a scalar field in both theoretical and phenomenological contexts have been discussed independently in recent papers [22]- [27] that mainly appeared after [5]. 12 It was suggested in the second reference in [5] that a conformal scalar of the type Φ, with the required SO(4, 2), could provide an alternative to supersymmetry as a mechanism that could address the mass hierarchy problem. This possibility has been more recently discussed in [28] [29].
on OSp(1|64) for the usual M-theory dualities among its corners, with 11 dimensions or 10 dimensions with type IIA, IIB, heterotic, type-I supersymmetries. A corresponding 2T-physics theory would provide a dynamical basis that could give shadows-type meaning to these famous dualities, as outlined in [31].
Finally, let us emphasize that the fundamental concept behind 2T-physics is the momentum-position symmetry based on Sp(2, R) . Despite the fact that the worldline approach in Eq.(1.1) treats position and momentum on an equal footing, the field theoretic approach that we have discussed blurs this symmetry, although the constraints implied by the Sp(2, R) symmetry in the form of the kinematic constraints were still maintained. There should be a more fundamental approach with a more manifest position-momentum symmetry, perhaps with fields that depend both on X M and P M , and in that case perhaps based on non-commutative field theory. Basic progress along this line that included fields of all integer spins was reported in [32]. If this avenue could be developed to a level comparable to the current field theory formalism, it is likely that it will go a lot farther than our current approach. | 2008-05-11T07:00:25.000Z | 2008-04-09T00:00:00.000 | {
"year": 2008,
"sha1": "ddadb5d26e235f30064d2e7e0fab73c12fa95d22",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0804.1585",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ddadb5d26e235f30064d2e7e0fab73c12fa95d22",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
55556046 | pes2o/s2orc | v3-fos-license | Molecular Detection of Enterococcal Surface Protein ( esp ) Gene in Enterococcus faecalis Isolated from Dental Calculus of Patients in Sari , Iran
Corresponding Author: Hamid Reza Goli Department of Microbiology, Faculty of Medicine, KM 18 Khazarabad Road, Khazar Sq, Sari, Iran. Phone: +98-1133542067 E-mail: goli59@gmail.com Abstract Background: Enterococci are important gram-positive bacteria causing dental calculus in human beings; however, the role of these bacteria in oral cavity is unclear. The aim of this study was to investigate the presence of Enterococcal Surface Protein (esp) gene in Enterococcus faecalis isolated from dental calculus in the city of Sari, Iran. Materials and Methods: In the present study, 207 dental calculus samples were collected from patients. The isolates were identified by growth on Bile Esculin agar, Gram stain, Catalase test, Growth at 6.5% NaCl, PYR and arabinose fermentation test. Antimicrobial susceptibility pattern of the isolates was determined by disk agar diffusion method. The presence of esp gene was assessed by polymerase chain reaction (PCR). Results: Among the 56 (27%) enterococci isolated from dental calculus, 43 (76.7%) were determined as E. faecalis. The resistance rate to ampicillin, vancomycin, tetracycline, ciprofloxacin and erythromycin in E. faecalis isolates was estimated as 13.9%, 4.6%, 11.6%, 6.9% and 13.9%, respectively. The esp gene was detected in 18.6% of E. faecalis isolates. Among the isolates containing esp gene, 33.3%, 50%, 40%, 33.3% and 33.3% of them were resistant to ampicillin, vancomycin, tetracycline, ciprofloxacin and erythromycin, respectively. Conclusion: E. faecalis is an important organism causing dental calculus but the presence of esp gene had no correlation with the resistance to tested antimicrobial agents.
Introduction
Enterococci are gram-positive facultative anaerobic cocci that normally inhabit the gastrointestinal tract of humans and animals (1,2). Moreover, these bacteria are found in other parts of the body such as the oral cavity and vagina, as well as in water, soil, food, plants and insects (2,3). Enterococcus faecalis and Enterococcus faecium are the most common species causing human infections and have frequently been associated with nosocomial infections throughout the world (4). E. faecalis is not part of the normal flora of the mouth but has been observed in diseases such as dental caries, periodontitis and tooth root infections (1). Several virulence factors can cause the accumulation of these bacteria and the initiation of dental infections (4,5). Enterococcal surface protein (ESP) is one of these virulence factors (2). The esp gene encodes ESP, a protein with an iterative structure that promotes bacterial adhesion and biofilm formation. ESP is a high-molecular-weight surface protein containing 1873 amino acids, which has N-terminal, central core, and C-terminal regions. The C-terminal domain contains a hydrophobic membrane region. Recently, it has been assumed that the N-terminal region of ESP participates in interactions with the host, while the central region of this protein plays an important role in the accumulation of the bacteria and hides the protein from the host immune system (6,7). Biofilm is a layer consisting of a mass of bacteria adhering to each other; this layer is embedded in an extracellular polymer matrix made by the bacteria (8). For example, enterococci form biofilms in previously filled dental root canals (1). Biofilm protects the bacteria against environmental changes, the host immune response and antimicrobial agents, thus hindering the treatment of infections caused by biofilm-producing bacteria (6,9). About 80% of bacterial infections are associated with biofilm production (10). In vivo and in vitro experiments demonstrated that the minimum bactericidal concentration (MBC) and minimum inhibitory concentration (MIC) of biofilm-forming bacteria are about 1-1000 times higher than those of planktonic cells, resulting in limitations in the treatment of infections caused by these strains (8). Biofilm promotes bacterial colonization of medical instruments and dental surfaces. This leads to the accumulation of other bacteria that produce acid through their metabolism, initiating dental calculus (8,10-12). The aim of this study was to determine the prevalence of the esp gene in E. faecalis isolated from dental calculus in patients referred to a dental clinic.
Sample collection
This study was conducted on 207 samples collected from individual (non-repeated) patients referred to the Mostafavian dental clinic in Sari, Iran. The project was explained to the patients and sampling was carried out with their consent. The samples were taken with sterile swabs from the patients' dental calculus. The swabs were then placed immediately in Brain heart infusion (BHI) broth and transferred to the laboratory.
Microbiological and antimicrobial susceptibility testing
The samples were cultured on blood agar containing 5% sheep blood under sterile conditions, and incubated for 24 hours at 37°C. Suspected enterococcal colonies were confirmed by morphological comparison with colonies of the standard strain Enterococcus faecalis ATCC 29212 (Pasteur Institute of Iran). The pure cultures of suspected colonies were sub-cultured on Bile Esculin agar and incubated for 48 hours at 37°C. Moreover, Gram staining, the catalase test, growth at 6.5% NaCl and the PYR test were performed for early identification of enterococci (13). In this study, the arabinose fermentation test was employed to differentiate E. faecalis and E. faecium (14). The susceptibility pattern of the isolates against ampicillin, vancomycin, tetracycline, ciprofloxacin and erythromycin was determined by the disk agar diffusion method according to the Clinical and Laboratory Standards Institute (CLSI) guidelines (15). E. faecalis ATCC 29212 was chosen as the control strain in the antimicrobial susceptibility testing.
DNA extraction
Genomic DNA was extracted using a DNA extraction kit (Thermo Scientific, Waltham, Massachusetts, United States) according to the manufacturer's instructions.
Detection of the esp gene
PCR was used to detect the presence of the esp gene in E. faecalis isolated from dental calculus. The sequences of the primers used in the present study are shown in Table 1. The amounts of each primer, DNA and master mix (Amplicon, Denmark) used in this test were 10 pmol, 300 ng/μl and 8 μl, respectively, and the final reaction volume was 15 μl. The PCR was carried out in an Eppendorf AG thermal cycler (Germany) as follows: initial denaturation at 94 °C for 1 min, followed by 30 cycles of denaturation at 94 °C for 45 sec, annealing at 60 °C for 60 sec, and extension at 72 °C for 60 sec, with a final extension at 72 °C for 5 min. Distilled water and Enterococcus faecalis ATCC 29212 were used as the esp-negative and esp-positive controls, respectively.
Electrophoresis
The PCR products, along with a 100 bp DNA ladder and DNA of the esp-positive control, were electrophoresed on a 2% agarose gel (Figure 1). The results were observed with a gel documentation device (Vilber Lourmat, France) after staining with safe stain (Aryatous, Iran).
Statistical analysis
Data were analyzed using the Statistical Package for the Social Sciences (SPSS, version 22) software. The binomial test was used for the analysis of the data, and P-values < 0.05 were considered statistically significant.
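For illustration, the binomial test reported here can be reproduced outside SPSS. The sketch below uses SciPy on counts taken from the Results section as a hypothetical example; the null proportion p = 0.5 is an assumption chosen purely for demonstration, not a value stated by the authors:

```python
from scipy.stats import binomtest

# Hypothetical example: 8 esp-positive isolates out of 43 E. faecalis isolates,
# tested against an assumed null proportion of 0.5 (two-sided).
result = binomtest(k=8, n=43, p=0.5, alternative="two-sided")
print(f"P-value = {result.pvalue:.4f}")  # considered significant if < 0.05
```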
Results
According to the standard microbiologic tests (13), 56 (27%) of the dental calculus isolates were identified as Enterococci. Among these isolates, 43 (76.7%) were confirmed as Enterococcus faecalis and the others were identified as Enterococcus faecium. In this study, 13.9%, 4.6%, 11.6%, 6.9%, and 13.9% of the isolates were resistant to ampicillin, vancomycin, tetracycline, ciprofloxacin and erythromycin, respectively. The molecular assay showed that 8 (18.6%) E. faecalis isolates were esp-positive. Interestingly, none of these patients used a toothbrush and one patient had a denture. There was no significant correlation between the presence of the esp gene and the sex of the patients (P > 0.05). Also, the presence of this gene had no significant correlation with the resistance rates against ampicillin, vancomycin, tetracycline, ciprofloxacin and erythromycin (Table 2). The association between the isolation of E. faecalis and some patient-related factors is shown in Table 3.
Discussion
The present study showed that E. faecalis was more prevalent than E. faecium in the samples collected from dental calculus. There was no significant relationship between the questionnaire variables and the E. faecalis prevalence rate. However, there was a significant correlation between antibiotic usage and the prevalence of E. faecalis (P-value < 0.01). Antimicrobial susceptibility testing showed that E. faecalis isolated from dental calculus had low levels of resistance to ampicillin (13.9%), vancomycin (4.6%), tetracycline (11.6%), ciprofloxacin (6.9%), and erythromycin (13.9%). Table 3. Association of Enterococcus faecalis isolation with patients' underlying problems.
The prevalence of the esp gene in our isolates was lower than in other studies (1,2,4,17-19), which may be due to geographical differences or the various clinical samples used in those studies. Moreover, the resistance rates to ampicillin, vancomycin, tetracycline, ciprofloxacin and erythromycin in our study had no significant correlation with the presence of the esp gene. However, some studies have reported a high prevalence of this gene (17,18). According to a study conducted from 2008 to 2010 in Iran (17), the prevalence of E. faecalis isolated from urinary tract infections was 73.4%, while 47.1% of the isolates contained the esp gene. This difference in the prevalence of the esp gene may be due to the different samples used in the two studies. The abovementioned study (17) also showed that 64%, 97%, and 100% of their vancomycin-, ampicillin-, and ciprofloxacin-resistant isolates contained the esp gene, while we did not find any such correlation. However, the resistance rate to ampicillin in our study was higher than that of their isolates. These data show that the presence of the esp gene may have had a significant correlation with the resistance to these antibiotics in their study. Indeed, the production of biofilm due to the presence of the esp gene can facilitate the acquisition of some antibiotic resistance genes (17). Another study from Iran (18) reported that 16.5%, 16.3%, 87.8%, 43.9% and 65.3% of their E. faecalis isolates were resistant to ampicillin, vancomycin, tetracycline, ciprofloxacin and erythromycin, respectively, while 74.6% of their isolates contained the esp gene. This was probably due to the different clinical samples used in their study. This difference in the prevalence of the esp gene was in concordance with other studies conducted in Bulgaria and Brazil, which used various clinical samples (2,18,19). The high prevalence of E. faecalis encoding this gene in different areas may depend on the high prevalence of this organism in animals and human carriers, which can act as reservoirs of these bacteria. The prevalence rates of the esp gene in E. faecalis isolated from clinical and saliva/plaque samples collected in Germany (1) were 60% and 86.5%, respectively. Moreover, the prevalence rate of this gene in endodontic samples in the mentioned study was higher than that in our study (38.1% vs. 18.6%). E. faecalis is associated with various dental diseases and can lead to oral biofilm formation (1). The pathologic role of several virulence factors identified in E. faecalis is still under discussion and the function of these factors in clinical isolates is unknown (1). A study carried out in Chile (20), which investigated different clinical samples, showed that the prevalence of the esp gene in E. faecalis isolated from urinary infection and bacteremia was 42% and 52%, respectively; however, this gene was not found in endodontic isolates. The results of these studies on E. faecalis indicate that the prevalence rate of this gene differs among various clinical samples. Current knowledge on the role of virulence factors in the pathogenesis of infections caused by this bacterium is still limited. The production of biofilm in this bacterium depends on multiple genes such as epa, atn, fsr, srtA, srtC, ebpA, ebpB, and ebpC, suggesting that more comprehensive studies should be conducted on this subject.
Conclusion
The presence of the esp gene is not the only route to the acquisition of resistance genes through biofilm formation. Antibiotic resistance is associated with several mechanisms, of which the most important is obtaining resistance genes from other bacteria present in the biofilm. Considering that none of the patients whose E. faecalis isolates were esp-positive used a toothbrush, it is reasonable to expect that oral hygiene plays a major role in preventing biofilm formation. | 2018-12-12T03:37:51.466Z | 2017-08-10T00:00:00.000 | {
"year": 2017,
"sha1": "bef5bfabd1ee6bb06b9f9156b378a10b20e9a438",
"oa_license": "CCBYNC",
"oa_url": "http://rmm.mazums.ac.ir/files/site1/user_files_a6894a/sharbafi-A-10-26-34-bd0809a.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e96750f592952d6586895d586d48060cfde4215e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
11163638 | pes2o/s2orc | v3-fos-license | An approach to the author citation potential: Measures of scientific performance which are invariant across scientific fields
The citation potential is a measure of the probability of being cited. Obviously, it is different among fields of science, social science, and humanities because of systematic differences in publication and citation behaviour across disciplines. In the past, the citation potential was studied at journal level considering the average number of references in established groups of journals (for example, the crown indicator is based on the journal subject categories in the Web of Science database). In this paper, some characterizations of the author?s scientific research through three different research dimensions are proposed: production (journal papers), impact (journal citations), and reference (bibliographical sources). Then, we propose different measures of the citation potential for authors based on a proportion of these dimensions. An empirical application, in a set of 120 randomly selected highly productive authors from the CSIC Research Centre (Spain) in four subject areas, shows that the ratio between production and impact dimensions is a normalized measure of the citation potential at the level of individual authors. Moreover, this ratio reduces the between-group variance in relation to the within-group variance in a higher proportion than the rest of the indicators analysed. Furthermore, it is consistent with the type of journal impact indicator used. A possible application of this result is in the selection and promotion process within interdisciplinary institutions, since it allows comparisons of authors based on their particular scientific research.
Introduction
This work is related to author metrics and citation-based indicators for the assessment of researchers from a general bibliometric perspective. It is well known that in some scientific fields the average number of citations per publication (within a certain time period) is much higher than in other scientific fields. This is due to differences among fields in the average number of cited references per publication, the average age of cited references, and the degree to which references from other fields are cited. In addition, bibliographical databases such as the Web of Science and Scopus cover some fields more extensively than others (Moed, 2005). There are statistical patterns, which are field-specific, that allow for the normalization of the impact indicators. Garfield (1979) proposes the term 'citation potential' for systematic differences among fields of science based on the average number of references per paper. For example, in the biomedical fields, long reference lists with more than fifty items are common, but in mathematics, short lists with less than twenty references are the standard (Dorta-González & Dorta-González, 2013a, 2013b). This variability is a consequence of the different citation cultures and can produce significant differences in citation-based indicators since the probability of being cited is affected. In this sense, the average number of references is the variable most used in the literature to justify the differences between fields of science, as well as the most employed in Traditionally, normalization of field differences has usually been based on a field classification system. In said approach, each publication belongs to one or more fields and the citation impact of a publication is calculated relative to the other publications in taking all journals in a category as one meta-journal. Another example of a field classification system is the Scopus subject areas.
Nevertheless, the precise delineation between fields of science and the next-lower level specialties has until now remained an unsolved problem in bibliometrics because these delineations are fuzzy at any moment in time and develop dynamically over time.
Therefore, classifying a dynamic system in terms of fixed categories can lead to error because the classification system is defined historically while the dynamics of science is evolutionary (Leydesdorff, 2012, p.359).
Recently, the idea of source normalization was introduced; which offers an alternative approach to normalizing field differences. In this approach, normalization is achieved by looking at the referencing behaviour of citing journals. Some indices, such as the In citation-based research evaluations, it is crucial to control the previously mentioned differences among fields. This is especially the case for performance evaluations at higher levels of aggregation, such as countries, universities, or multi-disciplinary However, in these source-normalized metrics the expected number of citations is determined by the field which is defined in a field classification system. Therefore, these metrics do not include any great degree of normalization in relation to the specific research topic of each author. The topic normalization is necessary because different scientific topics have different citation practices. Therefore, citation-based bibliometric indicators need to be normalized for such differences between topics in order to allow for between-topic comparisons of authors. In this sense, we use the aggregate impact factor of three different sets of journals as a measure of the different dimensions in the citation potential of an author, and we employ a combination of these dimensions in the construction of a source normalized indicator to make it comparable between scientific fields. In order to test this new impact indicator, an empirical application with 120 authors belonging to four different fields is presented. The main conclusion we obtain is that our rate between production and impact dimensions reduces the between-group variance in relation to the within-group variance in a higher proportion than the rest of indicators analysed. Furthermore, it is consistent with the type of journal impact indicator used.
Dimensions and proportions of the author citation potential
Even within the same field, each researcher is working on one or several research lines that have specific characteristics, in most cases very distant from those of other researchers in the same field.
Generally, the citation potential in a field is determined within a predefined group of journals. This approach requires a classification scheme for assigning publications to fields. Given the fuzziness of disciplinary boundaries and the multidisciplinary character of many research topics, such a scheme will always involve some arbitrariness and will never be completely satisfactory. Therefore, we propose measuring the citation potential in the specific topic of each author and using this measure as an indicator of the probability of being cited in that topic. The problem underlying the characterization of the author citation potential is as follows. Given a set of publications from an author in different journals and years, we will try to obtain a measure of the author's topic, defined by some dimensions of these publications, so it can be compared with that of a different author (with publications in different journals and years). This problem arises in research evaluation when comparing researchers who work on different topics.

[Figure 1 about here]

In order to facilitate the reading of the paper, the notation used in the operational characterization of the author citation potential is shown in Table 1. In this characterization we propose the use of journal impact indicators instead of the number of citations received by a particular paper. This is because several years must pass after the publication of a document before the number of citations becomes a consistent indicator for comparing similar documents of the same type, published in the same year, with those of other researchers in the same field. Consistency is a mathematical property based on the idea that the ranking of two units relative to each other should not change when both units make the same progress in terms of citations.
Something similar happens when considering an indicator based on the percentage of highly cited publications, for example, the percentage of publications belonging to the top 5% or the top 10% of a particular field. In some fields (e.g., Economics) more than 5 years are needed to obtain a consistent measure of impact (Dorta-González & Dorta-González, 2013a). In many fields of the Humanities it is necessary to wait even longer.
[Table 1 about here]

However, in the evaluation of researchers for promotion and recruitment, the most recent production years of an author have greater predictive power for their future production. Therefore, it is useful to know a measure of the author citation potential based on journal impacts in their topic.
We consider the following dimensions of the author's research area: (d1) The production dimension, P, is the first measure of the probability of being cited in the research area, and it is based on the author's publications. It is the weighted average of the impacts of the journals containing the author's papers in the target window. Therefore, this is the expected impact for the author.
As an example, the production dimension of A. Bocci (Physics & Astronomy) is illustrated in Table 2. Considering the journals in which Bocci's papers are published, the production dimension of this author is 2.817.
[Table 2 about here]

(d2) The impact dimension, I, is the second measure of the probability of being cited in the research area, and it is based on the author's citations. It is the weighted average of the impacts of the journals citing the author's papers in the target window. Therefore, this is the observed impact for the author's publications. (d3) The reference dimension, R, is the third measure, and it is based on the references in the author's papers; it is the weighted average of the impacts of the journals cited in the author's papers in the target window. In all three cases, the average is weighted by the number of papers in each journal, and the impact indicator of the journal corresponds to the year of publication. Through these different dimensions, the following four indicators that attempt to normalize the citation potential in the author's topic (dividing some dimensions by others) are proposed.
(r1) The production over impact ratio, P/I, is the proportion between the production and impact dimensions. A quantity larger than one indicates that the author has published in journals with impact indicators above those observed for other authors in the same research area. This is because the average impact of the author's publications is compared with the average impact of the researchers citing this author. In this formulation, only those publications in which researchers cite this author are considered. Therefore, a value of 1.10 indicates that the production impact of the author is 10% higher than that of the other authors in the research area. Alternatively, a value of 0.80 indicates that the production impact of the author is 20% lower than that of the other authors in the research area.
As an example, computed in the same way as in Table 2, the impact dimension of A. Bocci is 1.936, and therefore the production over impact ratio is 2.817 / 1.936 = 1.455. This quantity, larger than one, indicates that Bocci has published in journals with impact indicators higher than the average in the same research topic. In particular, 1.455 indicates that the production impact of this author is 45.5% higher than that of other authors in the same research topic.
(r2) The production over reference ratio, P/R, is the proportion between the production and reference dimensions. (r3) The impact over reference ratio, I/R, is the proportion between the impact and reference dimensions. In both cases the interpretation is similar to the P/I case. Finally, (r4) the production and impact over reference ratio, (P+I)/2R, is the arithmetic mean of P/R and I/R, i.e., (P/R + I/R)/2. A direct application of our methodology (an author citation potential obtained through journal impact indicators) is to identify those researchers who publish in higher impact journals than expected in their research topic. This would contextualize the topic of each author several years before knowing the real impact of their publications (through the received citations) in a consistent way.
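To make the computation of these dimensions and ratios concrete, the following Python sketch implements them under stated assumptions: the function names and the input format (per-journal paper counts paired with journal impact values) are our own illustrative choices, not part of the original methodology, and the impact values could equally be SJR or SNIP scores.

```python
# Illustrative sketch (not the authors' code): computing the three
# dimensions of the author citation potential and the four ratios.
# Assumed input format: a list of (paper_count, journal_impact) pairs,
# where journal_impact is the SJR or SNIP of the journal in the
# publication year.

def weighted_dimension(pairs):
    """Weighted average of journal impacts, weighted by paper counts."""
    total_papers = sum(n for n, _ in pairs)
    return sum(n * impact for n, impact in pairs) / total_papers

# Hypothetical data for one author:
publishing = [(3, 2.9), (2, 2.7)]   # journals containing the author's papers
citing = [(10, 2.1), (5, 1.6)]      # journals citing the author's papers
cited = [(8, 2.4), (4, 1.9)]        # journals cited in the author's papers

P = weighted_dimension(publishing)  # production dimension (d1)
I = weighted_dimension(citing)      # impact dimension (d2)
R = weighted_dimension(cited)       # reference dimension (d3)

ratios = {
    "P/I": P / I,                    # r1
    "P/R": P / R,                    # r2
    "I/R": I / R,                    # r3
    "(P+I)/2R": (P / R + I / R) / 2  # r4
}
print(P, I, R, ratios)
```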
In the empirical application we studied which of the previous ratios most reduces the between-group variance in relation to the within-group variance in a set of 120 authors from four different fields.
Methods and materials
The bibliometric data were obtained from the online version of the Scopus database. Four subject areas were considered: Chemistry, Computer Science, Medicine, and Physics & Astronomy. This choice was motivated by the aim of obtaining authors with systematic differences in publication and citation behaviour. We designed a random sample with a total of 120 authors (30 in each subject area). They were selected from the highly productive authors of the Consejo Superior de Investigaciones Científicas (CSIC, Spain). Only those authors with a production above the mean in their subject area were considered in the population.
We used seven indicators: three that measure different dimensions of the citation potential associated with the author, and four normalized indicators of the citation potential in the topic in which the author works.
Results and discussion
In the empirical application we studied which measure of the author citation potential produces the closest data distributions among subject areas in relation to their centrality and variability measures. We compared the seven indicators (three dimensions and four ratios) described in Table 1. Table 3 shows the different dimensions and proportions of the author citation potential in the sample. Furthermore, three general production and impact indicators (number of papers, number of citations, and h-index) are shown. Table 3 presents two different scenarios: the first one (columns 6 to 12) considers the SJR as the impact indicator for journals, and the second one (columns 13 to 19) takes the SNIP as the impact indicator for journals. Thus, at any time the value of an indicator based on journal impacts using absolute citation frequencies can be compared to that based on relative citation frequencies.
[Table 3 about here]

In relation to the dimensions and proportions of the author citation potential (columns 6 through 19), Table 3 shows important differences both between research areas and among researchers within the same field. This firstly reflects the peculiarities of the publication and citation habits of each research area as a whole and, secondly, the peculiarities of the specific research topic of each author. Furthermore, it can also be seen that, for each particular author, significant differences exist between the dimensions of the citation potential. The differences among the dimensions of the author citation potential are lower in the case of SNIP. This is expected because normalized impact indicators are used. Thus, in most cases these differences are below 1 (SNIP), while in the case of SJR these differences are in many cases higher than 3. Central-tendency and variability measures in the four subject areas are shown in Table 4. Note that for any dimension of the author citation potential, the values are very different from one research area to another. Notice the large differences between areas in medians, means, and standard deviations. This is because the subject areas considered in the sample are very different in relation to publication and citation behaviour.
Furthermore, the medians are well below the means, indicating skewed distributions with many authors having low values and only a small number of authors with high values. However, when using ratios these differences between areas are greatly reduced.
[Table 4 about here]

Box-plots comparing the subject areas are shown in Figure 2. This is a way of visually comparing the data distributions across areas. A similar behaviour is observed for both P and I in the second row of Figure 2. However, both indicators produce fairly similar distributions of data between areas when normalized data are used (see SNIP in row 3), which does not occur in the case of R.
Finally, with respect to the ratios in rows 4 to 6, the indicator that produces the closest distributions of data between subject areas is P/I (with the exception of Physics & Astronomy). The differential behaviour of Physics & Astronomy is also observed when using normalized impact indicators (SNIP), and it is explained by the fact that clearly higher values converge in the numerator of the ratio in this subject area. In conclusion, P/I is the ratio based on non-normalized journal impacts that produces the smallest differences between most areas, and it is also close to the results using normalized journal impacts.

[Figure 3 about here]

Now, we will test which normalization (ratio between dimensions) of the citation potential reduces the between-group variability in relation to the within-group variability. The central-tendency and variability measures for the different dimensions and proportions of the author citation potential in the aggregate data are shown in Table 5. Moreover, it shows the within- and the between-group variability.
[Table 5 about here]

Within- and between-group variability are both components of the total variability in the combined distributions. What we are doing when we compute within- and between-group variability is to partition the total variability into the within and between components, so that: within variability + between variability = total variability. But how do we measure variability in a distribution? That is, how do we measure how different the scores in a distribution are from one another? In this work we use the variance as the measure of variability. Recall that the variance is the average squared deviation of scores about the mean. It can be seen in Table 5 that the proportion between the production and impact dimensions produces the greatest percentage reduction of the variance (76.3%). A consistent general pattern can also be observed in the correlations reported in Table 6.
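As a concrete illustration of this decomposition, the short Python sketch below partitions the total variance of an indicator into its within-group and between-group components; the group sizes and values are hypothetical and the function name is our own.

```python
import numpy as np

def variance_decomposition(groups):
    """Partition total variance into within- and between-group components.

    groups: list of 1-D arrays, one per subject area, holding the
    indicator values (e.g., P/I ratios) of the authors in that area.
    Population variances are used so that within + between = total.
    """
    all_values = np.concatenate(groups)
    n = all_values.size
    grand_mean = all_values.mean()

    # Within-group: squared deviations from each group's own mean.
    within = sum(((g - g.mean()) ** 2).sum() for g in groups) / n
    # Between-group: squared deviation of group means from the grand
    # mean, weighted by group size.
    between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups) / n
    total = all_values.var()
    return within, between, total

rng = np.random.default_rng(0)
areas = [rng.normal(loc, 0.3, 30) for loc in (1.0, 1.1, 0.9, 1.4)]
w, b, t = variance_decomposition(areas)
print(f"within={w:.3f} between={b:.3f} total={t:.3f} between share={b/t:.1%}")
```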
Conclusions
Different scientific fields have different citation practices, and citation-based bibliometric indicators need to be normalized for such differences between fields in order to allow for between-field comparisons of citation indicators. In this paper, we provide a normalization approach based on the dimensions of the author's research.
An empirical application, with 120 authors from four different subject areas, shows that the ratio between production and impact dimensions reduces the between-group variance in relation to the within-group variance in a higher proportion than the rest of the indicators analyzed in this paper. Furthermore, this normalized indicator is consistent in the sense that it is independent of the type of journal impact indicator considered.
The subject areas considered are very different in relation to citation behaviour. For this reason, in the sample there are important differences among the dimensions of the citation potential from one author to another. However, the proportion between the production and impact dimensions is very close in all the subject areas considered.
We have developed a measure of scientific performance whose distributional characteristics are invariant across scientific fields. Such a measure allows direct comparisons of scientists in different fields and permits a ranking of researchers that is not affected by differential publication and citation practices across fields.

Figure 2: Box-plots comparing the subject areas for the dimensions and proportions of the author citation potential. P/I is the ratio based on non-normalized journal impacts that produces the least differences between most areas, which is also close to the results using normalized journal impacts (SNIP). Source: Scopus, 2009-2013; SJR = Scimago journal ranking; SNIP = Source normalized impact per paper; P = Production; I = Impact; R = Reference.

Source: Scopus, 2009-2013; SJR = Scimago journal ranking; P = Production; I = Impact; R = Reference. a significant at the 90% level; b significant at the 95% level; c significant at the 99% level.

Figure 4: Scatter plots between different dimensions of the author citation potential for the 120 authors. These reveal distinct patterns in each subject area in the case of SJR and a more common bivariate distribution across subject areas in the case of SNIP. | 2014-10-08T04:28:04.000Z | 2014-10-08T00:00:00.000 | {
"year": 2014,
"sha1": "005d6154e36db66cf8f95305901de44b0a4882f0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1410.2065",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0dee76d2c876ee24452639fe04fd85525535c399",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Sociology"
]
} |
122708906 | pes2o/s2orc | v3-fos-license | A Criterion for the Fuzzy Set Estimation of the Regression Function
We propose a criterion to estimate the regression function by means of a nonparametric and fuzzy set estimator of the Nadaraya-Watson type, for independent pairs of data, obtaining a reduction of the integrated mean square error of the fuzzy set estimator with respect to the integrated mean square error of the classical kernel estimators. This reduction shows that the fuzzy set estimator performs better than the kernel estimators. Also, the convergence rate of the optimal scaling factor is computed, which coincides with the convergence rate in classical kernel estimation. Finally, these theoretical findings are illustrated using a numerical example.
Introduction
The methods of kernel estimation are among the nonparametric methods commonly used to estimate the regression function r with independent pairs of data. Nevertheless, through the theory of point processes (see, e.g., Reiss [1]) we can obtain a new nonparametric estimation method, which is based on defining a nonparametric estimator of the Nadaraya-Watson type regression function, for independent pairs of data, by means of a fuzzy set estimator of the density function. The method of fuzzy set estimation introduced by Falk and Liese [2] is based on defining a fuzzy set estimator of the density function by means of thinned point processes (see, e.g., Reiss [1], Section 2.4), a construction framed inside the theory of point processes, which is given by

$$\theta_n(x_0) = \frac{1}{n a_n} \sum_{i=1}^{n} U_i,$$

where $a_n > 0$ is a scaling factor or bandwidth such that $a_n \to 0$ as $n \to \infty$, and the random variables $U_i$, $1 \le i \le n$, are independent with values in $\{0, 1\}$, each deciding whether $X_i$ belongs to the neighborhood of $x_0$ or not. Here $x_0$ is the point of estimation (for more details, see Falk and Liese [2]). On the other hand, we observe that the random variables that define the estimator $\theta_n$ do not possess, for example, precise functional characteristics with regard to the point of estimation. This absence of functional characteristics complicates the evaluation of the estimator $\theta_n$ using a sample, as well as the evaluation of the fuzzy set estimator of the regression function if it is defined in terms of $\theta_n$. The method of fuzzy set estimation of the regression function introduced by Fajardo et al. [3] is based on defining a fuzzy set estimator of the Nadaraya-Watson type, for independent pairs of data, in terms of the fuzzy set estimator of the density function introduced in Fajardo et al. [4]. Moreover, the regression function is estimated by means of an average fuzzy set estimator considering pairs of fixed data, which is a particular case if we consider independent pairs of nonfixed data. Note that the statements made in Section 4 in Fajardo et al. [3] are satisfied if independent pairs of nonfixed data are considered. This last observation is omitted in Fajardo et al. [3]. It is important to emphasize that the fuzzy set estimator introduced in Fajardo et al. [4], a particular case of the estimator introduced by Falk and Liese [2] and of easy practical implementation, will allow us to overcome the difficulties presented by the estimator $\theta_n$, and it satisfies the almost sure, in law, and uniform convergence properties over compact subsets of R.
In this paper we estimate the regression function by means of the nonparametric and fuzzy set estimator of the Nadaraya-Watson type, for independent pairs of data, introduced by Fajardo et al. [3], obtaining a significant reduction of the integrated mean square error of the fuzzy set estimator with respect to the integrated mean square error of the classical kernel estimators. This reduction is obtained through the conditions imposed on the thinning function, the function that defines the estimator proposed by Fajardo et al. [4], and it implies that the fuzzy set estimator performs better than the kernel estimators. This reduction is not obtained in Fajardo et al. [3]. Also, the convergence rate of the optimal scaling factor is computed, and it coincides with the convergence rate in classical kernel estimation of the regression function. Moreover, the function that minimizes the integrated mean square error of the fuzzy set estimator is obtained. Finally, these theoretical findings are illustrated using a numerical example, estimating a regression function with the fuzzy set estimator and the classical kernel estimators.
On the other hand, it is important to emphasize that, along with the reduction of the integrated mean square error, the thinning function, introduced through the thinned point processes, can be used to select points of the sample with different probabilities, in contrast to the kernel estimator, which assigns equal weight to all points of the sample.
This paper is organized as follows. In Section 2, we define the fuzzy set estimator of the regression function and present its convergence properties. In Section 3, we obtain the mean square error of the fuzzy set estimator of the regression function (Theorem 3.1), as well as the optimal scale factor and the integrated mean square error. Moreover, we establish the conditions to obtain a reduction of the constants that control the bias and the asymptotic variance with respect to the classical kernel estimators; the function that minimizes the integrated mean square error of the fuzzy set estimator is also obtained. In Section 4 a simulation study is conducted to compare the performance of the fuzzy set estimator with that of the classical Nadaraya-Watson estimators. Section 5 contains the proof of the theorem stated in Section 3.
Fuzzy Set Estimator of the Regression Function and Its Convergence Properties
In this section we define, by means of the fuzzy set estimator of the density function introduced in Fajardo et al. [4], a nonparametric fuzzy set estimator of the regression function of Nadaraya-Watson type for independent pairs of data. Moreover, we present its convergence properties.
Next, we present the fuzzy set estimator of the density function introduced by Fajardo et al. [4], which is a particular case of the estimator proposed in Falk and Liese [2] and satisfies the almost sure, in law, and uniform convergence properties over compact subsets of R.

Definition 2.1. Let $X_1, \ldots, X_n$ be an independent random sample of a real random variable $X$ with density function $f$. Let $V_1, \ldots, V_n$ be independent random variables uniformly distributed on $(0, 1)$ and independent of $X_1, \ldots, X_n$. Let $\varphi$ be such that $0 < \int \varphi(x)\,dx < \infty$, and let $a_n = b_n \int \varphi(x)\,dx$, $b_n > 0$. Then the fuzzy set estimator of the density function $f$ at the point $x_0 \in \mathbb{R}$ is defined as

$$\vartheta_n(x_0) = \frac{\tau_n(x_0)}{n a_n}, \qquad \tau_n(x_0) = \sum_{i=1}^{n} \mathbf{1}_{[0,\,\varphi_n(X_i)]}(V_i), \qquad \varphi_n(x) = \varphi\!\left(\frac{x - x_0}{b_n}\right).$$

Remark 2.2. The events $\{X_i = x\}$, $x \in \mathbb{R}$, can be described in a neighborhood of $x_0$ through the thinned point process

$$N_n^{\varphi_n} = \sum_{i=1}^{n} U_{x_0, b_n}(X_i, V_i)\, \varepsilon_{X_i},$$

where $U_{x_0, b_n}(X_i, V_i) = \mathbf{1}_{[0,\,\varphi_n(X_i)]}(V_i)$ decides whether $X_i$ belongs to the neighborhood of $x_0$ or not. Precisely, $\varphi_n(x)$ is the probability that the observation $X_i = x$ belongs to the neighborhood of $x_0$. Note that this neighborhood is not explicitly defined, but it is actually a fuzzy set in the sense of Zadeh [5], given its membership function $\varphi_n$. The thinned process $N_n^{\varphi_n}$ is therefore a fuzzy set representation of the data (see Falk and Liese [2], Section 2). Moreover, we can observe that $\vartheta_n(x_0) = N_n^{\varphi_n}(\mathbb{R}) / (n a_n)$, and the random variable $\tau_n(x_0)$ is binomial $B(n, \alpha_n(x_0))$ distributed with

$$\alpha_n(x_0) = E\left[\varphi_n(X)\right] = \int \varphi_n(x) f(x)\, dx.$$

In what follows we assume that $\alpha_n(x_0) \in (0, 1)$. Now, we present the fuzzy set estimator of the regression function introduced in Fajardo et al. [3], which is defined in terms of $\vartheta_n(x_0)$.
Definition 2.3. Let $(X_1, Y_1, V_1), \ldots, (X_n, Y_n, V_n)$ be independent copies of a random vector $(X, Y, V)$, where $V_1, \ldots, V_n$ are independent random variables uniformly distributed on $(0, 1)$ and independent of $(X_1, Y_1), \ldots, (X_n, Y_n)$. The fuzzy set estimator of the regression function $r(x) = E[Y \mid X = x]$ at the point $x_0 \in \mathbb{R}$ is defined as

$$r_n(x_0) = \frac{g_n(x_0)}{\vartheta_n(x_0)}, \qquad g_n(x_0) = \frac{1}{n a_n} \sum_{i=1}^{n} Y_i\, \mathbf{1}_{[0,\,\varphi_n(X_i)]}(V_i).$$

The fuzzy set representation of the data $(X_i, Y_i) = (x, y)$ is defined over the window $I(x_0) \times \mathbb{R}$ with thinning function $\psi_n(x, y) = \varphi\left((x - x_0)/b_n\right) \mathbf{1}_{\mathbb{R}}(y)$, where $I(x_0)$ denotes the neighborhood of $x_0$. In the particular case $|Y| \le M$, $M > 0$, the fuzzy set representation of the data $(X_i, Y_i)$ is given by $\psi_n(x, y) = \varphi\left((x - x_0)/b_n\right) \mathbf{1}_{[-M, M]}(y)$.
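The following short Python sketch illustrates how such a thinning-based estimator can be evaluated in practice; it is our own illustrative implementation of the definitions above (with a triangular membership function chosen arbitrarily), not code from the paper.

```python
import numpy as np

def fuzzy_set_regression(x0, X, Y, b_n, phi):
    """Fuzzy set estimator r_n(x0) = g_n(x0) / theta_n(x0).

    Each observation X_i is kept with probability phi((X_i - x0)/b_n),
    realized by comparing an independent uniform V_i to that value
    (the 'thinning' step).
    """
    rng = np.random.default_rng(12345)
    V = rng.uniform(0.0, 1.0, size=X.shape)
    U = (V <= phi((X - x0) / b_n)).astype(float)  # thinning indicators
    if U.sum() == 0:
        return np.nan  # no observations kept near x0
    # The common factor 1/(n * a_n) cancels in the ratio g_n / theta_n.
    return (Y * U).sum() / U.sum()

# Hypothetical example with a triangular membership function on [-1, 1].
phi = lambda u: np.clip(1.0 - np.abs(u), 0.0, 1.0)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 500)
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.1, 500)
print(fuzzy_set_regression(0.5, X, Y, b_n=0.1, phi=phi))
```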
Consider the following conditions. (C1) Functions $f$ and $r$ are at least twice continuously differentiable in a neighborhood of $x_0$. (C2)-(C3) […] (C4) Function $\varphi$ is symmetric about zero, has compact support on $[-B, B]$, $B > 0$, and is continuous at $0$ with $\varphi(0) > 0$.
The symbol "$\xrightarrow{\mathcal{L}}$" denotes convergence in law.
Theorem 2.7. Under conditions (C4)-(C5) and (C8)-(C11), one has the asymptotic normality result (2.9).

Remark 2.8. The estimator $r_n$ has a limit distribution whose asymptotic variance depends only on the point of estimation, which does not occur with kernel regression estimators. Moreover, since $a_n = o\!\left(n^{-1/5}\right)$, we see that the same restrictions are imposed as for the smoothing parameter of kernel regression estimators.
Statistical Methodology
In this section we will obtain the mean square error of $r_n$, as well as the optimal scale factor and the integrated mean square error. Moreover, we establish the conditions to obtain a reduction of the constants that control the bias and the asymptotic variance with respect to the classical kernel estimators. The function that minimizes the integrated mean square error of $r_n$ is also obtained.
The following theorem provides the asymptotic representation of the mean square error (MSE) of $r_n$. Its proof is deferred to Section 5.

Theorem 3.1. Under conditions (C1)-(C6), one has the asymptotic MSE representation (3.1)-(3.3), with $a_n = b_n \int \varphi(x)\,dx$.
Next, we calculate the formula for the optimal asymptotic scale factor $b_n^*$ with which to perform the estimation. The integrated mean square error (IMSE) of $r_n$ is given by (3.4). From this equality, we obtain the formula (3.5) for the optimal asymptotic scale factor.
We obtain a scaling factor of order $n^{-1/5}$, which implies an optimal convergence rate for $\mathrm{IMSE}^*(r_n)$ of order $n^{-4/5}$. We observe that the optimal scaling factor order for the fuzzy set estimation method coincides with that of the classical kernel estimate. Moreover, substituting $b_n^*$ yields the minimal IMSE expression (3.6). Next, we will establish the conditions to obtain a reduction of the constants that control the bias and the asymptotic variance with respect to the classical kernel estimators. For this, we will consider the usual Nadaraya-Watson kernel estimator

$$r_n^{NW}(x) = \frac{\sum_{i=1}^{n} Y_i\, K\!\left(\frac{x - X_i}{h_n}\right)}{\sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h_n}\right)},$$

which has the mean squared error expansion (3.10)-(3.11) (see, e.g., Ferraty et al. [6], Theorem 2.4.1). Moreover, the IMSE of $r_n^{NW}$ is given by (3.12). From this equality, we obtain the formula (3.13) for the optimal asymptotic scale factor and the corresponding minimal value (3.15) of the IMSE.
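As a point of comparison, the Nadaraya-Watson estimator itself is straightforward to implement; the following Python sketch (our own illustration, using an Epanechnikov kernel) evaluates it at a single point.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 (1 - u^2) on [-1, 1]."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def nadaraya_watson(x, X, Y, h, kernel=epanechnikov):
    """Classical Nadaraya-Watson regression estimate at point x."""
    weights = kernel((x - X) / h)
    denom = weights.sum()
    return np.nan if denom == 0 else (weights * Y).sum() / denom

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 500)
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.1, 500)
print(nadaraya_watson(0.5, X, Y, h=0.1))
```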
The reduction of the constants that control the bias and the asymptotic variance, with respect to the classical kernel estimators, is obtained if, for every kernel $K$, the inequalities (3.16) hold.
Remark 3.2. The conditions on $\varphi$ allow us to obtain a value of $B$ such that (3.17) holds.
Moreover, to guarantee the required inequalities, we have (3.21).
Observe that for each $C \in \left(0, \int u^2 K(u)\, du\right)$ there exists a value of $B$ satisfying (3.24).
In our case we take such a value of $B$.
On the other hand, the criterion that we will implement to minimize (3.6) and obtain a reduction of the constants that control the bias and the asymptotic variance with respect to the classical kernel estimation is the following: maximize $\int \varphi(u)\, du$ (3.25), subject to the conditions (3.26)-(3.27), where $K_E$ is the Epanechnikov kernel, $K_E(x) = \frac{3}{4}\left(1 - x^2\right)\mathbf{1}_{[-1,1]}(x)$.
The Euler-Lagrange equation with these constraints is (3.28), where $a$, $b$, and $c$ are the three multipliers corresponding to the three constraints. This yields the optimal function (3.29), which is supported on the interval $[-25/16, 25/16]$.
The new conditions on $\varphi$ allow us to affirm that, for every kernel $K$, the inequality (3.30) holds.
Thus, the fuzzy set estimator has the best performance.
Simulations
A simulation study was conducted to compare the performance of the fuzzy set estimator with that of the classical Nadaraya-Watson estimators. For the simulation, we used the regression function given by Härdle [7], where the $X_i$ were drawn from a uniform distribution on the interval $(0, 1)$. Each $\varepsilon_i$ has a normal distribution with mean 0 and variance 0.1. In this way, we generated samples of sizes 100, 250, and 500. The bandwidths were computed using (3.5) and (3.13). The fuzzy set estimator was computed using (3.29), and the kernel estimates using the Epanechnikov and Gaussian kernel functions. The IMSE* values of the fuzzy set estimator and the kernel estimators are given in Table 1.
As seen from Table 1, for all sample sizes, the fuzzy set estimator with varying bandwidths has smaller IMSE* values than the kernel estimators with a fixed, estimator-specific bandwidth. In each case, it is seen that the fuzzy set estimator has the best performance. Moreover, we see that the kernel estimate computed using the Epanechnikov kernel function shows a better performance than the estimate computed using the Gaussian kernel function.
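A simulation of this kind can be sketched in Python as follows. This is an illustrative Monte Carlo comparison under our own assumptions: a sinusoidal test regression function standing in for Härdle's, fixed bandwidths rather than the optimal formulas (3.5) and (3.13), and an empirical integrated squared error as a proxy for IMSE*. It reuses the `fuzzy_set_regression`, `nadaraya_watson`, and `phi` objects from the earlier sketches.

```python
import numpy as np

def integrated_squared_error(estimate, truth, grid):
    """Approximate the integrated squared error on a grid."""
    return np.trapz((estimate - truth) ** 2, grid)

def run_trial(n, b_n, h, rng):
    X = rng.uniform(0, 1, n)
    eps = rng.normal(0, np.sqrt(0.1), n)         # variance 0.1
    r = lambda x: np.sin(2 * np.pi * x)          # stand-in test function
    Y = r(X) + eps
    grid = np.linspace(0.05, 0.95, 50)
    fuzzy = np.array([fuzzy_set_regression(x, X, Y, b_n, phi) for x in grid])
    nw = np.array([nadaraya_watson(x, X, Y, h) for x in grid])
    truth = r(grid)
    return (integrated_squared_error(fuzzy, truth, grid),
            integrated_squared_error(nw, truth, grid))

rng = np.random.default_rng(7)
results = np.array([run_trial(500, 0.1, 0.1, rng) for _ in range(100)])
print("mean ISE (fuzzy, NW):", results.mean(axis=0))
```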
The graphs of the real regression function and the estimated regression functions, computed over a sample of 500 using 100 points and $v = 0.2$, are illustrated in Figures 1 and 2.
Proof of Theorem 3.1
Proof. Throughout this proof $C$ will represent a positive real constant, which can vary from one line to another, and to simplify the notation we will omit arguments where no confusion is possible. Let us consider the decomposition of the mean square error into variance and squared bias terms. Next, we will present two equivalent expressions for the terms on the right-hand side of this decomposition. For this, we will first obtain an equivalent expression for the expectation. We consider the decomposition of Ferraty et al. [6] and, taking the expectation, we obtain (5.4). The hypotheses of Theorem 3.1 allow us to obtain particular expressions for $E[g_n(x)]$ and $E[\vartheta_n(x)]$, which are calculated in the proof of Theorem 1 in Fajardo et al. [3]; that is, (5.5). Combining the fact that the $(X_i, Y_i, V_i)$, $1 \le i \le n$, are identically distributed with condition (C3), we have (5.6).
On the other hand, by condition (C5) there exists $C > 0$ such that $|r_n(x)| \le C$. Thus, we can write (5.7). Note that, as a consequence, we can write (5.9). Note that by condition (C1) the density $f$ is bounded in a neighborhood of $x$. Moreover, condition (C3) allows us to suppose, without loss of generality, that $b_n < 1$, and by (2.5) we can bound $1 - E[U]$. Therefore, we can now write (5.11). The above equalities imply the required bound. Once more, the hypotheses of Theorem 3.1 allow us to obtain the following general expressions for $E[\vartheta_n(x)]$ and $E[g_n(x)]$, which are calculated in the proofs of Theorem 1 in Fajardo et al. [3] and [4], respectively; that is, (5.14). By conditions (C1) and (C4), we have (5.16).
Next, we will obtain an equivalent expression for $H_n(x)$. Taking the conjugate, we have (5.17), where the corresponding terms are given by (5.18). By condition (C3), we have (5.19), so that (5.20) follows. Now, we can write (5.21), and by condition (C3) we have (5.24).
Next, we will obtain an expression for the variance in (5.1).
Figure 1: Estimation of $r$ with $r_n$ and $r_n^{NW}$ ($K_E$).
Figure 2: Estimation of $r$ with $r_n$ and $r_n^{NW}$ ($K_G$).
Note that the fact that $\varphi(x)\left(\int \varphi(u)\,du\right)^{-1}$ is a kernel when $\varphi$ is a density does not guarantee that $r_n(x_0)$ is equivalent to the Nadaraya-Watson kernel estimator. With this observation, the statement made in Remark 2 by Fajardo et al. [3] is corrected. Moreover, the fuzzy set representation of the data $(X_i, Y_i)$ is considered as $n \to \infty$. (C10) Functions $f$ and $r$ are at least twice continuously differentiable on the compact set $[-B, B]$.
Table 1: IMSE* values of the estimations for the fuzzy set estimator and the kernel estimators. * Minimum IMSE* in each row.
For this, we will use the following expression (see, e.g., Stuart and Ord [8]). Since the $(X_i, Y_i, V_i)$ are i.i.d. and the $(X_i, V_i)$ are i.i.d., $1 \le i \le n$, we have the stated identity. Moreover, the hypotheses of Theorem 3.1 allow us to obtain the following expression. | 2017-07-30T21:57:37.404Z | 2012-09-02T00:00:00.000 | {
"year": 2012,
"sha1": "ef4d7d2e856baa1a98cdbde75c0bdcbd34d51355",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jps/2012/593036.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ef4d7d2e856baa1a98cdbde75c0bdcbd34d51355",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
237325797 | pes2o/s2orc | v3-fos-license | NucHMM: a method for quantitative modeling of nucleosome organization identifying functional nucleosome states distinctly associated with splicing potentiality
We develop a novel computational method, NucHMM, to identify functional nucleosome states associated with cell type-specific combinatorial histone marks and nucleosome organization features such as phasing, spacing and positioning. We test it on publicly available MNase-seq and ChIP-seq data in MCF7, H1, and IMR90 cells and identify 11 distinct functional nucleosome states. We demonstrate these nucleosome states are distinctly associated with the splicing potentiality of skipping exons. This advances our understanding of the chromatin function at the nucleosome level and offers insights into the interplay between nucleosome organization and splicing processes. Supplementary Information The online version contains supplementary material available at 10.1186/s13059-021-02465-1.
A key open challenge is to determine the combinatorial effects of the different influencing factors on nucleosome organization. For example, can nucleosome organization be quantitatively classified into distinct nucleosome states? How many nucleosome states are there in an epigenome? How many characteristic features are there in a particular nucleosome state? What are the relationships among these features? Are nucleosome states cell type-specific and/or genomic region-specific?
Many studies have revealed that nucleosome organization plays a key role in the regulation of gene expression [4, 5, 13-15]. Genome-wide nucleosome mapping has also provided structural and mechanistic links among nucleosomes, wrapped DNA, and nucleosome-binding factors [16-18] and elucidated novel functionalities of organized nucleosomal arrays in an unbiased way [19-21]. Recent studies have found that chromatin structure, in terms of nucleosome organization and specific histone modifications, acts as a key regulator of alternative splicing. These studies provided evidence that there exists crosstalk between chromatin and splicing [22-24]. Among these studies, genome-wide mapping of nucleosomes has clearly illustrated the enrichment of nucleosomes at intron-exon junctions [25-27]. Other works, including ours, have revealed a strong correlation between several histone modifications across the alternatively spliced regions and splicing outcome [28, 29]. However, these findings are mostly correlative and observational. Therefore, it is imperative to develop a computational model to examine their relationship quantitatively.
Although many computational methods have been developed to determine epigenetic states, several limitations remain: (1) some supervised learning methods, such as ChromaSig [60], cannot find de novo information, and (2) some unsupervised learning methods, such as HMMSeg [31], ChromHMM [35], Segway [39], and T-cep [59], cannot optimally capture the spatial patterns of the epigenetic marks on the nucleosomes, and they were not designed to model nucleosome organization. Thus, none of the above methods can define functional nucleosome states, i.e., states encoding combinatorial histone marks and nucleosome organization features that perform specific functions and respond to different environments and intercellular signaling. Our knowledge at the quantitative level is very limited regarding the phasing of a nucleosome array, the spacing between two dyads of the nucleosomes, the degree of nucleosome positioning, as well as the extent to which the combinatorial epigenetic pattern influences nucleosome organization. There is a lack of quantitative measures of the association of functional nucleosome states with the splicing potentiality of skipping exons (SEs).
In this study, we develop a novel computational method, NucHMM, which integrates a hidden Markov model (HMM) with the characteristics of nucleosome organization (phasing, spacing, positioning), to identify the nucleosome states associated with cell type-specific combinatorial histone marks. We test it on publicly available MNase-seq and ChIP-seq of H3K4me1, H3K4me3, H3K27ac, H3K36me3, H3K79me2, H3K9me3, and H3K27me3 data in MCF7, H1, and IMR90 cells [61] and identify cell type-specific functional nucleosome states. We further quantitatively measure the association of functional nucleosome states with the splicing potentiality of SEs. Our work advances our understanding of chromatin function at the nucleosome level and further offers mechanistic insight into the interplay between nucleosome organization and splicing process.
An overview of NucHMM
To quantitatively model nucleosome organization, we have developed a novel algorithm, NucHMM, to identify functional nucleosome states. NucHMM is composed of three consecutive modules: (1) initialization, (2) training, and (3) functioning (Fig. 1 and the "Methods" section). Briefly, the initialization module pre-processes the raw sequencing data into readable input for the training module, including converting fastq into bam, calling the peaks for ChIP-seq data by MACS2 [62] or EPIC2 [63], identifying the positioned nucleosomes from MNase-seq by iNPS [64], and binning the genome based on positioned nucleosomes, where each nucleosome-bin is assigned an observation symbol from an alphabet of 2^n observation symbols representing each possible combination of the number (n) of histone marks. The training module is composed of two rounds of HMM training. The first round trains multiple HMMs for 300 iterations and selects the best HMM based on the smallest BIC score. The second round retrains the best HMM for another 200 iterations (Additional file 1: Fig. S1) after revising the input by aborting the states with very few bins (lower than 0.5% of the total nucleosomes, or a user-defined cutoff) and evenly redistributing the transition probabilities of the aborted states to the remaining states. The resulting HMM then uses the Viterbi decoding algorithm to obtain the HMM state at each nucleosome.

Fig. 1 An overview of the NucHMM workflow. A The initialization step combines several existing tools to construct nucleosome-level HMM training sequences. B The training step includes selection of the best model and two rounds of HMM training for the selected model. The BW algorithm is applied to acquire the transition probability matrix and the mark-state matrix derived from the emission probability matrix. C The functioning step performs functional screening on the nucleosomes based upon the genomic location, nucleosome array number, nucleosome spacing, phasing, and positioning, and finally identifies the functional nucleosome states
Selecting the best HMM and determining the genomic location
We tested NucHMM on publicly available MNase-seq data and ChIP-seq data for H3K4me1, H3K4me3, H3K27ac, H3K9me3, H3K27me3, H3K36me3, and H3K79me2 in the MCF7, H1, and IMR90 cell types (Additional file 1: Tables S1-2). We used iNPS to identify 11.6, 11.9, and 12.7 million genome-wide positioned nucleosomes in the MCF7, H1, and IMR90 cell types, respectively. Since the functional nucleosomes are likely located close to the 5′ transcription start site (5TSS), we chose a gene-centric genomic region for training the HMM, ranging from −100 kb upstream of the 5TSS (Upstream-TSS), over the gene body (Gene-body), to +10 kb downstream of the transcription terminal site (TTS) (Downstream-TTS) (Additional file 1: Suppl. Notes). Thus, only around 7.2, 7.4, and 7.3 million positioned nucleosomes for the MCF7, H1, and IMR90 cell types were used for the first round of training. We trained a total of 50 HMMs, with the number of initial states ranging from 15 to 25 and five replicates each, and selected the best model, with 20 initial states, based on its smallest BIC score of 4.91E+07 (Fig. 2A and Additional file 1: Tables S3-5).
We found seven states in the best model that are redundant (Fig. 2B) and thus removed them before the second round of training (Additional file 1: Suppl. Notes and Fig. S3).
We finally achieved a model with 13 HMM states, with a transition matrix showing the transition probabilities among states (Fig. 2C) and a mark-state matrix showing the emission probabilities for each of the seven marks in each of the 13 HMM states (Fig. 2D and Additional file 1: Fig. S4).
Determining nucleosome phasing and spacing
Nucleosome phasing and spacing are two main features characterizing nucleosome organization (Fig. 3A). We mathematically defined the nucleosome phasing score and spacing value based on the distribution signals of the nucleosome arrays (see the "Methods" section). We first plotted the nucleosome array frequency and clearly observed distinct coverage patterns associated with each of the HMM states (Fig. 3B). We then calculated the phasing score for each of the HMM states by Welch's method and found that states 5, 9, and 10 have the highest phasing scores (Fig. 3C and Additional file 1: Fig. S12), suggesting that the H3K4me1 and H3K27me3 marks may be capable of imposing a better organized nucleosome array. We then derived the average nucleosome spacing for each of the HMM states after averaging the four nucleosome spacing values within the 1 kb nucleosome array (Fig. 3D). Interestingly, we found that states 2, 3, and 10, with the two repressive marks H3K9me3 and H3K27me3 and the elongation mark H3K36me3, tend to have larger nucleosome spacing values, while states 5, 6, and 9, associated with the active marks H3K4me1 and H3K27ac, have smaller spacing values. We further verified the reliability of our methods for calculating the phasing score and spacing value by using a simulated nucleosome array coverage signal (Additional file 1: Suppl. Notes and Figs. S13-S14).

Determining nucleosome positioning

We found that the distributions of the nucleosome positioning scores showed slight differences among the HMM states (Fig. 4C) and defined the mean of each distribution as the nucleosome positioning score. An IGV visualization of a genomic region showing the nucleosome read distribution and the positioning score calculated by Eq. 8 is shown in Fig. 4D.
After examining the HMM states across the four genomic regions and the nucleosome organization features, we defined 11 functional nucleosome states (NucSs) (Table 1). The H3K79me2 mark has been reported to be functionally associated with elongation and splicing processes [28, 29]; we were thus particularly interested in understanding the functional relationship of SEs with NucS1 (elongation accelerator), NucS7 (elongation processor), NucS10 (elongation speeder), and NucS11 (elongation initiator), four nucleosome states enriched with the H3K79me2 mark in the gene body. Interestingly, we observed that NucS10, with both H3K79me2 and H3K36me3 marks, showed the highest enrichment in exons for all three cell types (Fig. 5A). We then defined a NucS-SE affinity, the ratio of SEs associated with a NucS versus randomized SEs associated with that NucS, to semi-quantitatively determine the association between nucleosome states and SE events. We found that NucS10 again showed a higher SE affinity in all three cell types (Fig. 5B). To further determine the splicing potentiality of SEs, we also developed an empirical equation to quantify the splicing potentiality for each of the four nucleosome states, where we assessed the splicing potentiality from the three following aspects: (1) the Fréchet distance between the nucleosome distributions of reliable SEs (rSE) and unreliable SEs (urSE) (Additional file 1: Fig. S17); (2) the difference in nucleosome positioning between nucleosomes in rSE and urSE (Additional file 1: Fig. S18); and (3) the normalized counts coefficient of each H3K79me2-related NucS (see the "Methods" section and Eq. 9). Remarkably, we found that the potentiality score of NucS10 is the highest among all four H3K79me2-related NucSs (Fig. 5C). Together, our results suggest that nucleosomes modified with H3K36me3 and H3K79me2 histone tails might play an important role in influencing skipping exon processing, due to their lowest phasing and higher degree of positioning.
Discussion
Despite several existing computational methods for determining epigenetic states, none of them is able to quantitatively examine the relationship of nucleosome organization, histone marks, and genomic regions at a finer nucleosome resolution level. To the best of our knowledge, our NucHMM is the first computational algorithm and tool to identify functional nucleosome states associated with cell type-specific combinatorial histone marks and nucleosome organization. We rigorously trained and tested it on all publicly available MNase-seq and ChIP-seq data of various histone marks in MCF7, H1 and IMR90 cells. We were able to identify 11 cell type-specific functional nucleosome states, each encoded with specific biological meanings (Table 1). Importantly, NucHMM is applicable to train MNase-seq and ChIP-seq of various histone marks in many different cell types.
To test the reliability of the NucHMM results, we first compared the "Training" module of NucHMM with ChromHMM and Segway to evaluate its performance. We found that both the NucHMM "Training" module and ChromHMM/Segway produced similar results in terms of HMM states with distinct combinatorial histone marks (Additional file 1: Figs. S5-6). We then used a simulated nucleosome array coverage signal to verify the fidelity of our methods for calculating the phasing score and spacing value (Additional file 1: Fig. S13). Remarkably, the spacing value calculated directly from the simulation sine function is consistent with the one calculated from NucHMM. The phasing score from the simulated signal is also in line with our knowledge.

There are several notable strengths of NucHMM. Firstly, we built directional nucleosome-based observations in the "Initialization" module and used them for the univariate HMM "Training" module. The nucleosome-level observations allow us to annotate the combinatorial histone modifications on the nucleosomes (Additional file 1: Suppl. Notes and Figs. S6A, S7B-D, S8A and S8C) and also to capture the 5′ TSS more accurately (Additional file 1: Fig. S8B). While a univariate HMM enumerates each possible combination of histone marks as a possible output of the HMM, it determines more straightforwardly than a multivariate HMM whether a particular histone mark occurs in a state, which also enhances NucHMM's ability to precisely annotate HMM states on a nucleosome. Furthermore, the directionality information provides a more realistic model of the underlying epigenetic patterns and their transitions. Secondly, we employed the "Functioning" module to convert HMM states to functional nucleosome states (NucSs), which are associated not only with combinatorial histone modifications, but also with nucleosome organization features, including nucleosome phasing, spacing and positioning (Additional file 1: Fig. S21). This extra layer of nucleosome organization information expands the feature space of genomic states from one-dimensional (traditional chromatin states) into two-dimensional (functional nucleosome states), which, for the first time, offers an opportunity to study genome-wide the interplay of epigenetic marks and nucleosome organization.

Fig. 5 B Different SE affinities of each H3K79me2-related NucS in each cell type. Briefly, we counted the raw number of SE events for each H3K79me2-related NucS in each cell type, then assumed SE events were randomly associated with NucSs to get the predicted SE events, and finally used the ratio of raw versus predicted numbers as the SE event affinity for each H3K79me2-related NucS. NucS10 showed a higher SE affinity in all three cell types. C Semi-quantifying the SE potentiality of each H3K79me2-related NucS. NucS10 consistently had the highest SE potentiality score. We assessed SE potentiality from three aspects: the Fréchet distance between the nucleosome distributions of SE and no-SE; the nucleosome positioning difference between nucleosomes in the SE and no-SE regions; and the normalized SE events of each NucS
However, there are a few limitations of NucHMM: (1) the initial number of HMM states needs to be estimated at the beginning of NucHMM training; (2) increasing the number of states and the number of nucleosomes requires more computational and memory resources; (3) the initial assignment of a histone mark to a nucleosome bin may not be very accurate, since the overlap criterion between a nucleosome bin and a histone mark peak is somewhat subjective; and (4) the cutoff threshold of the emission probability in the mark-state matrix, used to determine whether a histone mark should be included in a state, is arbitrary. To mitigate these limitations, future improvements may focus on implementing a parallel computing framework, optimizing the assignment of histone marks, and using a statistical method to choose the initial number of HMM states and to define the cutoff threshold of the emission probability.
Importantly, we were able to associate the gene body functional nucleosome states with publicly available RNA-seq data to quantitatively measure the splicing potentiality. Our quantitative comparison of the influence of the four gene body nucleosome states on SE events revealed that NucS10 has the highest SE potentiality (Fig. 5C). This might be due to its higher distribution in the middle of the gene body (Fig. 2F), its lowest nucleosome phasing (Fig. 3C), its higher degree of positioning (Fig. 4C), and its strongest enrichment at internal exons (Fig. 5A). Most previous studies showed that either H3K79me2 or H3K36me3 has a role in regulating alternative splicing [28, 29, 66]. However, our analyses clearly showed that nucleosomes with both H3K36me3 and H3K79me2 marks might have the most effective influence in co-regulating skipping exon processing. Our finding may offer new opportunities to interrogate the mechanisms of the functional crosstalk between H3K36me3- and H3K79me2-marked nucleosomes and skipping exon processing.
Conclusion
In summary, we developed a novel computational method, NucHMM, for identifying cell type-specific nucleosome states. With NucHMM, we identified 11 distinct functional nucleosome states for MCF7, H1, and IMR90 cell types. We further demonstrated that these functional nucleosome states can be used to quantitatively determine the splicing potentiality of SEs. Our work advances our understanding of chromatin function at the nucleosome level and further offers mechanistic insight into the interplay between nucleosome organization and splicing process.
NucHMM initialization
To remove background noise and decrease the false positive rate of the called nucleosome positions and histone mark peaks, we performed quality control (QC) for both the MNase-seq and ChIP-seq data using trim-galore [67]. We used bowtie or bowtie2 to uniquely map the reads to the human HG19 reference genome. For the MNase-seq data, we used Deeptools [68] to keep fragments within the range of 130-180 bp, because the length of the nucleosome-wrapped DNA plus the linker histone falls within this range. We applied iNPS, which smooths the MNase-seq wave profile with a Laplacian of Gaussian convolution to detect the borders of the nucleosome peaks, and then uses a Poisson approximation filtering process to locate the final nucleosomes. We used MACS2 to identify narrow peaks for the ChIP-seq of H3K4me1, H3K4me3, and H3K27ac, but used EPIC2 to identify broad peaks for the ChIP-seq of H3K9me3, H3K27me3, H3K36me3, and H3K79me2, with parameters -bin 100, -fdr 0.05, and -g 2 (or -g 5).
The entire genome was then binned based on the detected nucleosomes. An alphabet of 128 (2^7) observation notations was built by enumerating each possible combination of marks (Additional file 1: Table S5), including no marks. For example, observation 9 (0b0001001) corresponds to the presence of H3K4me3 (1 = 0b0000001) and H3K79me2 (8 = 0b0001000) and the absence of all other marks. We then assigned the converted notations to the bins based on the degree of overlap between each histone mark's peak and the nucleosome position. We limited the trained genomic region to the range from −100 kb upstream of the 5TSS (Upstream-TSS), over the gene body (Gene-body), to +10 kb downstream of the transcription terminal site (TTS) (Downstream-TTS) (Additional file 1: Suppl. Notes), and compiled a set of 19,189 protein-coding genes with unique 5′TSSs from the UCSC RefSeq Genes.
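The bit encoding described above can be illustrated with a short Python sketch; the mark order and function names below are our own illustrative choices, although the H3K4me3/H3K79me2 example reproduces the observation-9 case from the text.

```python
# Illustrative sketch of the 2^n observation alphabet: each of the n
# histone marks occupies one bit, and a nucleosome bin's observation
# symbol is the bitwise OR of the marks overlapping it.
MARKS = ["H3K4me3", "H3K4me1", "H3K27ac", "H3K79me2",
         "H3K36me3", "H3K9me3", "H3K27me3"]  # assumed bit order
BIT = {mark: 1 << k for k, mark in enumerate(MARKS)}

def encode(marks_present):
    """Map the set of marks on one nucleosome bin to its observation symbol."""
    symbol = 0
    for mark in marks_present:
        symbol |= BIT[mark]
    return symbol

def decode(symbol):
    """Recover the set of marks from an observation symbol."""
    return [m for m in MARKS if symbol & BIT[m]]

# Observation 9 = 0b0001001: H3K4me3 (bit 0) plus H3K79me2 (bit 3).
assert encode(["H3K4me3", "H3K79me2"]) == 9
print(decode(9))  # ['H3K4me3', 'H3K79me2']
```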
NucHMM training
NucHMM training includes two rounds of an HMM learning process. In the first round, we empirically chose numbers of initial states ranging from 15 to 25 and ran five first-order HMMs for each of them. Each HMM was trained for 300 iterations, using the Baum-Welch algorithm [69], to ensure convergence. We then selected the HMM with the lowest Bayesian Information Criterion (BIC) score. Before the second round of training, we removed those states with fewer than 0.5% of the total nucleosomes in the model from the transition probability and emission probability matrices. To simplify the HMM and maximize the descriptive power of its states, we used the modified transition probability and emission probability matrices for the second HMM learning process. The resulting HMM was trained with the Baum-Welch algorithm (Additional file 1: Suppl. Notes) for another 200 iterations to achieve the final HMM. The log-likelihood of the HMM after each iteration was calculated to ensure convergence to a local optimum. We found that 200 iterations were sufficient for this second-round HMM to approach convergence. The Viterbi algorithm was applied to decode the HMM state of each nucleosome (Additional file 1: Suppl. Notes). The probability of an individual histone mark was calculated by marginalization over all output combinations of mark probabilities. The individual emission probability follows

$$p_s(m_k) = \sum_{x=0}^{2^n - 1} e_s(x)\, \mathbf{1}\{x \,\&\, 2^{k} \ne 0\},$$

where $n$ is the number of histone marks, $\&$ is the bitwise AND operator, $x$ is the output number, $e_s(x)$ is the emission probability of output $x$ in state $s$, and $2^{k}$ is the bit assigned to mark $m_k$.
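A minimal sketch of this marginalization, assuming an emission matrix with one row per state over the 2^n observation symbols (the variable names are ours), is as follows.

```python
import numpy as np

def mark_state_matrix(emission, n_marks):
    """Marginalize per-symbol emission probabilities into per-mark
    probabilities: P(mark k | state s) = sum of e_s(x) over all
    symbols x whose k-th bit is set."""
    n_states, n_symbols = emission.shape
    assert n_symbols == 2 ** n_marks
    marks = np.zeros((n_states, n_marks))
    for k in range(n_marks):
        mask = np.array([(x >> k) & 1 for x in range(n_symbols)], bool)
        marks[:, k] = emission[:, mask].sum(axis=1)
    return marks

# Toy example: 2 states, 3 marks (8 symbols); each row sums to 1.
rng = np.random.default_rng(0)
e = rng.dirichlet(np.ones(8), size=2)
print(mark_state_matrix(e, 3))
```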
Nucleosome phasing and spacing
We first processed the Gaussian-smoothed nucleosome signals from iNPS into nucleosome state-specific nucleosome array signals. We then averaged the nucleosome array signals as the sum of all nucleosome array signals within the nucleosome state divided by the number of nucleosome arrays. As the resolution of the Gaussian-smoothed signal in the iNPS result is 10 bp/point, the initial sample rate of the nucleosome state array signal is 100 (10 bp/point). In order to preserve fidelity and more precisely convert the signals from the genome domain to the frequency domain, we first interpolated the signals to increase the sample rate to 1000 (1 bp/point), and then implemented Welch's method [70] to make the conversion based on periodogram spectrum estimates, which were used to calculate the nucleosome phasing score.
For a detailed implementation, we first used the Hanning window function $w(n)$ to divide the nucleosome state array signal $x$ into $K$ available frames with $M$ points in each frame. Each frame is represented by

$$x_m(n) = w(n)\, x(n + mR), \qquad n = 0, \ldots, M - 1, \quad m = 0, \ldots, K - 1,$$

where $R$ is the window hop size. Then, the periodogram of the $m$-th frame is given by

$$P_m(f) = \frac{1}{M} \left| \sum_{n=0}^{M-1} x_m(n)\, e^{-2\pi i f n} \right|^2.$$

We then averaged the periodograms across the genome. The Welch estimate of the power spectral density is given by

$$\hat{S}(f) = \frac{1}{K} \sum_{m=0}^{K-1} P_m(f).$$

The simplified conversion equation between the genome domain and the frequency domain is given by

$$d = \frac{f_s}{f},$$

where $f_s$ is the sample rate of the signal and $d$ is the corresponding period in base pairs. We finally focused on the power spectral density within the frequency range 4-10 Hz, which corresponds to the genome domain range 100-250 bp. We used the highest spectral density value of each nucleosome state in this window, multiplied by 1000, as the nucleosome phasing score.
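In practice this computation maps directly onto scipy.signal.welch; the following sketch (our own, with hypothetical parameter choices such as the 512-point segment length) computes a phasing score in the spirit described above.

```python
import numpy as np
from scipy.signal import welch

def phasing_score(signal, fs=1000, lo=4.0, hi=10.0):
    """Phasing score: peak Welch power spectral density in the band
    corresponding to 100-250 bp periods (4-10 Hz at fs = 1000),
    scaled by 1000.

    signal: averaged nucleosome-array coverage, interpolated to 1 bp/point.
    """
    f, pxx = welch(signal, fs=fs, window="hann",
                   nperseg=512, noverlap=256)  # segment size is our choice
    band = (f >= lo) & (f <= hi)
    return 1000.0 * pxx[band].max()

# Toy array: a ~185-bp periodic coverage signal with noise.
rng = np.random.default_rng(0)
bp = np.arange(1000)
cov = 1.0 + np.sin(2 * np.pi * bp / 185.0) + rng.normal(0, 0.2, bp.size)
print(phasing_score(cov))
```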
The calculation of the nucleosome spacing value utilizes the distribution of a nucleosome state-specific nucleosome array. We computed all local maxima of the array distribution by the following two rules: (1) for sharp peaks, the local maximum is defined as any sample point whose two direct neighbors have a smaller amplitude, and (2) for flat peaks, the index of the middle point is considered the local maximum. We then calculated the average distance between the maxima of the peaks as the nucleosome spacing value. To determine the spacing range for a nucleosome within a NucS-specific nucleosome array, we used Eq. 6, where $Spacing_{NucS}$ is the average nucleosome spacing of a NucS; $Interval$ is the order of the nucleosome minus one, e.g., for the second nucleosome in the array its $Interval$ is one; $Rank$ refers to the rank of each of the 11 NucSs based on their phasing scores; and $Coef_{range}$ is a user-defined parameter used to adjust the range, with 1 bp as the default.
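The local-maximum rules above correspond closely to what scipy.signal.find_peaks implements (its plateau handling returns the plateau midpoint, matching rule 2); the short sketch below, with our own function name, derives a spacing value this way.

```python
import numpy as np
from scipy.signal import find_peaks

def spacing_value(signal):
    """Average distance (in bp, at 1 bp/point) between adjacent local
    maxima of a NucS-specific averaged nucleosome array signal.
    Real coverage signals would typically need smoothing first."""
    peaks, _ = find_peaks(signal)  # plateau midpoints handle flat peaks
    if len(peaks) < 2:
        return np.nan
    return float(np.diff(peaks).mean())

bp = np.arange(1000)
cov = 1.0 + np.sin(2 * np.pi * bp / 190.0)  # noiseless toy: ~190 bp spacing
print(spacing_value(cov))                    # close to 190
```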
Nucleosome positioning
We used two inter-correlated approaches, the "raw reads" reference approach and the "iNPS-derived" approach, to determine the nucleosome positioning (NP) score. In both methods, we defined well-positioned nucleosomes to have a higher positioning score than fuzzy nucleosomes. Both approaches follow the idea that nucleosome positioning is the geometric mean of nucleosome fuzziness and nucleosome occupancy. In the "raw-reads reference" approach, we measured three features: (1) the standard deviation of the raw reads, (2) the enrichment of the raw reads, and (3) the full width at half maximum of the read peak. This approach is described by Eq. 7, where $t \in \{1, 2, \cdots, T\}$ indexes the nucleosome population set, and $norm$ represents interquartile range normalization.
For example, the numerator should be relatively small for a fuzzy nucleosome, while the denominator should be large, making the nucleosome positioning score small. In the "iNPS-derived" approach, we first empirically created nine equations based on the iNPS results to calculate the nucleosome positioning. We then used the Pearson correlation method to determine which equation has the highest correlation with the "raw-reads reference" approach. The final equation, Eq. 8, combines $height$, $width$, $area$, $pval_{peak}$, and $pval_{valley}$, which are all obtained from iNPS. Generally, the numerator in Eq. 8 reflects the occupancy measurements and the denominator reflects the fuzziness measurements. Besides, we noticed that $pval_{valley}$ is abnormally high at the end of the nucleosome array regardless of the shape of the real nucleosome. Thus, we manually replaced the $pval_{valley}$ of the last nucleosome in each array with the median value of the whole $pval_{valley}$ set. All elements in Eq. 8 are also subject to interquartile range normalization.
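While the exact forms of Eqs. 7-8 are not reproduced here, the shared ingredients (interquartile-range normalization and an occupancy-over-fuzziness ratio) can be sketched as follows; the specific ratio below is a generic illustration under our own assumptions, not the paper's exact Eq. 8.

```python
import numpy as np

def iqr_norm(values):
    """Interquartile range normalization across the nucleosome population."""
    values = np.asarray(values, float)
    q1, q3 = np.percentile(values, [25, 75])
    return (values - np.median(values)) / (q3 - q1)

def positioning_score(occupancy, fuzziness):
    """Generic occupancy-over-fuzziness positioning score: well-positioned
    nucleosomes (high occupancy, low fuzziness) score higher.
    IQR-normalized terms are shifted positive before taking the ratio."""
    occ = iqr_norm(occupancy) - iqr_norm(occupancy).min() + 1.0
    fuz = iqr_norm(fuzziness) - iqr_norm(fuzziness).min() + 1.0
    return occ / fuz

rng = np.random.default_rng(2)
occupancy = rng.gamma(5.0, 2.0, 1000)  # e.g., read enrichment per nucleosome
fuzziness = rng.gamma(3.0, 1.5, 1000)  # e.g., read standard deviation
print(positioning_score(occupancy, fuzziness)[:5])
```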
Splicing potentiality of SE
We assessed the splicing potentiality of SEs associated with each of the four NucSs with the H3K79me2 mark by measuring the difference in nucleosome organization between the reliable SE group and the unreliable SE group. We first used MISO [71] to identify the potential SE events. The reliable SE events result from applying two rules to the identified potential SE events. Rule 1: X + Y ≥ N and Y ≥ 1, where X and Y are integer counts corresponding to the number of reads in each of the categories (1,0):X and (0,1):Y. Class (1,0) reads are consistent with the first isoform in the annotation but not the second, while class (0,1) reads are consistent with the second but not the first. N is a cutoff value derived from the X + Y frequency distribution. Rule 2: CI-width > median of CI-width, where CI denotes the confidence intervals output by MISO for each estimate of Ψ. The rest of the potential SE events are then defined as the unreliable SE group. We then extracted the nucleosome distributions from the iNPS results based on the coordinates of the rSE and urSE groups. The difference in nucleosome organization between the rSE and urSE groups was then measured by the Fréchet distance [72] and the nucleosome positioning population (Additional file 1: Suppl. Notes - the pseudocode for calculating the Fréchet distance). The splicing potentiality of SE (SPSE) is calculated from the following three terms (Eq. 9), where $norm$ means scaling the results to the range [0, 1], $frdist$ is the Fréchet distance, $abs$ is the absolute value function, $diff_{nucpos}$ represents the difference of the median values of the NucS rSE and urSE groups, and $coef_{event-counts}$ is the normalized event count coefficient.
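As the paper points to pseudocode for the Fréchet distance, a standard dynamic-programming implementation of the discrete Fréchet distance (our own illustrative code, following the classic Eiter-Mannila formulation) is sketched below for two sampled nucleosome array curves.

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two sampled curves p and q,
    each an array of shape (n_points, dims), via dynamic programming."""
    n, m = len(p), len(q)
    d = np.full((n, m), -1.0)
    d[0, 0] = np.linalg.norm(p[0] - q[0])
    for i in range(1, n):
        d[i, 0] = max(d[i - 1, 0], np.linalg.norm(p[i] - q[0]))
    for j in range(1, m):
        d[0, j] = max(d[0, j - 1], np.linalg.norm(p[0] - q[j]))
    for i in range(1, n):
        for j in range(1, m):
            step = min(d[i - 1, j], d[i - 1, j - 1], d[i, j - 1])
            d[i, j] = max(step, np.linalg.norm(p[i] - q[j]))
    return d[n - 1, m - 1]

# Toy: compare averaged rSE vs urSE nucleosome array signals as 2-D curves.
x = np.linspace(0, 1000, 200)
rse = np.column_stack([x, np.sin(2 * np.pi * x / 185.0)])
urse = np.column_stack([x, 0.6 * np.sin(2 * np.pi * x / 205.0)])
print(discrete_frechet(rse, urse))
```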
More specifically, the Fréchet distance term, norm(frdist), measures the difference of the averaged NucS array signal (containing both nucleosome spacing and phasing measurements) between the rSE and urSE groups (Additional file 1: Fig. S17). The term norm(abs(diff_nucpos)) measures the difference in nucleosome positioning between the rSE and urSE groups (Additional file 1: Fig. S18). A larger Fréchet distance and positioning difference imply a higher SE potentiality of the NucS. The last term, coef_{event-counts} = norm(number of NucS_rSE / number of NucS), measures the 'abundance' of the NucS in the rSE group. | 2021-08-28T06:17:22.542Z | 2021-08-26T00:00:00.000 | {
"year": 2021,
"sha1": "48dd3f97c26483582703cdb188cf531c64d08046",
"oa_license": "CCBY",
"oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/s13059-021-02465-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8d296bcbd1a422ee9588a2c3e6dab37bc301021",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222146719 | pes2o/s2orc | v3-fos-license | The pMy vector series: A versatile cloning platform for the recombinant production of mycobacterial proteins in Mycobacterium smegmatis
Abstract Structural and biophysical characterization of molecular mechanisms of disease-causing pathogens, such as Mycobacterium tuberculosis, often requires recombinant expression of large amounts of highly pure protein. For the production of mycobacterial proteins, overexpression in the fast-growing and non-pathogenic species Mycobacterium smegmatis has several benefits over the standard Escherichia coli expression strains. However, unlike for E. coli, the range of expression vectors currently available is limited. Here we describe the development of the pMy vector series, a set of expression plasmids for recombinant production of single proteins and protein complexes in M. smegmatis. By incorporating an alternative selection marker, we show that these plasmids can also be used for co-expression studies. All vectors in the pMy vector series are available in the Addgene repository (www.addgene.com).
| INTRODUCTION
The Gram-positive genus Mycobacterium includes several human pathogens, including Mycobacterium tuberculosis (Mtb). Mtb is listed by the World Health Organization as the leading cause of death from an infectious agent and led to 1.5 million deaths in 2019 alone. 1 The increase in multi-drug resistant strains of Mtb remains a public health crisis and the need for novel antibiotic therapies to treat Mtb is a priority. The growing importance of Mtb and other mycobacterial pathogens to human health has led to an intensive effort from several structural biology consortia to investigate the structure and function of mycobacterial proteins. 2,3 Despite these efforts, currently only approximately 15% of the Mtb proteome has been structurally characterized, 4 in part due to the challenge of Mtb protein production.
The production of large amounts of highly pure, properly folded and functional protein remains a bottleneck in the structural biology pipeline. Escherichia coli is typically the standard expression host for protein production and there are a variety of modified strains available that are optimized for tackling challenging proteins. 5 However, studies have shown that for the production of mycobacterial proteins, expression in standard E. coli strains is only successful in one third of cases. 6,7 There are several factors that may limit the suitability of E. coli for the production of mycobacterial proteins, such as the mismatch in codon usage between mycobacterial genes and the E. coli translation machinery caused by the higher GC bias in Mtb genes. 8,9 In addition, the absence of key cofactors, post-translational modifications and chaperones in E. coli may further impede the production of mycobacterial proteins. 10 Owing to these deficiencies, many groups have turned to the fast-growing, non-pathogenic mycobacterial expression host, Mycobacterium smegmatis. The benefits of using M. smegmatis as an expression host, leading to improved yield, solubility and functionality of purified proteins, have been reported in several studies. 7,11,12 Despite the advantages of mycobacterial protein production using M. smegmatis, the limited range of genetic tools compared to those available for E. coli has restricted the widespread use of this expression host. However, several groups continue to develop expression strains and optimize expression vectors with different features such as induction system, promoter strength and purification tags. 6,13-17 There are two principal M. smegmatis strains used for protein expression, M. smegmatis mc 2 4517 17 and mc 2 155 groEL1ΔC. 16 M. smegmatis mc 2 4517 has been modified to allow the expression of T7-promoter based systems following the incorporation of the bacteriophage T7 RNA polymerase. 17 Several vector systems have been developed for use in this strain, including pYUB1062 and pYUB1049. 17 Further modified versions allow for a choice of N- or C-terminal hexahistidine (His 6 ) tag positioning (pYUB28b), 18 expression of a GFP fusion protein (pYUB1062-GFP) 19 or the co-expression of two protein targets (pYUBDuet). 18 The mc 2 155 groEL1ΔC strain has been modified to reduce the co-purification of the GroEL1 chaperone protein following the deletion of its histidine-rich C-terminus 16 and can be used for expression of vectors carrying conditional promoter systems, such as acetamidase, 20 tetracycline 21 or arabinose. 22 The acetamidase promoter of M. smegmatis can be induced by the addition of acetamide, and several vector systems utilize this promoter to drive protein expression, including the pSD 15 and pMyNT/pMyC vectors. 23 Several modified variants of these vectors exist, including the pMyCA vector, which contains a minimized acetamidase promoter. 13,14 Induction of the acetamidase promoter leads to a high level of protein expression, which in the case of toxic protein production may not be desirable. In E. coli, the arabinose-inducible promoter (P BAD ) enables tightly controlled and tunable gene expression. 24 The arabinose promoter system is currently not widely used in M. smegmatis, but the tunability of this promoter prompted us to further explore its use for protein expression.

Abbreviations: GFP, green fluorescent protein; His 6 tag, hexahistidine tag; hyg, hygromycin; IMAC, immobilized metal affinity chromatography; kan, kanamycin; SLiCE, seamless ligation cloning extract; TEV, tobacco etch virus.
The aim of this work was to further expand the versatility of the pMyNT and pMyC vectors, which have been successfully used both for the production of single soluble proteins and for protein complexes expressed from a single operon. 16,23,25 The pMyNT and pMyC vectors are shuttle vectors which can be propagated in E. coli cells for ease of manipulation, due to the presence of the OriE and OriM origins of replication (Ori), which are used for replication in E. coli and M. smegmatis, respectively. 23 Both vectors encode a His 6 tag at the N-terminus (pMyNT) or at the C-terminus (pMyC) and a hygromycin resistance marker for selection. Here we describe the modification of the pMyNT and pMyC vectors to generate variants with an alternative selection marker and an arabinose-inducible promoter, resulting in the "pMy vector series". Using fluorescent reporter proteins, we show that the pMy vectors can be used for the overexpression of a single protein and, in combination, for the production of multiple targets. In addition, we demonstrate the tunability of the P BAD arabinose-based promoter, which may prove advantageous for the production of toxic proteins. The pMy vector series has been deposited with Addgene (www.addgene.com).
| Construction of pMy vectors
To expand the existing repertoire of M. smegmatis expression vectors and create more tools for the production of mycobacterial proteins, we created variants of the existing pMyNT and pMyC vectors, which were previously generated by our group. 23 First, we exchanged the acetamidase promoter present in both pMyNT and pMyC for the arabinose-inducible promoter from the pBAD vector (P BAD ), 24 with the aim of producing a vector with tunable expression. Using Gibson cloning methods, the linearized fragments of the pMyC and pMyNT vectors without the acetamidase promoter were ligated with the P BAD arabinose promoter, producing the arabinose-inducible, hygromycin-resistant vectors with an N-terminal or C-terminal His 6 tag, pMyBADNT and pMyBADC, respectively. In addition, we extended the co-compatibility of the pMy vectors by including the kanR gene, which is widely used in other M. smegmatis vectors. 13 Utilizing Gibson cloning approaches again, vector backbones of pMyNT, pMyC, pMyBADNT, and pMyBADC were amplified to omit the hygromycin resistance cassette; these backbone fragments were then ligated with the kanamycin resistance cassette, thus generating pMyNT kan, pMyC kan, pMyBADNT kan, and pMyBADC kan. An overview of the pMy vectors and their respective properties is outlined in Figure 1. All of the vectors produced in this study have been made available on Addgene (www.addgene.org), with their catalog numbers listed in Figure 1d.
As in the original pMyNT vector, the His 6 tag of all pMy vectors with N-terminal tags is followed by a tobacco etch virus (TEV) cleavage site, allowing removal of the tag following immobilized metal affinity chromatography (IMAC). The pMy vectors derived from the original pMyC vector with a C-terminal His 6 tag do not include the TEV cleavage site, because following cleavage with TEV protease five additional amino acids from the TEV recognition site would remain. 26 The pMy vectors all contain the unique restriction sites NcoI and HindIII that can be used for linearizing the plasmids for ligation with a gene of interest using either restriction enzyme (RE)-based cloning methods or ligation-independent cloning methods (Figure 1b,c). However, the limited number of unique restriction sites in the pMy vectors restricts the use of RE-based cloning. Therefore, we primarily use ligation-independent cloning methods such as seamless ligation cloning extract (SLiCE) or Gibson assembly for cloning genes into the pMy vectors. The recommended primer extension sequences for use with these methods for each of the pMy vectors are listed in Table S2.
| pMy vectors provide inducible protein expression in M. smegmatis
The level of protein expression from each of the pMy plasmid variants was evaluated using green fluorescent protein (GFP), which has been successfully used to monitor protein expression levels in other M. smegmatis vector systems. 19 The GFP gene was amplified using the primers listed in Table S3 and ligated into each of the pMy vectors using SLiCE. 27 For protein production, we routinely use the M. smegmatis mc 2 155 groEL1ΔC 16 strain that has been optimized for purification of proteins by IMAC methods, and we therefore tested the activity of the vectors in this strain. However, as the pMyNT and pMyC vectors are compatible with other M. smegmatis strains, 28 it is likely that the new pMy variants produced in this study will also be compatible, as the vector backbone is largely unchanged.
M. smegmatis cultures were grown to an OD 600 of 1 in 7H9 expression medium before induction with either 1% acetamide or 1% arabinose. A concentration of 1% of the inducer molecule was chosen because higher concentrations of acetamide lead to increased cell aggregation, reducing the accuracy of the fluorescence measurements. The amount of GFP produced in whole cells was measured using a plate reader following an 18-hour induction with either arabinose or acetamide, as appropriate (Figure 2). The highest level of GFP expression was detected from the pMyNT and pMyNT kan vectors, which encode the acetamidase promoter and have the His 6 tag positioned at the N-terminus. In comparison to the pMyC and pMyC kan vectors (acetamidase promoter, C-terminal His 6 tag), the pMyNT and pMyNT kan vectors displayed an approximately three-fold higher level of GFP fluorescence. This difference was also observed for pMy vectors with arabinose-inducible promoters when comparing GFP expression between the vectors with an N-terminal His 6 tag (pMyBADNT and pMyBADNT kan) and the C-terminal His 6 tag (pMyBADC and pMyBADC kan). This difference was significant for all vector variants (p = .0001) and suggests that the N-terminal position of the His 6 tag leads to more effective translation, which has been similarly observed in other systems. 29 When comparing the pMy vectors with the same affinity tag position (e.g., pMyNT vs pMyBADNT), the acetamidase-based vectors produce a significantly higher amount of GFP (p < .001). Additionally, there was no significant difference in the level of protein production between the hygromycin- and kanamycin-resistant variants, indicating that the choice of selection marker did not impact protein production.
As protein overexpression can be toxic to the host cell, an uninduced sample was included to monitor the level of background or "leaky" expression from the vectors. For all vectors, the level of background expression was below 0.5% of the amount of GFP produced in the corresponding induced sample. Thus, even at high inducer concentrations, both the acetamidase promoter and the P BAD promoter appear to be tightly regulated in M. smegmatis mc 2 155 groEL1ΔC.
| pMyBAD vectors provide tunable protein expression
To investigate the tunability of the acetamidase (pMyNT) and P BAD (pMyBADNT) promoters in M. smegmatis, we followed the expression of GFP over time using a range of inducer concentrations (Figure 3). At the concentrations tested, induction of the acetamidase promoter leads to a rapid increase in GFP production, which does not appear to be dependent on the inducer concentration. While the highest concentration of acetamide used was 1%, decreasing the acetamide levels did not significantly reduce the level of GFP expression at the concentrations used in this study (Figure 3a). In contrast, increasing the concentration of arabinose proportionally increased the level of GFP expression from the pMyBADNT vector (Figure 3b). At the final 24 hr time point, the level of GFP produced at the different arabinose concentrations was significantly different (p < .001). Together these results indicate that the P BAD promoter is more tightly regulated than the acetamidase promoter in M. smegmatis. The tunability of the pBAD promoter in the pMy vectors could be exploited for the production of toxic proteins, where unregulated levels of protein expression may lead to cell death.

FIGURE 2. GFP expression using the pMy vectors in M. smegmatis. M. smegmatis cultures expressing pMy vectors encoding GFP2+ were induced at an OD 600nm of 1 with either 1% acetamide or arabinose, as appropriate. GFP expression was calculated as relative fluorescence units (RFU). All data were averaged from three independent samples at each time point. Samples were taken before and 24 hr after addition of inducer. Error bars depict the standard deviation of three independent experiments.
| pMy constructs can be combined in co-expression studies
One of the aims of generating the pMy vector series with different antibiotic selection markers was to facilitate co-expression studies. To test whether a combination of pMy vectors could successfully express multiple proteins, we monitored the expression of GFP and mCHERRY simultaneously by using the corresponding excitation and emission wavelengths for each protein. mCHERRY was cloned into pMyNT kan and pMyBAD kan using the primers listed in Table S3 with SLiCE cloning methods as described above. Different combinations of pMy plasmids encoding either GFP or mCHERRY were co-transformed into M. smegmatis mc 2 155 groEL1ΔC by electroporation, and co-transformants were selected using hygromycin and kanamycin. To test the level of co-expression from two plasmids carrying the acetamidase promoter, pMyNT-GFP and pMyNT kan -mCHERRY were co-transformed (Figure 4a). Similarly, to test co-expression from two pMy plasmids carrying the arabinose promoter system, pMyBAD-GFP was combined with pMyBAD kan -mCHERRY (Figure 4b). Finally, pMyBAD-GFP was combined with pMyNT kan -mCHERRY to test co-expression from the two different promoter systems (Figure 4c). For all combinations, fluorescent protein expression was monitored 18 hr following induction with 1% (v/v) acetamide and/or arabinose, as appropriate. For all vector combinations, the production of GFP and mCHERRY increased after induction, showing that co-expression from two independent pMy vectors is possible. To compare the protein amounts produced during a co-expression experiment to the production from a single vector, the amount of GFP and mCHERRY produced during co-expression is shown relative to the amount produced from the expression of the single protein from the corresponding vector. For example, the amount of GFP produced in the co-expression of pMyNT-GFP and pMyNT kan -mCHERRY (Figure 4a) has been normalized to the amount of GFP produced by expressing pMyNT-GFP alone under the same conditions. Based on the normalized RFU readings, the amount of GFP or mCHERRY produced in a co-expression experiment is reduced by approximately 35%-55% compared to single expression. There was no significant difference between the levels of GFP or mCHERRY produced by the different vector combinations, indicating that none of the pMy plasmids was expressed preferentially over the others. In summary, the expression of the two fluorescent proteins from independent vectors demonstrates that pMy vectors encoding different antibiotic resistance markers and promoter systems can be combined for co-expression studies.
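The co-expression comparison reduces to a ratio of replicate fluorescence means plus the Student's t-test named in the methods. The sketch below is our illustration with made-up replicate numbers, not data from the study.

```python
import numpy as np
from scipy import stats

def relative_expression(co_rfu, single_rfu):
    """Co-expression RFU relative to the matching single-vector control.

    co_rfu / single_rfu: replicate fluorescence readings for the same
    reporter, inducer and time point. Returns the normalized mean and the
    p-value of a two-sample Student's t-test.
    """
    co = np.asarray(co_rfu, dtype=float)
    single = np.asarray(single_rfu, dtype=float)
    return co.mean() / single.mean(), stats.ttest_ind(co, single).pvalue

# Hypothetical triplicate readings (RFU); real values come from the plate reader.
print(relative_expression([5200, 4800, 5100], [9800, 10100, 9600]))
```

With numbers like these the ratio is roughly 0.5, in line with the 35%-55% reduction reported above.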
| DISCUSSION
The production of mycobacterial proteins for structural, functional and biochemical studies remains an important step in the drug discovery pipeline. The production of mycobacterial proteins in M. smegmatis is becoming more common owing to the advantages of using a more native expression host over the traditional E. coli strains. 7,11,12,25 The aim of this work was to further expand the tools available for recombinant protein expression in M. smegmatis. The pMy vector series derives from the pMyNT and pMyC vectors that have been used in the mycobacterial field over the past decade. 23,30,31 Induction of protein expression from the acetamidase promoter is an established system in M. smegmatis and leads to high levels of target protein expression. 6 However, when producing proteins that are toxic to the cell, such as membrane proteins, toxins or DNA-binding proteins, it can often be advantageous to regulate the level of protein overexpression. In E. coli, the arabinose-inducible promoter system, which includes the pBAD promoter and the AraC regulator, has been successfully exploited to tightly regulate protein expression, and its activity in M. smegmatis has also been demonstrated. 22 It is for this reason that we introduced the arabinose promoter system into the pMy vectors. By comparing the induction of GFP expression from both the acetamidase promoter (pMyNT) and the arabinose-inducible promoter (pMyBADNT), we showed that the concentration of arabinose used for induction correlates with the level of GFP expression. This is in contrast to induction of the acetamidase promoter, where all concentrations tested resulted in a similar level of GFP fluorescence. The tunability of the arabinose promoter in M. smegmatis could provide a useful alternative to the acetamidase promoter for the production of toxic proteins and allow better control of the production of protein complexes, where the correct stoichiometry of the different components may be essential for solubility. To test the overall performance of the pMy vectors, we examined the level of GFP production from each of the vectors in the pMy series following induction with either acetamide, arabinose, or no inducer as a control. As expected, the acetamidase-based vectors (pMyNT, pMyNT kan, pMyC and pMyC kan) produced higher amounts of GFP compared to the arabinose-inducible vectors (pMyBADNT, pMyBADNT kan, pMyBADC and pMyBADC kan). The significant reduction in the amount of GFP produced from the C-terminally tagged vectors was unexpected, but has been observed for other expression systems, which show increased production of target proteins with N-terminal fusions. 29 Recent work by Vergara et al. produced a modified pMyC variant with a reduced acetamidase regulon, which increased protein production, 14 although whether this change results in a similar level of expression as from the pMyNT vectors was not tested in that study.
Using high concentrations of inducer, the pMyBAD vectors produced lower quantities of GFP compared to the acetamide-based vectors. This correlates with previous reports of the low level of activity of the arabinose-inducible promoter system in M. smegmatis. 22 However, owing to the tunability of this promoter, we propose that the pMyBAD vectors would be beneficial in cases where the target protein is toxic, as even low concentrations of acetamide lead to high levels of protein expression. Under these expression conditions, both the acetamidase- and arabinose-inducible systems show a low level of background expression when comparing induced with uninduced samples. The low level of "leaky expression" contrasts with previous observations on acetamide-inducible systems using, for example, the pYUB1062 19 vectors in the M. smegmatis strain mc 2 4517. This may reflect differences in the plasmid, M. smegmatis strain or growth media conditions.
In addition to creating expression vectors where the level of recombinant protein production can be tightly regulated, as for the pMyBAD vectors, we also aimed to facilitate co-expression studies by introducing an alternative selection marker, kanamycin. The kanamycin-resistant pMy variants show the same expression levels as the hygromycin-resistant variants, and as kanamycin is significantly cheaper than hygromycin, it may be more accessible to some laboratories. Here we show that the co-expression of GFP and mCHERRY from combinations of pMy vectors with different promoter systems is possible. The combination of the tightly regulated pBAD promoter with the highly expressed acetamidase promoter may provide a useful strategy for the expression of toxin-antitoxin systems, where the cytotoxic protein component can be expressed at a lower level than its antitoxin counterpart, 32 although this has not been tested as part of this work.
In summary, this work contributes additional tools for the production of recombinant proteins in M. smegmatis. As more tools become available, we hope to see further development of specific M. smegmatis expression strains for the production of recombinant proteins, 16 as exemplified by the C41 strain for membrane protein production in E. coli (Lucigen). The pMy vector series could be further modified to include different purification tags to expand the applications of these vectors for the production of multi-protein complexes. The production of mycobacterial proteins for biochemical, biophysical and structural studies remains a key step in the search for novel anti-mycobacterial therapies. We anticipate that the expanded pMy vector series will be a useful tool for the community.
| Construction of pMy vectors
The expanded pMy vector series is based on the previously described pMyNT and pMyC vectors 23 which are derived from pSD31. 15 The sequences of the primers used in the construction of the pMy vectors are listed in Table S1. PCR amplification was performed with Q5® High-Fidelity DNA Polymerase according to the manufacturer's instructions (New England Biolabs) and DNA fragments were purified with the Wizard SV Gel and PCR Clean-up System (Promega).
The different pMy vectors were created using Gibson assembly methods to generate variants encoding kanamycin resistance or the P BAD promoter. The acetamidase promoter, composed of the amiC, amiA, amiD and amiS genes encoded in the pMyNT and pMyC vectors, was replaced by the arabinose promoter (araC gene) from the pBAD/His vector (Thermo Fisher) using Gibson cloning, producing pMyBADNT and pMyBADC. The parent plasmids pMyNT and pMyC were linearized by PCR to generate the "vector" fragment using the primers listed in Table S1. The P BAD promoter was amplified by PCR from pBAD/His (Thermo Fisher) using the corresponding "insert" primers listed in Table S1. The PCR products were purified and ligated using Gibson Assembly Master Mix (New England Biolabs) as per the manufacturer's instructions, using the recommended vector amount of 50 ng and a two-fold molar excess of insert. Ligation mixtures were transformed into E. coli DH5α-T1 R (Life Technologies) by heat-shock transformation, plated onto LB-agar plates containing hygromycin, and incubated at 37°C overnight. The resultant plasmids, pMyBADNT and pMyBADC, were sequence verified (Eurofins Genomics).
To make kanamycin-resistant variants of pMyNT, pMyC, pMyBADNT, and pMyBADC, the hygromycin B (hygR) resistance marker was exchanged for the kanR marker. The kanR gene was amplified by PCR using the pMV306 vector 33 as template with the "insert" primers listed in Table S1 for each of the corresponding pMy vectors. Linear fragments of the pMyNT, pMyC, pMyBADNT, and pMyBADC vectors without the hygR gene were generated by PCR amplification using the "vector" primers listed in Table S1. Each of the vector backbone fragments was ligated with the complementary kanR product using the Gibson Assembly Master Mix (New England Biolabs) as described above. These reactions resulted in the pMyNT kan, pMyC kan, pMyBADNT kan, and pMyBADC kan plasmids, which were all sequence verified (Eurofins Genomics). The pMy plasmids generated in this study, pMyNT kan, pMyC kan, pMyBADNT kan, pMyBADC kan, pMyBADNT, and pMyBADC, have been deposited on Addgene (www.addgene.com).
| Construction of pMy vectors encoding fluorescent reporter proteins
Expression constructs were generated using SLiCE cloning methods, with the SLiCE ligation mix prepared as described by Zhang et al. 27 Vectors were linearized by restriction enzyme digestion with NcoI/HindIII followed by dephosphorylation using Antarctic phosphatase (New England Biolabs). Genes encoding the green fluorescent protein (GFPm2+) and red fluorescent protein (mCHERRY3) were synthesized and provided by Genscript using the sequences shown in Figure S1. The genes were amplified by PCR using Q5® High-Fidelity DNA Polymerase (New England Biolabs). All primers used for the generation of plasmids used in this study are listed in Table S3. DNA fragments were purified with the Wizard SV Gel and PCR Clean-up System (Promega). SLiCE cloning reactions were performed using 50 ng linearized vector with a 5:1 (insert:vector) molar excess of purified insert. Ligation mixtures were transformed into E. coli DH5α-T1 R and transformants were selected on LB plates containing the appropriate antibiotic. Plasmid DNA was prepared using the QIAprep Spin Miniprep kit (Qiagen) and sequence-verified with vector-specific primers (AP-328, 5′-CGCAGTTGTTCTCGCATACC-3′ and pMyNT-rev, 5′-TGGATCTCTCCGGCTTCAC-3′) before transformation into M. smegmatis mc 2 155 groEL1ΔC. Electrocompetent M. smegmatis mc 2 155 groEL1ΔC were prepared as previously described. 16
| Monitoring the expression of fluorescent proteins expressed from pMy vectors in M. smegmatis
To monitor protein expression from the pMy vectors in M. smegmatis, the fluorescent reporter proteins GFP2+ and mCHERRY were used. Expression constructs outlined in Table S3 were transformed into electrocompetent M. smegmatis and selected on 7H10 agar supplemented with either kanamycin or hygromycin, as appropriate. In the case of co-expression studies, both plasmids were simultaneously transformed by electroporation into M. smegmatis and plated onto 7H10 agar plates supplemented with both kanamycin and hygromycin (35 μM and 94 μM, respectively). Transformants were confirmed using colony PCR.
M. smegmatis starter cultures were cultivated from freshly streaked colonies or glycerol stocks and grown for 3 days at 37°C with orbital shaking at 120 rpm. For small-scale expression studies, a 1% volume of a starter culture was used to inoculate 50 ml of 7H9 expression medium. Cultures were grown to an OD 600 of 1 and induced with varying concentrations of arabinose or acetamide, as indicated. The first time point (0) was taken at the point of induction, followed by several time points over the 24 hr after induction; at each time point, 200 μl of each culture was transferred from the 50 ml culture to a black FLUOTRAC flat-bottomed 96-well plate (Greiner Bio-One). The fluorescence of GFP or mCHERRY, or both in the case of co-expression studies, was measured using a TECAN Infinite M1000 plate reader. GFP fluorescence was measured at an excitation wavelength of 490 nm and an emission wavelength of 509 nm. For mCHERRY detection, an excitation wavelength of 587 nm and an emission wavelength of 610 nm were used. A gain value of 100 was used for all measurements. Statistical analysis of expression levels was performed using a Student's t-test. For the analysis of the co-expression of GFP and RFP, the values were normalized to the 18 hr time point level of fluorescence (RFU) for the respective reporter. | 2020-10-06T13:35:32.295Z | 2020-10-02T00:00:00.000 | {
"year": 2020,
"sha1": "824691ed5812d76511b84ffd30f47c5890ff2d22",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/pro.3962",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b884d46e2dfe2e97afb6541ebbe0b282b01d025a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
24968068 | pes2o/s2orc | v3-fos-license | Endoscopic sphincterotomy plus large-balloon dilation vs endoscopic sphincterotomy for choledocholithiasis: A meta-analysis
AIM: To perform a meta-analysis of large-balloon dilation (LBD) plus endoscopic sphincterotomy (EST) vs EST alone for removal of bile duct stones. METHODS: Databases including PubMed, EMBASE, the Cochrane Library, the Science Citation Index, and important meeting abstracts were searched and evaluated by two reviewers independently. The main outcome measures included: complete stone removal, stone removal in the first session, use of mechanical lithotripsy, procedure time, and procedure-related complications. A fixed-effects model weighted by the Mantel-Haenszel method was used for pooling the odds ratio (OR) when heterogeneity was not significant among the studies. When a Q test or I² statistic indicated substantial heterogeneity, a random-effects model weighted by the DerSimonian-Laird method was used. RESULTS: Six randomized controlled trials involving 835 patients were analyzed. There was no significant heterogeneity for most results; we analyzed these using a fixed-effects model. Meta-analysis showed EST plus LBD caused fewer overall complications than EST alone (OR = 0.53, 95%CI: 0.33-0.85, P = 0.008); subcategory analysis indicated a significantly lower risk of perforation in the EST plus LBD group (Peto OR = 0.14, 95%CI: 0.20-0.98, P = 0.05). Use of mechanical lithotripsy in the EST plus LBD group decreased significantly (OR = 0.26, 95%CI: 0.08-0.82, P = 0.02), especially in patients with a stone size larger than 15 mm (OR = 0.15, 95%CI: 0.03-0.68, P = 0.01). There were no significant differences between the two groups regarding complete stone removal, stone removal in the first session, post-endoscopic retrograde cholangiopancreatography pancreatitis, bleeding, infection of the biliary tract, and procedure time. CONCLUSION: EST plus LBD is an effective approach for the removal of large bile duct stones, causing fewer complications than EST alone.
INTRODUCTION
During endoscopic retrograde cholangiopancreatography (ERCP), endoscopic sphincterotomy (EST) or endoscopic papillary balloon dilation (EPBD) is the standard method of enlarging the papillary orifice before stone retrieval. However, the extent of orifice dilation with conventional EST or EPBD is limited [1-3], and the use of other methods such as mechanical lithotripsy, intraductal shockwave lithotripsy, extracorporeal shock-wave lithotripsy or, if those fail, biliary stent placement with repeated ERCP or even surgery may be required in patients with difficult (usually large) stones [1]. These methods are not widely available, and a larger opening of the orifice by large-balloon dilation (LBD) seems to be necessary. Ersoz et al [4] first reported the use of LBD after sphincterotomy for large common bile duct stones and achieved a high stone clearance rate of 89%-95% without mechanical lithotripsy. Since then, a number of case series have also suggested that the combination technique facilitated large stone extraction and reduced dependence on mechanical lithotripsy, contributing to higher stone clearance in a single endoscopic session with an acceptable risk of complications [5-9]. However, the comparison of EST plus LBD and EST alone for removal of choledocholithiasis has given inconsistent results.
To the best of our knowledge, the only systematic review on the topic has been published by Liu et al [10]. This review included non-randomized controlled trials (non-RCTs); two eligible abstracts [11,12], which were regarded as non-randomized in the review, were in fact randomized, as validated by contacting the authors. More recently, a well-arranged trial has been published and some conflicting results have emerged [13]. Therefore, we believe that an updated meta-analysis is required.
Search strategy
A literature search was performed to identify all relevant studies that compared EST plus LBD and EST alone for removal of bile duct stones. The PubMed, EMBASE, Cochrane Library databases, and the Science Citation Index were searched systematically for all articles published up to May 2013, without language restriction, using the following terms in their titles, abstracts, or keyword lists: "balloon dilation," "sphincteroplasty," "sphincterotomy," "bile duct stone," and "choledocholithiasis." The references in retrieved articles were also screened manually.
The abstracts of the United European Gastroenterology Week and Digestive Disease Week from 2004 to 2012 were also searched systematically. An attempt to contact the first author was made when information was not extractable from potentially eligible published abstracts.
Study selection
Papers selected from this initial search were then screened for eligibility using the following criteria: (1) RCTs that evaluated a comparison of EST plus LBD (balloon size larger than 12 mm) and EST alone in the removal of large common bile duct stones (larger than 10 mm in diameter); and (2) studies reporting the outcomes of interest, which included complete stone removal, use of mechanical lithotripsy, and complications. If reports came from the same study center, we only included data from the publication with the largest population. Comments, reviews, case reports, and guideline articles were excluded.
Data extraction
Data from eligible studies were extracted independently by two reviewers (Yang XM and Hu B) using standard forms, and consensus was reached on all items. Data were extracted on: first author, year of publication, country of origin, study setting, number, age and sex of patients, stone size, balloon diameter, complete stone removal, stone removal in the first session, use of mechanical lithotripsy, procedure time, and procedure-related complications.
Assessment of study quality
Two independent reviewers (Yang XM and Hu B) assessed the quality score of primary trials according to the Jadad scale [14] . Total scores ranged from 0 to 5. The Cochrane Collaboration's tool for assessing risk of bias was also used to address potential bias (Table 1). We defined studies with a Jadad score of 3 or more points and a low risk of bias as high quality in this meta-analysis. Disagreements were discussed by the reviewers and resolved through consensus.
Statistical analysis
For summary statistics in meta-analysis, the odds ratio (OR) is recommended for dichotomous data, and the weighted mean difference is recommended for continuous data. Complete stone removal, stone removal in the first session, use of mechanical lithotripsy and overall complications were summarized as OR with 95%CI. Peto OR with 95%CI was used for the separate complications, including post-ERCP pancreatitis, bleeding, infection of the biliary tract (including cholangitis and cholecystitis), and perforation, since it generates the least biased pooled results for studies with zero events [15]. P values of less than 0.05 were considered significant.
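For concreteness, the two pooled estimators can be written out directly from 2×2 tables; the sketch below is a generic implementation of the Mantel-Haenszel and Peto formulas rather than the review's actual computation, which used Review Manager.

```python
import math
import numpy as np

def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled OR for 2x2 tables [(a, b, c, d), ...].

    a/b = events/non-events in the EST+LBD arm,
    c/d = events/non-events in the EST-alone arm.
    """
    a, b, c, d = np.asarray(tables, dtype=float).T
    n = a + b + c + d
    return np.sum(a * d / n) / np.sum(b * c / n)

def peto_or(tables):
    """Peto pooled OR; suited to rare outcomes such as perforation because
    zero cells need no continuity correction."""
    num = den = 0.0
    for a, b, c, d in tables:
        n1, n2 = a + b, c + d                  # arm sizes
        m1, m2 = a + c, b + d                  # event / non-event margins
        N = n1 + n2
        O, E = a, n1 * m1 / N                  # observed vs expected events
        V = n1 * n2 * m1 * m2 / (N**2 * (N - 1))
        num, den = num + (O - E), den + V
    return math.exp(num / den)
```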
Heterogeneity was assessed by visual inspection of a Forest plot, the Cochran Q test, and the I² statistic. Heterogeneity was considered significant when the Cochran Q test gave P < 0.1 or when I² > 50% [16,17]. A fixed-effects model weighted by the Mantel-Haenszel method was used for pooling the OR when heterogeneity was not significant among the studies [18]. When the Q test or I² statistic indicated substantial heterogeneity, a random-effects model weighted by the DerSimonian-Laird method was used [19]. We performed a sensitivity analysis by removing each study in turn from the overall data to evaluate the influence of a single study on the pooled analysis, and by restricting the meta-analysis to high-quality studies. We also assessed the potential for publication bias through visual inspection of funnel plot asymmetry and evaluated the statistical significance of differences according to the methods of Begg et al [20] and Egger et al [21]. Statistical analyses were performed using Review Manager software (version 5.1 for Windows, Cochrane Collaboration, Oxford, United Kingdom).
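The heterogeneity screen also maps onto a few lines. This sketch computes Cochran's Q against the fixed-effect (inverse-variance) estimate and derives I²; the 0.5 continuity correction for zero cells is a common convention, not something stated in the paper.

```python
import numpy as np

def cochran_q_i2(tables):
    """Cochran Q and I^2 from per-study log odds ratios of 2x2 tables."""
    t = np.asarray(tables, dtype=float) + 0.5   # continuity correction
    a, b, c, d = t.T
    log_or = np.log(a * d / (b * c))
    w = 1.0 / (1 / a + 1 / b + 1 / c + 1 / d)   # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)     # fixed-effect log OR
    q = float(np.sum(w * (log_or - pooled) ** 2))
    df = len(t) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Heterogeneity is flagged when P(Q) < 0.1 or I^2 > 50%, triggering the
# DerSimonian-Laird random-effects model instead of the fixed-effects model.
```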
Identification of eligible studies
The literature search yielded 316 abstracts for review, and 308 were excluded for the reasons shown in Figure 1.
The results of two studies were combined because they came from the same trial. Thus, six studies [11-13,22-24] were included, four of which were available as full texts and were high-quality studies. The combined studies enrolled 835 patients who had been randomly allocated to the EST plus LBD group or the EST alone group. The characteristics of the included trials are listed in Tables 1 and 2, and the outcome data are shown in Table 3.
Efficacy
Six studies reported complete stone removal. Heterogeneity among these studies was not significant (P = 0.28, Figure 2A). Thus, we used the fixed-effects model and found that there was no significant difference in complete stone removal between EST plus LBD and EST alone (OR = 1.41, 95%CI: 0.63-3.17, P = 0.40, Figure 2A). Sensitivity analysis, by removing each study in turn from the overall data or by restricting the meta-analysis to high-quality studies, showed that the result was robust. Four RCTs [12,13,22,23] reported stone removal in the first session, and there was no significant difference in stone clearance between the two methods (OR = 1.02, 95%CI: 0.65-1.61, P = 0.92). A comparison of EST plus LBD and EST alone in patients with stones larger than 15 mm was carried out, and five studies [11,13,22-24] with 377 patients were included. Meta-analysis showed that there was no significant difference in the complete stone removal rate according to the fixed-effects model (OR = 0.99, 95%CI: 0.35-2.81, P = 0.98, Figure 2B).
Use of mechanical lithotripsy
Six studies reported the use of mechanical lithotripsy during the stone removal process. The trials were heterogeneous (P < 0.001, I² = 87%), and a random-effects model analysis was performed. The results indicated a significantly reduced dependence on mechanical lithotripsy in the EST plus LBD group (OR = 0.26, 95%CI: 0.08-0.82, P = 0.02). We conducted a sensitivity analysis by excluding the study by Stefanidis et al [24], as no mechanical lithotripsy was used in the LBD group in this trial, and the result did not change (OR = 0.42, 95%CI: 0.18-0.98, P = 0.05). However, after removing the two eligible abstracts [11,12], there was no significant difference in the use of mechanical lithotripsy between EST plus LBD and EST alone.
Procedure time
Only two studies reported the total procedure time [13,23] .
Publication bias
The funnel plot did not show an asymmetrical pattern (Figure 3A). In addition, neither the Begg test nor the Egger test revealed significant publication bias (P = 0.148 and P = 0.426, respectively).
Safety
Six RCTs evaluated the safety of the procedures in both groups (Figure 4).

Table 3 Outcome data derived from the included randomized controlled trials, n (%). EST: endoscopic sphincterotomy; LBD: large-balloon dilation.
DISCUSSION
We performed this meta-analysis mainly to investigate whether EST plus LBD was feasible and safe for the removal of large stones. Previous reports have suggested that the combined technique does not increase serious complications such as severe pancreatitis and bile duct perforation if performed strictly under established guidelines [5,6,22]. Similarly, the current meta-analysis demonstrated that the incidence of overall complications was significantly lower in the EST plus LBD group. When standard EST is performed to remove large stones, a full or large incision may be made, possibly leading to bleeding or perforation. Our review showed that perforation occurred in four patients in the EST alone group, and in none in the EST plus LBD group. Furthermore, bleeding was rarer (1.7% vs 3.1%) when balloon dilation was performed after limited sphincterotomy, although no significant difference was observed. We presume that this may be due to the small incision made before LBD.
Many concerns have been raised about post-ERCP pancreatitis with increasing balloon size, especially for balloons over 15 mm. However, our meta-analysis showed that LBD did not increase the risk of pancreatitis. Theoretically, the initial sphincterotomy may orientate the direction of subsequent dilation, leading to a resultant tear away from the pancreatic orifice, which might decrease the risk of pancreatitis. Post-ERCP pancreatitis may also be associated with other factors such as cannulation time and stone removal time. Only two studies reported the total procedure time [13,23], and meta-analysis showed no difference in ERCP duration between the two groups. We therefore cannot estimate the effect of procedure duration on the risk of pancreatitis.
Only the study by Teoh et al [13] compared the direct cost of the procedures between the two groups. A significant reduction in overall cost was noted in the EST plus LBD group [USD $5025 (interquartile range, $4140-$5235) vs $6005 (interquartile range, $4462-$5441), P = 0.034]. Whether this combined technique is less expensive requires clarification in further trials.
Our findings are similar to those of the previous meta-analysis by Liu et al [10]. That meta-analysis included three RCTs [22,23,26] and summarized the results of RCTs and non-RCTs separately. One trial included in the previous meta-analysis, which performed dilation using a small (8 mm) balloon [26], was excluded from our review. A well-arranged trial was excluded from the previous meta-analysis because mechanical lithotripsy was used in all of the patients in the EST group but in none of the patients in the EST plus LBD group [24], which did not accurately reflect the use of mechanical lithotripsy. We conducted a sensitivity analysis by excluding this study, and the result did not change. By contacting the authors, we found that two eligible abstracts [11,12] regarded as non-randomized in the previous meta-analysis were in fact randomized. Furthermore, our meta-analysis included a recently published, well-designed trial by Teoh et al [13]. The previous meta-analysis showed a significant reduction in the use of mechanical lithotripsy and overall complications for non-RCTs, but not for RCTs. However, our meta-analysis showed that EST plus LBD caused fewer overall complications than EST alone, and the result did not change when restricting the meta-analysis to high-quality studies. In addition, our meta-analysis showed that the use of mechanical lithotripsy in the EST plus LBD group decreased significantly, especially in patients with a stone size larger than 15 mm.
This meta-analysis also has some limitations. Firstly, it included two low-quality trials. It has been well documented that in RCTs and meta-analyses, low-quality studies are vulnerable to bias and may lead to exaggerated results. However, subgroup analysis of high-quality studies was also significant, which strengthened the results. Secondly, only a few studies were included, which might decrease the robustness of the analysis and mask publication bias. Our meta-analysis showed that the significant reduction in perforations in the EST plus LBD group was marginal (P = 0.05); this was probably attributable to the small number of subjects with perforation (n = 4, all in the EST alone group).
In conclusion, large-balloon dilation following limited sphincterotomy appears to be an effective approach for large stone extraction. This method may cause fewer complications and reduce dependence on mechanical lithotripsy. However, more well-designed studies are warranted to clarify the relative merits of this combined technique.
Background
Endoscopic sphincterotomy (EST) or endoscopic papillary balloon dilation (EPBD) is the standard method for stone retrieval. However, the extent of orifice dilation with conventional EST or EPBD is limited, and the use of other methods, such as mechanical lithotripsy, may be required in patients with large stones. A larger opening of the orifice by large-balloon dilation (LBD) may facilitate stone removal. In recent years, LBD following limited EST has emerged as an alternative to EST alone for removing large bile duct stones. However, which approach is superior remains controversial.
Research frontiers
The current meta-analysis was carried out to comparatively assess LBD plus EST and EST alone for removal of large bile duct stones. The main outcome measures included complete stone removal, stone removal in the first session, use of mechanical lithotripsy, procedure time, and procedure-related complications.
Innovations and breakthroughs
The current meta-analysis demonstrated that EST plus LBD is an effective approach for the removal of large bile duct stones, causing fewer complications than EST alone. Furthermore, this combined technique may decrease dependence on mechanical lithotripsy during stone extraction. | 2018-04-03T04:07:38.576Z | 2013-12-28T00:00:00.000 | {
"year": 2013,
"sha1": "fa1bed8d4b7737657d556f9baf6181a3d5c06ab7",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v19.i48.9453",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "7cdbfe4f3c5a3cd7170a1e069721b2d1c2c73058",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15856864 | pes2o/s2orc | v3-fos-license | Overcoming the restriction barrier to plasmid transformation and targeted mutagenesis in Bifidobacterium breve UCC2003
Summary In silico analysis of the Bifidobacterium breve UCC2003 genome predicted two distinct loci, which encode three different restriction/modification systems, each comprising a modification methylase and a restriction endonuclease. Based on sequence homology and observed protection against restriction we conclude that the first restriction endonuclease, designated BbrI, is an isoschizomer of BbeI, the second, BbrII, is a neoschizomer of SalI, while the third, BbrIII, is an isoschizomer of PstI. Expression of each of the B. breve UCC2003 methylase‐encoding genes in B. breve JCM 7017 established that BbrII and BbrIII are active and restrict incoming DNA. By exploiting knowledge on restriction/modification in B. breve UCC2003 we successfully increased the transformation efficiency to a level that allows the reliable generation of mutants by homologous recombination using a non‐replicative plasmid.
Introduction
The commensal gut microbiota has long been appreciated for its influence on gut health (reviewed by O'Hara and Shanahan, 2006; Turroni et al., 2008). Bifidobacteria constitute a specific group of mostly commensal bacteria, which inhabit the gastrointestinal tract (GIT) of mammals, including the human GIT, where they are estimated to represent 3-6% of the adult faecal flora (Ventura et al., 2004; Saxelin et al., 2005; Zoetendal and Vaughan, 2006). The presence of bifidobacteria in the human GIT has been associated with many beneficial health effects, such as the prevention of diarrhoea, amelioration of lactose intolerance and immunomodulation (reviewed by Leahy et al., 2005). Indeed, the health benefits of probiotic bacteria such as bifidobacteria have been shown to extend beyond the GIT (Lenoir-Wijnkoop et al., 2007). These many positive attributes have led to the widespread incorporation of bifidobacteria as live components of commercial health-promoting probiotic foods. Despite these commercial and scientific interests, fundamental knowledge is still scarce regarding the exact molecular mechanisms by which bifidobacteria contribute to host health and well-being. Such knowledge is essential to substantiate the purported health benefits and, consequently, to support the inclusion of such bacteria as probiotics in functional foods.
The genome sequences on Bifidobacterium longum subsp. longum NCC2705 (Schell et al., 2002), B. longum subsp. longum DJ010A (Lee et al., 2008), B. adolescentis ATCC15703 (Suzuki et al., 2006), B. adolescentis L2-32 (Fulton et al., 2007), B. dentium ATCC27678 (Sudarsanam et al., 2008) and B. animalis subsp. lactis HN019 (Collett et al., 2008) have recently become available and have contributed very significantly to advancing our knowledge on bifidobacterial genetics and metabolism. However, the availability of a genome sequence is merely a first step towards a better understanding of a specific probiotic property, and unravelling the molecular mechanisms by which bifidobacteria bring about positive host responses demands the availability of suitable molecular tools. To date, relatively few molecular tools for bifidobacteria have been developed, which explains why the genetics of these microbes is rather poorly understood, certainly when compared with other bacteria of industrial importance.
Available genetic tools for bifidobacteria include bifidobacterial plasmids, which were first reported by Sgorbati and colleagues (1982). In recent years significant effort has focused on identifying and sequencing plasmids from bifidobacteria, and exploiting some of these native bifidobacterial replicons for the creation of Escherichia coli-Bifidobacterium shuttle vectors (Lee and O'Sullivan, 2006; Alvarez-Martín et al., 2007; Cronin et al., 2007; Sangrador-Vegas et al., 2007). A limitation of many of these shuttle vectors is the low transformation efficiency of many of the bifidobacteria tested, coupled in some cases with segregational instability (Lee and O'Sullivan, 2006). The observed differences in transformation efficiency among different strains of bifidobacteria may be attributed, at least in part, to restriction/modification (R-M) systems, which are ubiquitous among prokaryotes and generally comprise a restriction endonuclease (REase) and cognate methyltransferase (MTase) (Murray, 2002; Tock and Dryden, 2005). R-M systems are believed to serve primarily as defensive instruments that protect prokaryotic cells against invading DNA such as promiscuous plasmids or infecting bacteriophage. R-M systems are classified into four groups (designated type I, II, III and IV) on the basis of their subunit composition, co-factor requirement, recognition sequence structure and the cleavage site relative to the recognition sequence (Roberts et al., 2003). Type I R-M systems consist of three different subunits, HsdM, HsdR and HsdS, that are responsible for modification, restriction and sequence recognition, respectively. Type I REases require ATP, Mg 2+ and AdoMet for activity. In general they interact with two asymmetrical bi-partite recognition sites, translocate the DNA in an ATP hydrolysis-dependent manner and cut the DNA distal to the recognition sites, approximately halfway between two sites (Murray, 2002). Typically, in a type II R-M system the REase recognizes and cleaves within a short (4-8 bp) palindromic DNA sequence. Protection of 'self' DNA from restriction occurs by methylation using an MTase, which modifies specific adenosyl or cytosyl residues within the sequence recognized by the corresponding REase (Kobayashi, 2001; Pingoud et al., 2005). Type III R-M systems consist of two subunits, Mod, responsible for DNA recognition and modification, and Res, responsible for DNA cleavage. Active nucleases require ATP and Mg 2+ for activity and are stimulated by AdoMet. The holoenzyme, composed of two Res and two Mod subunits, interacts with two unmodified asymmetric target sites positioned in inverse orientations with respect to each other and cuts the DNA close to one recognition site (Janscak et al., 2001). Type IV R-M systems are specified by either one of two structural genes encoding proteins with specificities for methylated, hydroxymethylated or glucosyl-hydroxymethylated bases in the target DNA molecule (Roberts et al., 2003).
In the current study we report on the identification and preliminary characterization of three R-M systems encoded on the genome of B. breve UCC2003. Circumventing these R-M systems allowed the development of a reliable method for the creation of gene disruptions in B. breve UCC2003.
Sequence, genetic organization and amino acid analysis of the BbrI, BbrII and BbrIII R-M systems from B. breve UCC2003
Two loci, predicted to encode three different R-M systems, were identified from the annotation of the genome sequence of B. breve UCC2003 (S. Leahy, M. O'Connell Motherway, J. Moreno Munoz, G.F. Fitzgerald, D. Higgins and D. van Sinderen, unpubl. results) and designated BbrI, BbrII and BbrIII (Fig. 1A). The G+C content of each system is 58%, which is in agreement with the approximately 60% G+C content of bifidobacteria (Ventura et al., 2007). The first gene of the BbrI R-M system, bbrIM, codes for a protein (M.BbrI; 43.2 kDa) with 60% and 53% identity to cytosine-specific MTases from Clavibacter michiganensis and Photorhabdus luminescens, respectively; M.BbrI also contains the six highly conserved motifs characteristic of known 5′-methylcytosine MTases (Kumar et al., 1984) (Fig. 1B). The cytosine-specific MTases from C. michiganensis and P. luminescens are known to methylate the sequence 5′-GGC(m5)GCC-3′, which is also the recognition sequence of the BbeI REase identified by Khosaka and colleagues (1982) from B. breve YIT4006. The protein product of the second ORF, bbr0215, exhibits 94% identity to a hypothetical protein encoded by B. longum subsp. longum NCC2705 (Schell et al., 2002). The third gene of the BbrI gene cluster, bbrIR, is separated from bbr0215 by remnants of an insertion sequence element. The bbrIR gene encodes a protein (30 kDa) exhibiting low homology (33%) to various type II R-M system restriction subunits, and for this reason it is predicted to represent the restriction component of the BbrI R-M system, probably an isoschizomer of BbeI.
Assessment of R-M activity in B. breve UCC2003
To establish whether the identified R-M systems are functional in B. breve UCC2003 and whether they affect the transformation efficiency of this strain, the transformation frequency of two E. coli-bifidobacterial shuttle vectors, pPKCM7 and pAM5 (Table 1), was determined when these plasmids had been isolated either from B. breve UCC2003 (DNA protected from R-M) or from E. coli JM101 (DNA sensitive to R-M). Quantities of 200 ng of each of these plasmid DNAs isolated from the two different hosts were used to transform B. breve UCC2003 by electroporation. Transformants were selected on RCA supplemented with chloramphenicol (Cm) in the case of plasmid pPKCM7, or tetracycline (Tet) in the case of plasmid pAM5, and enumerated following anaerobic incubation at 37°C for 48 h (Fig. 2). For each plasmid there was a 500-fold higher transformation efficiency with the plasmid DNA isolated from B. breve UCC2003 as compared with the DNA isolated from E. coli, thus indicating that one or more of the identified R-M systems encoded by B. breve UCC2003 is functional and contributes to the efficiency at which plasmids can be introduced into this strain.
BbrI, BbrII and BbrIII represent three R-M systems
In order to verify the prediction that M.BbrI, M.BbrII and M.BbrIII represent distinct MTases that, based on their similarities to characterized R-M systems, protect DNA sequences cut by BbeI, SalI and PstI, respectively, genomic DNA of B. breve UCC2003 was restricted with these enzymes and analysed by agarose gel electrophoresis. The results obtained showed that B. breve UCC2003 genomic DNA is protected from restriction with BbeI and PstI, but not SalI (Fig. 3A).
To establish whether the methylase activities associated with the BbrI and BbrIII R-M systems were present in other B. breve strains, genomic DNA from nine additional B. breve strains was restricted with BbeI or PstI (Table S2). The DNA of only three strains, B. breve UCC2004, NCFB 2258 and NCFB 8815, was protected from restriction with BbeI. In addition, DNA from B. breve NCFB 8815 was also protected from restriction with PstI. Genomic DNA from the remaining six strains was restricted by these two enzymes. This indicates that different strains of B. breve exhibit a variety of R-M activities.
To determine the individual effect of each R-M system on the transformation frequency of B. breve UCC2003, we first introduced plasmid pAM5, which harbours one PstI, two SalI and three BbeI sites, into B. breve JCM7017 strains harbouring either pNZ8048, pNZ-M.BbrI, pNZ-M.BbrII or pNZ-M.BbrIII. The methylation of the pAM5 DNA at the appropriate sequence in each of the methylase-expressing strains was confirmed by restriction analysis (results not shown) prior to introducing 200 ng of each plasmid preparation into B. breve UCC2003 by electroporation. The number of transformants was determined after 48 h of anaerobic incubation at 37°C on RCA with tetracycline selection (Fig. 4). pAM5 DNA isolated from JCM7017 expressing M.BbrIII allowed an almost 1000-fold higher transformation frequency as compared with pAM5 isolated from E. coli or JCM7017 harbouring pNZ8048. A 10- and 5-fold higher transformation efficiency was observed for pAM5 isolated from JCM7017 expressing M.BbrII and M.BbrI, respectively. The transformation frequency obtained with pAM5 DNA isolated from JCM7017 expressing M.BbrIII was comparable to that obtained with pAM5 plasmid DNA isolated from B. breve UCC2003. However, in the latter case the DNA preparations contain just the pAM5 plasmid, while in the former case the DNA preparation would have contained a mixture of pAM5 and pNZ-M.BbrIII. These results demonstrate that the BbrIII restriction endonuclease (an isoschizomer of PstI) is highly active in B. breve UCC2003 and that its activity appears to represent the main limitation to the genetic accessibility of B. breve UCC2003, at least for plasmid pAM5.
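Site tallies of the kind used above (e.g., the one PstI, two SalI and three BbeI sites on pAM5) can be obtained by scanning a sequence for each enzyme's recognition sequence. The following is a minimal, hypothetical Python sketch (the toy sequence and the helper count_sites are ours; the recognition sequences are the documented ones for BbeI, SalI and PstI, all of which are palindromic, so a forward-strand scan suffices):

import re  # not needed here; plain string comparison is used below

RECOGNITION_SITES = {"BbeI": "GGCGCC", "SalI": "GTCGAC", "PstI": "CTGCAG"}

def count_sites(plasmid_seq, site):
    # Count occurrences of a recognition site on a circular plasmid; the
    # sequence is wrapped by len(site) - 1 bases so that sites spanning
    # the origin of the circular molecule are also counted.
    seq = plasmid_seq.upper()
    wrapped = seq + seq[: len(site) - 1]
    return sum(1 for i in range(len(seq)) if wrapped[i:i + len(site)] == site)

toy_plasmid = "ATGGCGCCTTCTGCAGGGTCGACGGCGCCAA"  # illustrative only, not pAM5
for enzyme, site in RECOGNITION_SITES.items():
    print(enzyme, site, count_sites(toy_plasmid, site))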
Expression of M.BbrII and M.BbrIII in E. coli and methylation of plasmid DNA
From the data presented above it was clear that all three REases, BbrI, BbrII and BbrIII, are active in B. breve UCC2003. In order to enhance transformation efficiencies of B. breve UCC2003 by prior methylation of plasmid DNA, two E. coli strains expressing both M.BbrII and M.BbrIII were constructed. In the first, E. coli pNZ-M.BbrII-M.BbrIII, two of the bifidobacterial methylases were expressed on plasmid pNZ8048 (see Experimental procedures and Table S1). As expected, chromosomal (and plasmid) DNA from E. coli strain EC101 harbouring pNZ-M.BbrII-M.BbrIII is protected from restriction with PstI. The second E. coli strain, BM1, harbours bbrIIM and bbrIIIM under the control of an IPTG-inducible promoter integrated into the glgB gene on the E. coli JM101 chromosome (see Experimental procedures). Upon induction with 10 mM IPTG, total DNA from E. coli BM1 is protected from restriction with PstI (Fig. S1A). However, complete protection from SalI restriction was not observed (results not shown); this may be due to the lower-level expression of bbrIIM from the E. coli chromosome as compared with expression from plasmid pNZ-M.BbrII-M.BbrIII. In addition, SalI can restrict hemi-methylated DNA; therefore, the observed restriction by SalI may be a reflection of incomplete methylation.
To evaluate the effect of methylation of plasmid DNA on transformation efficiency, pAM5 was introduced into E. coli pNZ-M.BbrII-M.BbrIII and E. coli BM1 by electroporation. Expression of M.BbrII and M.BbrIII in BM1 harbouring pAM5 was effected by the addition of 10 mM IPTG prior to the isolation of plasmid DNA. Plasmid preparations of E. coli harbouring pNZ-M.BbrII-M.BbrIII or E. coli BM1 were then used for B. breve UCC2003 transformation. pAM5 DNA isolated from E. coli harbouring pNZ-M.BbrII-M.BbrIII gave a 1000-fold higher transformation efficiency (Fig. S1B).
Disruption of the galA and apuB genes of B. breve UCC2003
In order to establish whether methylation of a non-replicating plasmid by the B. breve UCC2003 MTases would increase transformation efficiency to a level high enough to allow site-specific homologous recombination, two genes, galA and apuB, were selected as mutational targets. The galA and apuB genes encode an endogalactanase and an amylopullulanase, respectively, which are involved in extracellular polysaccharide metabolism by B. breve UCC2003 (Hinz et al., 2005; Ryan et al., 2006; O'Connell Motherway et al., 2008). To establish whether gene disruption could be achieved using homologous recombination, DNA fragments of 476 and 744 bp, representing internal fragments of the galA gene, and a 939 bp internal fragment of the apuB gene were cloned in pORI19 and provided with a tetracycline resistance marker, generating plasmids pORI19-tet-G476, pORI19-tet-G744 and pORI19-tet-apuB, respectively (see Experimental procedures). These plasmids, being derivatives of pORI19, cannot replicate in B. breve UCC2003 as they lack a functional replication protein (Law et al., 1995). These pORI19 derivatives were introduced into E. coli EC101 harbouring pNZ-M.BbrII-M.BbrIII to facilitate methylation, and preparations of the resulting methylated pORI19-derived plasmids were then introduced into B. breve UCC2003 by electroporation. Tetracycline-resistant transformants were isolated at a frequency of 50 per µg of transformed DNA when greater than 700 bp of homologous DNA was used. The number of potential integrants was slightly reduced when the smaller region of homologous DNA was used. All transformants obtained were expected to carry galA or apuB gene disruptions, while no such transformants were obtained when unmethylated pORI19 constructs were introduced into B. breve UCC2003. The suspected chromosomal integration of the pORI constructs was verified by colony PCR on a selection of Tetr transformants using a forward primer upstream of the region of integration and a reverse primer based on pORI19 (galAp1 and pORI19A, or apuBp1 and pORI19B) (results not shown). Southern hybridizations confirmed the assumed integration of the individual pORI-derived plasmids by homologous recombination. For the presumed galA disruptions of B. breve UCC2003, Southern hybridizations were performed using SphI-digested genomic DNA and employing a 2.6 kb PCR fragment encompassing galA as a probe. SphI was selected for the genomic digests since there are no corresponding restriction sites within the galA sequence. The galA fragment hybridized to a 6.1 kb fragment of UCC2003 genomic DNA, while in the UCC2003 derivatives with a presumed pORI-tet-G476 or pORI-tet-G744 integration this band was absent and the expected hybridization signals of 10.5 kb and 557 bp, or 10.8 kb and 848 bp, respectively, were observed (Fig. 5). For two of each of the UCC2003 mutant strains examined, the galA probe also hybridized to a 5.3 kb or 5.5 kb SphI fragment for the pORI19-tet-G476 and pORI19-tet-G744 integrants, respectively [Fig. 5B(i), lanes 4 and 5; Fig. 5B(ii), lanes 5 and 6]. These hybridization signals indicate that duplication of the pORI19-tet-galA plasmids had occurred after integration of the plasmid into the bacterial chromosome in these mutant strains. For the suspected apuB integrants of strain UCC2003, Southern hybridizations were performed using BamHI-digested genomic DNA and a 1 kb probe encompassing an internal fragment of apuB. The apuB fragment hybridized to a 3.6 kb fragment of UCC2003 genomic DNA.
For the apuB mutant strains the anticipated hybridization signals of 2.1 and 7.2 kb were obtained (Fig. S2).
Collectively, these results demonstrate that methylation of plasmid DNA by the B. breve UCC2003 MTases M.BbrII and M.BbrIII in E. coli circumvents the BbrII and BbrIII REase activities in B. breve UCC2003 and raises transformation efficiency sufficiently to allow reliable homologous recombination in B. breve UCC2003. In addition, these data illustrate that chromosomal integration in B. breve UCC2003 can be achieved with less than 500 bp of homologous DNA.
Phenotypic analysis of the B. breve UCC2003 plasmid integrants
In order to verify the expected phenotypic consequences of the created gene disruptions in galA and apuB, strain B. breve UCC2003 and individual representatives of B. breve UCC2003 mutants generated by insertion of pORI19-tet-G476 or pORI19-tet-G744, designated here as UCC2003-galA-476 and UCC2003-galA-744, respectively, were analysed for their ability to grow on galactan as the sole carbohydrate source (Fig. 6A). Similarly, B. breve UCC2003 and a derivative with an integrated pORI19-tet-apuB (designated UCC2003-apuB-939) were analysed for the ability to grow on starch, amylopectin, glycogen or pullulan as the sole carbohydrate source (Fig. 6B). In contrast to the wild-type B. breve UCC2003, the B. breve UCC2003-galA-476 and UCC2003-galA-744 mutant strains failed to grow on potato galactan, while comparable growth of the parent and galA mutant strains was observed when glucose was the sole carbohydrate source. In a similar manner it was shown that B. breve UCC2003-apuB-939 failed to grow on starch, amylopectin, glycogen or pullulan, which contrasted with the good growth observed on these substrates for the parent strain. Comparable growth for parent and mutant strains was observed when glucose was used as the sole carbohydrate source. These results confirm that the chromosomal plasmid integrations in UCC2003 cause a demonstrable phenotype and clearly illustrate the importance of the extracellular enzymes specified by galA and apuB in the metabolism of specific high-molecular-weight polysaccharides by B. breve UCC2003.
Discussion
Bifidobacterial strains demonstrate substantial variability in the efficiency of transformation by plasmids from E. coli, while many strains exhibit complete resistance to transformation (Lee and O'Sullivan, 2006). Progress in the evaluation of probiotic factors in bifidobacteria has been slow due to the lack of efficient and versatile systems for genetic manipulation (Ventura et al., 2004). While quite a number of E. coli-bifidobacterial shuttle vectors have been developed, it has been noted that widespread application of these plasmids among bifidobacterial species is limited (Lee and O'Sullivan, 2006).
As shown here, R-M systems are one of the major obstacles hindering progress in the genetic accessibility and analysis of B. breve UCC2003, and they are likely to do so in other (bifido)bacteria as well. Convincing evidence to support this notion can be obtained from the available bifidobacterial genome sequences. Genes specifying R-M systems can be identified in all sequenced bifidobacterial genomes. The genomes of B. longum subsp. longum NCC2705 (Schell et al., 2002) and B. longum subsp. longum DJ010A (Lee et al., 2008) both harbour a single type I R-M system, two type II R-M systems and one type IV R-M system. The type II REases specified by blo_1473 and bld_0356 are predicted to be isoschizomers of EcoRII, which restricts within the sequence ↓CCWGG, while the REases specified by blo_564 and bln_1359 are predicted to be isoschizomers of Sau3AI, which recognizes the sequence ↓GATC. The recognition sequences of the type I and type IV R-M systems in the sequenced B. longum genomes are unknown. The genome of B. adolescentis ATCC15703 (Suzuki et al., 2006) specifies two MTase subunits and six REase subunits. The restriction subunits specified by bad_1283 and bad_1232 are predicted to be isoschizomers of KpnII and Sau3AI, respectively, while the remaining four are as yet unknown. The sequenced genomes of B. dentium ATCC27678 (Sudarsanam et al., 2008) and B. animalis HN019 (Collett et al., 2008) both harbour a single type II R-M system, where the REase is predicted to be an isoschizomer of AvaII, which recognizes the sequence G↓GWCC (Sutcliffe and Church, 1978). Based on the results obtained for B. breve UCC2003, it is tempting to speculate that exploiting the MTases encoded by the aforementioned sequenced bifidobacterial strains would allow the transformation efficiencies of these strains to be improved. For bifidobacterial strains that are particularly recalcitrant to transformation, or where the complete genome sequence is not known, it may be possible to methylate plasmid DNA isolated from E. coli by incubating the DNA with crude cell extracts of the bifidobacteria in the presence of S-adenosylmethionine, thereby possibly improving the transformation efficiency.
An alternative method that would circumvent bifidobacterial R-M systems would be to introduce plasmid DNA by conjugation. To date, conjugation has not been conclusively demonstrated for the genus Bifidobacterium. Until recently the only evidence supporting the possibility of conjugation in bifidobacteria was the identification of genes encoding proteins potentially involved in the conjugation process on various bifidobacterial plasmids. Putative relaxase-encoding genes have been identified on plasmids pJK36 and pJK50 from B. longum subsp. longum (Park et al., 1999; 2000), while homologues of septal DNA translocator (Tra) proteins have been identified on the B. breve plasmid pCIBb1 (O'Riordan and Fitzgerald, 1999) and the B. pseudocatenulatum plasmid p4M (Gibbs et al., 2006). Recently, Shkoporov and colleagues (2008) sequenced three plasmids of bifidobacterial origin: pB44 from B. longum, pB90 from B. bifidum and pB21a from B. breve. Both pB44 and pB90 harbour genes encoding potential mobilization functions, while pB21a encodes a putative Tra protein. These proteins were exploited in efforts to achieve conjugation in bifidobacteria, and although antibiotic-resistant, PCR-positive and thus putative transconjugants were obtained, plasmid transfer has as yet not been demonstrated. The difficulties associated with obtaining transformation efficiencies high enough to allow insertional mutagenesis in B. breve UCC2003 through homologous recombination led us to believe that R-M systems were the barrier that needed to be overcome in order to achieve this. In the present study we describe three different R-M systems specified by the genome of UCC2003: BbrI, an isoschizomer of BbeI; BbrII, a neoschizomer of SalI; and BbrIII, an isoschizomer of PstI. Restriction analysis of chromosomal DNA from UCC2003 showed that the DNA is protected from restriction with BbeI and PstI, but not SalI. The observed restriction of DNA by SalI can be explained by M.SalI being an N6-adenosine methylase, while M.BbrII is predicted to be a cytosine-specific MTase, which may therefore not confer (full) protection against SalI restriction. However, the finding that M.BbrII does provide full protection against SalI restriction when it is expressed from a multicopy plasmid in B. breve JCM7017 would indicate that M.BbrII in such circumstances is more abundant, thereby eliciting complete methylation and concomitant protection of the DNA. The three R-M systems identified in B. breve UCC2003 do not appear to be highly conserved among B. breve strains; just one strain examined in this study, B. breve NCIMB 8815, was shown to exhibit protection of both BbrI and BbrIII recognition sites, indicating that this species, and indeed the genus Bifidobacterium, is likely to harbour a very diverse range of R-M activities.
The contribution of each R-M system to impeding plasmid transformation of B. breve UCC2003 was determined, establishing that all three systems impact transformation efficiency, with BbrIII, at least under the circumstances used here, providing the biggest hurdle to incoming DNA. To facilitate methylation of plasmid DNA by M.BbrII and M.BbrIII, thereby enhancing the transformation frequency of B. breve UCC2003, two E. coli strains were constructed in which bbrIIM and bbrIIIM were expressed in different ways, either from their own promoter on plasmid pNZ8048 or from an IPTG-inducible promoter on the E. coli chromosome. The observed higher transformation efficiency for pAM5 DNA isolated from E. coli pNZ-M.BbrII-M.BbrIII may be attributed to the high copy number of pNZ8048 plasmids in E. coli and the resulting higher expression levels of the MTases as compared with expression from a single copy on the E. coli chromosome in E. coli BM1.
Having established that the use of M.BbrII- and M.BbrIII-methylated plasmid DNA results in a significantly increased transformation efficiency of B. breve UCC2003, we conclusively showed that gene disruptions in B. breve UCC2003 can be created using a non-replicating, M.BbrII- and M.BbrIII-methylated plasmid. We have previously produced a gene disruption in the apuB gene of B. breve UCC2003 by adaptation of the lactococcal two-plasmid homologous recombination system (O'Connell Motherway et al., 2008). However, in our hands this system was very tedious, time-consuming and not reliable (O'Connell Motherway et al., 2008; our unpublished results). Therefore, insertional mutagenesis of the apuB gene was deemed an appropriate control to evaluate the validity and reliability of the plasmid methylation strategy. By M.BbrII-M.BbrIII-mediated methylation of plasmid DNA in E. coli prior to transformation into B. breve UCC2003, gene disruptions not only in apuB, but also in galA, were successfully and reliably created, as verified by genetic and phenotypic analyses.
This, to the best of our knowledge, represents the first reliable system for creating insertional mutations in a member of the genus Bifidobacterium. The ability to achieve chromosomal integration of a non-replicative plasmid with less than 500 bp of homologous DNA also opens the opportunity for the creation of a bank of B. breve UCC2003-derived mutants carrying random chromosomal integrations, which in turn will provide a range of possibilities to further advance fundamental knowledge on the physiology, biochemistry and genetics of this strain. Such information will obviously be relevant to other bifidobacteria and will be crucial to understanding the health-promoting properties that have been attributed to various members of this genus.
Experimental procedures
The description of the experimental procedures resides in Appendix S1 in Supporting information.
Supporting information
Additional Supporting Information may be found in the online version of this article: Fig. S1. A. Restriction analysis of E. coli JM101 and two representative JM101 bbrIIM and bbrIIIM methylase integration strains. Lane 1, molecular weight marker X (Roche). Lanes 2-4, PstI digests of total DNA isolated from JM101 following induction with 0, 1 or 10 mM IPTG. Lanes 5-7 and lanes 8-10, PstI digests of total DNA isolated from two representative JM101 bbrIIM and bbrIIIM methylase integration strains after induction with 0, 1 or 10 mM IPTG. B. Transformation efficiency of B. breve UCC2003 with pAM5 plasmid DNA isolated from B. breve UCC2003, E. coli pNZ-M.BbrII-M.BbrIII, E. coli BM1 or E. coli pNZ8048. Table S1. Oligonucleotide primers used in this study. Table S2. Restriction analysis of genomic DNA from B. breve strains with BbeI and PstI. Appendix S1. Experimental procedures.
| 2018-04-03T04:09:34.975Z | 2009-04-17T00:00:00.000 | {
"year": 2009,
"sha1": "27013a3a9ee8247bd36591d788b18fac510df2c1",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc3815753?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "27013a3a9ee8247bd36591d788b18fac510df2c1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
233213554 | pes2o/s2orc | v3-fos-license | THE RELATIONSHIP OF SELF-COMPASSION AND SUICIDE RISK FACTORS IN AMERICAN INDIAN/ALASKA NATIVE PEOPLE
In this study, positive aspects of self-compassion (i.e., self-kindness, common humanity, and mindfulness of one's thoughts and feelings) were explored in relation to suicide risk factors (i.e., perceived burdensomeness and thwarted belongingness) in a community sample of 242 self-identified American Indian/Alaska Native (AI/AN) adults. Participants completed a survey packet including a demographic form, the Interpersonal Needs Questionnaire, and the Self-Compassion Scale at several Indian Health Service clinics and tribal centers in the Great Plains of the United States. Results indicated that positive aspects of self-compassion (i.e., self-kindness, common humanity, and mindfulness) were associated with and predictive of less suicide risk (i.e., less perceived burdensomeness and thwarted belongingness) among AI/AN adults. Of those with a history of suicidal ideation (n = 89), positive aspects of self-compassion were predictive of less perceived burdensomeness, but were not predictive of thwarted belongingness. Implications for prevention and intervention programs that emphasize self-compassion, mindfulness, and culturally relevant practices, as well as mental health advocacy, including suicide prevention, for AI/AN people are highlighted.
Suicide Risk Among American Indian/Alaska Native People
In 2018, the Centers for Disease Control and Prevention (CDC; 2020) reported suicide as the second leading cause of death for American Indian/Alaska Native (AI/AN) people between the ages of 10 and 34. Documented risk factors for suicide among AI/AN people include health concerns, family violence, impulsivity, suicide attempt history, and access to lethal means (Gray & McCullagh, 2014). Historical and intergenerational trauma is a specific risk factor for suicide among AI/AN people, as these traumas are embedded in families and communities and passed down to future generations (FitzGerald et al., 2017; Gray & McCullagh, 2014).
Interpersonal Psychological Theory of Suicide
One theory of suicidal risk and behaviors is Joiner's Interpersonal Psychological Theory of Suicidal Behavior (IPTS). IPTS draws on several components to explain why people may be at risk for death by suicide and ultimately why people die by suicide, including thwarted belongingness, perceived burdensomeness, and acquired capability (Joiner, 2005;Van Orden et al., 2010).
Thwarted belongingness refers to the mental suffering that occurs as a result of a lack of connectedness with others. Human beings are born to be relational and desire to feel connected, and when this does not occur, it results in loneliness and increases thwarted belongingness (Joiner, 2005).
Perceived burdensomeness is the extent to which people believe they are a burden to those who play an important role in their lives (i.e., family, friends, community, etc.). Therefore, the greater the sense of being a burden on others (regardless of whether or not others view the person as a burden), the greater the suicide risk. The combination of perceived burdensomeness and thwarted belongingness are theorized to be risk factors for suicide and death by suicide for people in general (Joiner, 2005;O'Keefe et al., 2014;Van Orden et al., 2010).
Acquired capability refers to an individual's ability to follow through with the actual act of suicide (Van Orden et al., 2010). The actual act of suicide can be a fearful and painful event.
Human beings are not innately designed to follow through with such an act (Joiner, 2005). So, the way people develop the acquired capability to carry out the act of suicide is through repeated exposure to painful events (e.g., experiencing traumatic events, being bullied). While there is merit in studying acquired capability, many researchers in the field of suicidality tend to focus on thwarted belongingness and perceived burdensomeness as suicide risk factors (Brailovskaia et al., 2020; El et al., 2018; Martin et al., 2018; McClay et al., 2020; O'Keefe, 2014; Roeder & Cole, 2018). Therefore, thwarted belongingness and perceived burdensomeness will be the aspects of suicide risk explored in the present study. Prior research has found that familial connectedness and non-familial connectedness (i.e., caring relationships with school officials, religious leaders, and tribal leaders) were protective factors against suicidality for AI/AN adolescents. Therefore, furthering these research efforts in understanding the relationship between thwarted belongingness and suicidal ideation in AI/AN adults in community settings is warranted.
Perceived burdensomeness is another potential risk factor for suicidality among AI/AN people. If someone feels like a burden to their family, given the importance of connectedness and closeness of family relationships in AI/AN communities, it is possible that AI/AN individuals may be at risk for depression and possibly suicidal risk in combination with other factors. According to Rhoades-Kerswill (2012), perceived burdensomeness for AI/AN people might increase when they believe that they are not fulfilling their traditional roles, which could create a sense of burdensomeness on their community and/or family. There are only a few research findings suggesting that AI/AN people have an increased risk for suicidal thoughts and behaviors when they feel like a burden to others (O'Keefe et al., 2014;Rhoades-Kerswill, 2012;Olson et al., 2011).
Given that perceived burdensomeness is an important component identified in the IPTS that may enhance one's desire to die by suicide, further research on perceived burdensomeness among AI/AN people is needed.
Positive Aspects of Self-Compassion as Potential Protective Factors for Suicide Risk Among AI/AN People
Knowing that AI/AN people have an increased rate of suicidal risk and behaviors, including death by suicide, compared to other ethnocultural groups, it is also important to research the protective factors related to suicidality within AI/AN cultures. Self-compassion is a positive psychology construct, and to the best of our knowledge, it has not been explored in AI/AN communities and may indeed be a protective factor against suicidality in AI/AN adults in community settings.
Self-compassion refers to the ability to have empathy toward oneself and one's suffering (Neff, 2003), which has been known to increase positive emotional states while reducing depression and anxiety (Neff & Vonk, 2009). Neff (2003) identified three theoretical dimensions of self-compassion, including self-beliefs, relational beliefs, and the relationship to one's own thoughts and feelings. The three positive aspects of self-compassion are self-kindness, common humanity, and mindfulness.
Self-kindness refers to how kind an individual is to oneself while refraining from judging oneself. Common humanity involves embracing imperfection as a shared human experience.
Mindfulness of one's thoughts and feelings is the ability to equalize experiences, that is, to experience one's thoughts and feelings in the moment, instead of amplifying individual suffering (Akin & Akin, 2015;Neff, 2003).
A self-compassionate mindset is created when all three positive self-compassion components (i.e., self-kindness, common humanity, and mindfulness of one's thoughts and feelings) blend together and reciprocally interact (Neff & McGehee, 2010). If self-compassion is linked with connectedness, happiness, and optimism (Neff & McGehee, 2010), then it is likely that an increase in self-compassion could potentially avert and/or decrease suicidal thoughts.
Of interest, only four studies to date have explored self-compassion as a protective factor against suicidality in the general population. In two studies, lower levels of self-compassion were associated with higher rates of suicide plans (i.e., particularly self-kindness and common humanity; Ali, 2014) and/or suicide attempts for children and adolescents (Tanaka et al., 2011). Self-compassion was also found to be directly and inversely related to suicidal behavior and depressive symptoms among college students (Rabon et al., 2018). Lastly, Rabon and colleagues (2019) explored the relationship between self-compassion and suicidal behavior in a sample of 541 United States veterans and found a significant inverse relationship between self-compassion and suicidal behavior among veterans, which was strengthened as the level of suicide risk severity increased.
No researchers to date have explored the relationship of self-compassion and suicide risk factors among AI/AN people, demonstrating the need for the present study. Current research has focused on identifying protective factors within AI/AN families and tribal communities (Alcántara & Gone, 2007; Gilligan, 2002; Goldston et al., 2008; Henson et al., 2017; Hill, 2009; Rhoades-Kerswill, 2012). Understanding the relationship of self-compassion with suicide risk factors, such as thwarted belongingness and perceived burdensomeness, may provide a new perspective in identifying internal/psychological protective factors related to suicidality among AI/AN people.
Protective factors related to suicide risk tend to be understudied and receive less attention in research studies in general (FitzGerald et al., 2017). Identifying and increasing protective factors may be more effective than interventions aimed to reduce suicide risk factors (FitzGerald et al., 2017;Freedenthal & Stiffman, 2004).
The purpose of the present study was to explore the relationship of the three self-compassion dimensions with the suicide risk factors of thwarted belongingness and perceived burdensomeness in a sample of AI/AN people. The research questions for this study were: 1) What is the linear relationship of self-compassion dimensions with perceived burdensomeness among AI/AN adults? and 2) What is the linear relationship of self-compassion dimensions with thwarted belongingness among AI/AN adults? It was hypothesized that the self-compassion dimensions of self-kindness, common humanity, and mindfulness of one's thoughts and feelings would be significantly and inversely correlated with and predictive of 1) perceived burdensomeness and 2) thwarted belongingness among AI/AN people.
Participants
The sample consisted of 242 self-identified AI/AN adults (83 men and 159 women) who came to one of several Indian Health Service (IHS) and/or tribal centers in the Great Plains of the United States. To respect participants' anonymity, as well as tribal approvals and university IRB processes and procedures, specific tribal affiliations of participants will not be reported. See Table 1 for the demographics of the sample.
In terms of suicidality, 62.9% (n = 151) of the participants did not have a history of suicidal ideation whereas 37.1% (n = 89) of the participants reported a history of suicidal ideation. The majority of the participants reported no history of suicide attempts (81.3%, n = 196), but 18.7% (n = 45) of the participants did identify a history of attempting suicide (of whom 71.1% had attempted once or twice and 28.9% reported three or four attempts).
Procedure
Tribal research and university IRB approvals were obtained prior to the start of this study.
AI/AN adults who visited their IHS and/or tribal centers were recruited via flyers that were posted at their center or recruited by their behavioral health care providers and/or the front desk staff at the centers. They were invited to participate in the study and informed that their participation was voluntary and that their decision whether or not to participate did not influence any services received at the centers.
If participants stated an interest in this research study, they were given an envelope, which included the informed consent form, the demographic questionnaire, the Interpersonal Needs Questionnaire, the Self-Compassion Scale, and a resource page. The participants did not write their names on any survey forms, so there was no way to connect their survey responses with their identities. The participants sealed the envelope after completing the survey and dropped it off with the front desk staff at the center, who put the envelope in a locked file cabinet. Participants received $5 from the staff at the center/clinic upon completion of the survey.
Demographic Page
On the first page of the survey, participants completed questions related to their demographics including their age, gender, race, tribal membership, marital status, current living arrangements, past living arrangements, highest level of education completed, annual family income, spiritual preference, previous suicidal ideation and/or attempts, number of close friends, and type(s) of current presenting concerns.
Interpersonal Needs Questionnaire
The Interpersonal Needs Questionnaire (INQ) is a self-report measure of the two interpersonal suicide risk factors examined in this study, perceived burdensomeness and thwarted belongingness.
Self-Compassion Scale
The Self-Compassion Scale (SCS; Neff, 2003) is a 26-item self-report measure of self-compassion. The positive subscales of the SCS were included in this study: self-kindness (e.g., "I try to be loving towards myself when I'm feeling emotional pain"), common humanity (e.g., "When things are going badly for me, I see the difficulties as part of life that everyone goes through"), and mindfulness (e.g., "When something upsets me, I try to keep my emotions in balance").
Participants rated each item using a 5-point Likert scale (1 = almost never to 5 = almost always).
For the current AI/AN sample, the internal consistency reliability estimates (Cronbach's alphas) for the self-compassion subscales were as follows: .83 for self-kindness, .73 for common humanity, and .78 for mindfulness. The SCS is established as a reliable and valid measure of self-compassion (Neff, 2003), and this is the first study to use the SCS with AI/AN participants.
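For readers unfamiliar with how internal consistency estimates such as those above are computed, the following is a minimal sketch of Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The item matrix here is random placeholder data, not the study's responses:

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents x k_items) matrix of item scores.
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# e.g., 242 respondents answering a 5-item subscale on a 1-5 Likert scale
fake_items = rng.integers(1, 6, size=(242, 5)).astype(float)
print(round(cronbach_alpha(fake_items), 2))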
RESULTS
Inspection of the descriptive statistics for the main study variables revealed that, on average, this sample of AI/AN people experienced mild to moderate levels of self-compassion and thwarted belongingness, with some variation in scores, and on average, mild levels of perceived burdensomeness, with less variation in scores. See Table 2 for the descriptive statistics for the main study variables.
Preliminary analyses were conducted to see how demographic variables might relate to the outcome variables of the study. T-tests were conducted to explore potential demographic group differences (categorical) in the outcome variables of perceived burdensomeness and thwarted belongingness. Pearson correlational analyses were conducted to explore the relationship of the demographic variables (continuous) with the outcome variables of perceived burdensomeness and thwarted belongingness. Age was not significantly related to perceived burdensomeness (r = -.11, p > .05) or thwarted belongingness (r = -.08, p > .05). Educational level was also not significantly correlated with perceived burdensomeness (r = -.11, p > .05) or thwarted belongingness (r = -.11, p > .05).
Annual family income was significantly and inversely related to perceived burdensomeness (r = -.16, p < .05) and thwarted belongingness (r = -.18, p < .01). Higher levels of annual family income were associated with lower levels of perceived burdensomeness and thwarted belongingness for the AI/AN adults in this study.
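For illustration, this kind of preliminary screening can be reproduced with standard tools; the sketch below (using synthetic data, not the study's) runs a Pearson correlation for a continuous demographic variable and a two-sample t-test for a categorical one against an outcome:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
income = rng.normal(size=242)
burden = -0.16 * income + rng.normal(size=242)   # toy outcome variable
gender = rng.integers(0, 2, size=242)

r, p = stats.pearsonr(income, burden)
print(f"income vs burdensomeness: r = {r:.2f}, p = {p:.3f}")

t, p = stats.ttest_ind(burden[gender == 0], burden[gender == 1])
print(f"gender difference in burdensomeness: t = {t:.2f}, p = {p:.3f}")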
Based on these preliminary findings, annual family income was statistically controlled for in the multiple regression analysis for perceived burdensomeness, and gender and annual family income were statistically controlled for in the multiple regression analysis for thwarted belongingness.
Correlation Analyses
Pearson correlational analyses were conducted to explore the bivariate relationships between and among the self-compassion subscales, perceived burdensomeness and thwarted belongingness. See Table 3 for the correlation matrix for the main study variables.
A statistically significant positive relationship was found between perceived burdensomeness and thwarted belongingness (r = .58, p < .001). More of a sense of belonging was associated with less perceived burdensomeness.
The positive aspects of self-compassion were significantly and inversely related to perceived burdensomeness, including self-kindness (r = -.27, p < .001), common humanity (r = -.10, p < .001), and mindfulness of one's thoughts and feelings (r = -.22, p = .001). The positive aspects of self-compassion were also significantly and inversely related to thwarted belongingness, including self-kindness (r = -.44, p < .001), common humanity (r = -.31, p < .001), and mindfulness (r = -.40, p < .001). Thus, being kind to oneself, feeling more connected to the common conditions of humanity, and being more mindful of one's thoughts and feelings were associated with feeling less of a burden to others and fewer struggles with belongingness in interpersonal relationships.
Multiple Regression Analyses
Two separate multiple regression analyses were conducted to explore the relationship of the self-compassion scales with 1) perceived burdensomeness and 2) thwarted belongingness.
In the first multiple regression analysis, for perceived burdensomeness, annual family income was entered into the first block of the analysis, and then the three positive self-compassion subscales (i.e., self-kindness, common humanity, and mindfulness of one's thoughts and feelings) were entered into the second block. In the first model, annual family income significantly entered the equation; in the second model, the three positive self-compassion subscales added significantly to the variance explained in perceived burdensomeness (see Table 4). In the multiple regression analysis for thwarted belongingness, gender and annual family income were entered into the first block of the analysis. In this first model, gender and annual family income significantly entered the equation and accounted for 5.6% of the variance in thwarted belongingness, F(2, 233) = 6.96, p = .001. In the second model, the three positive self-compassion subscales were added to the equation, accounting for an additional 20.3% of the variance in thwarted belongingness scores, F(5, 230) = 16.10, p < .001. Examination of the standardized beta weights for model 2 revealed that self-kindness (β = -.33, t = -3.89, p < .001), annual family income (β = -.19, t = -3.24, p < .001), and gender (β = -.18, t = -3.21, p < .01) were the significant individual predictors of thwarted belongingness. In summary, self-kindness was the strongest individual predictor of perceived burdensomeness and thwarted belongingness for this AI/AN sample. See Table 5.
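The two-block (hierarchical) structure described above can be expressed as follows. This is a hedged sketch with synthetic data whose column names merely mirror the study's variables, not a re-analysis: covariates enter in block 1, the three self-compassion subscales in block 2, and the change in R-squared between blocks is attributed to self-compassion.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 242
df = pd.DataFrame({
    "income": rng.normal(size=n),
    "gender": rng.integers(0, 2, size=n),
    "self_kindness": rng.normal(size=n),
    "common_humanity": rng.normal(size=n),
    "mindfulness": rng.normal(size=n),
})
# Toy outcome loosely echoing the direction of the reported effects.
df["thwarted_belong"] = (-0.2 * df.income - 0.2 * df.gender
                         - 0.4 * df.self_kindness + rng.normal(size=n))

block1 = sm.OLS(df.thwarted_belong,
                sm.add_constant(df[["gender", "income"]])).fit()
block2 = sm.OLS(df.thwarted_belong,
                sm.add_constant(df[["gender", "income", "self_kindness",
                                    "common_humanity", "mindfulness"]])).fit()
print(f"Block 1 R^2: {block1.rsquared:.3f}")
print(f"Delta R^2 from self-compassion: {block2.rsquared - block1.rsquared:.3f}")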
Post-hoc analyses
For participants who reported a history of suicidal ideation (n = 89), the three positive aspects of self-compassion, when considered together, accounted for 19.9% of the variance in perceived burdensomeness scores, F(3, 85) = 7.03, p < .001. However, for these same participants, the three positive aspects of self-compassion did not significantly predict struggles with belongingness in relationships with others, F(3, 85) = 1.18, p > .05. Therefore, for those AI/AN people with a history of suicidal ideation in this sample, being more self-compassionate in general was predictive of feeling less of a burden to others, but not predictive of thwarted belongingness.
DISCUSSION
The purpose of this study was to explore the positive aspects of self-compassion in relation to two interpersonal components of suicide risk, perceived burdensomeness and thwarted belongingness, among AI/AN adults.
As hypothesized, the positive aspects of self-compassion, including self-kindness, common humanity, and mindfulness of one's thoughts and feelings, were significantly and inversely related to perceived burdensomeness and thwarted belongingness for AI/AN adults. Few researchers have specifically focused on perceived burdensomeness and thwarted belongingness among AI/AN people, yet Hill (2009) recognized the unique dimensions of belongingness, which included the psychological, sociological, physical, and spiritual connections of individuals, families, and communities within the AI/AN population. However, no researchers to date have explored the self-compassion experiences of AI/AN people in relation to these variables.
In the current study, self-kindness was the most significant individual predictor of perceived burdensomeness and thwarted belongingness in this sample of AI/AN people. Therefore, those who were more kind to themselves tended to struggle less with belongingness, which is in line with Neff and McGehee's (2010) findings that self-compassion was a significant predictor of connectedness among adolescents. In previous research, self-compassion has been associated with emotional regulation (see Vettese et al., 2011). Therefore, self-compassion could serve as a buffer against negative thoughts such as thwarted belongingness or other unwanted feelings.
While this is the first study of its kind to explore how self-compassion functions as a protective factor related to perceived burdensomeness and thwarted belongingness among AI/AN adults, these findings are in line with Ali's (2014) findings with a predominantly White adolescent sample in that higher levels of self-compassion were associated with lower levels of suicidality.
Annual family income was found to be a significant predictor of perceived burdensomeness and thwarted belongingness, but only accounted for a small portion of the variance in comparison to self-compassion. Having more financial resources was associated with more belongingness and feeling less of a burden to others. Thus, the relationship between financial well-being and suicide risk factors among AI/AN people should not be underestimated.
Age was not related to perceived burdensomeness or thwarted belongingness, which is a unique finding in the suicidality literature in general. However, age as a variable was not the focus of this study.
There were no gender differences in perceived burdensomeness. However, gender was a significant individual predictor of thwarted belongingness. AI/AN men, on average, reported more thwarted belongingness than women. More research is needed to understand potential gender differences in suicide risk for AI/AN people, including relevant protective factors such as self-compassion. FitzGerald et al. (2017) found gender differences in protective factors related to suicidality (i.e., attempts) for AI/AN youth. Positive relationships in the home, school, and community were significant protective factors for girls, and positive relationships with adults in the home were the protective factor for boys.
The post-hoc findings of the current study revealed that the three positive aspects of self-compassion were significant predictors of perceived burdensomeness for AI/AN people who reported a history of suicidal ideation (n = 89), but were not for thwarted belongingness. These findings provide some support for one previous research study in which researchers found that perceived burdensomeness was related to suicidal ideation among AI/AN college students (O'Keefe et al., 2014).
Implications for Counseling Practice and Prevention Programs with AI/AN Adults
The results from this study indicate that the positive aspects of self-compassion, in particular self-kindness, common humanity, and mindfulness of one's thoughts and feelings, were significantly and inversely related to and predictive of perceived burdensomeness and thwarted belongingness for AI/AN adults; thus, self-compassion appears to be a protective factor for AI/AN adults. Given that positive aspects of self-compassion explained more than 8% of the variance in perceived burdensomeness and over 20% of the variance in thwarted belongingness among AI/AN adults seeking Indian Health Service and/or tribal center services, self-compassionate and mindfulness-based interventions should be incorporated into health and wellness programs, as well as into culturally relevant, evidence-based counseling and psychotherapy support for AI/AN adults. Mindfulness is an important technique that mental health professionals could incorporate into their sessions with AI/AN adults who feel like a burden to others and/or feel as though they do not belong. Teaching AI/AN clients how to relate to their internal experiences without judging or overanalyzing them is essential for well-being and hope, given the findings of this study.
Learning stress-reduction and mindfulness techniques will help AI/AN people focus on being in the moment and being more self-compassionate in general. Cognitive behavioral techniques and skills could be incorporated to assist AI/AN clients with their automatic thoughts, images, and core beliefs as well as their emotional well-being, with the goal of establishing a kind, compassionate relationship with their own thoughts and feelings, being more of an observer and investigator of these internal experiences, and noticing one's thoughts and feelings and learning how to specifically respond to them internally in helpful, non-judgmental ways.
The fact that self-compassion was related to less perceived burdensomeness and thwarted belongingness within the AI/AN adult community is exciting news for those developing prevention programs in such communities. Self-kindness, common humanity (i.e., realizing the commonalities in our experiences as human beings), and mindfulness (i.e., noticing and acknowledging what we think and feel, without judging ourselves and/or others) could be utilized as skills to be taught at a young age to AI/AN children in schools as well as to adults and older adults in community settings. Not only could the positive aspects of self-compassion allay any feelings of being a burden or not belonging in the future, but it could also increase the ability to cope with one's thoughts and emotions that might be encountered.
As mental health professionals advocate and support AI/AN adults, it would be important for clients' financial resources to be assessed and explored in relation to their emotional well-being and potential for perceived burdensomeness, thwarted belongingness, and/or other aspects of suicidality. If AI/AN individuals experience job loss, changes in financial resources, and/or lack of financial funds, it would be important to assist in finding financial resources as well as discussing thoughts and feelings associated with financial concerns, given that financial needs could result in people feeling like a burden on others and/or influence their sense of interpersonal connection or belonging.
Finally, mental health care professionals must recognize that AI/AN men may be more at risk for thwarted belongingness than AI/AN women, based on the results of this study. Assessing for disconnections and feelings of remorse or guilt, and/or even feelings of responsibility that could be potentially detrimental to AI/AN men, may be worthwhile. We concur with FitzGerald et al.'s (2017) recommendation that gender differences in protective factors related to suicidality must be taken into consideration when developing prevention and intervention programs for AI/AN individuals to make them more culturally and gender sensitive. Exploring the types of preventative and counseling programs that may benefit AI/AN men and women in unique ways is warranted.
Limitations of the Study and Areas for Further Research
The results from this study need to be interpreted in light of the following potential limitations. Given the survey nature of the study, it is possible that the participants in this study may have responded in socially desirable ways. Participants completed the survey in the waiting room of their IHS and tribal centers, so they may or may not have felt comfortable completing the survey with others nearby. The presenting issues that brought participants into the clinic could have potentially affected their responses to the survey. The majority of participants in this sample were AI/AN adults from the Great Plains of the United States, and thus, the results may not generalize to AI/AN adults from other parts of the country and/or from specific sovereign nations.
Further research is needed to explore the effectiveness of self-compassion and mindfulness-based interventions with AI/AN people who may present with interpersonal suicide risk factors, such as perceived burdensomeness and thwarted belongingness. Researchers could also explore how one's identification with mainstream ways, as compared to more traditional practices/ways, relates to self-compassion and interpersonal risk factors associated with suicidality among AI/AN people. Mixed methods and qualitative methods would allow future researchers to gather further insight into the personal, family/interpersonal, and tribal/cultural factors that might influence self-compassion and/or suicide risk for AI/AN people.
"year": 2021,
"sha1": "5fe371854086b60965fb90813b304b700b3baa03",
"oa_license": null,
"oa_url": "https://doi.org/10.5820/aian.2801.2021.103",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1801813846a73271bd75ef8209cdb404b2a820f8",
"s2fieldsofstudy": [
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258556912 | pes2o/s2orc | v3-fos-license | A Generalized Framework for Predictive Clustering and Optimization
Clustering is a powerful and extensively used data science tool. While clustering is generally thought of as an unsupervised learning technique, there are also supervised variations such as Späth's clusterwise regression that attempt to find clusters of data that yield low regression error on a supervised target. We believe that clusterwise regression is just a single vertex of a largely unexplored design space of supervised clustering models. In this article, we define a generalized optimization framework for predictive clustering that admits different cluster definitions (arbitrary point assignment, closest center, and bounding box) and both regression and classification objectives. We then present a joint optimization strategy that exploits mixed-integer linear programming (MILP) for global optimization in this generalized framework. To alleviate scalability concerns for large datasets, we also provide highly scalable greedy algorithms inspired by the Majorization-Minimization (MM) framework. Finally, we demonstrate the ability of our models to uncover different interpretable discrete cluster structures in data by experimenting with four real-world datasets.
Introduction
The availability of massive volumes of data, coupled with the need to understand, analyze and explore patterns in them as a means to find solutions and drive decision making, has made clustering a popular tool in data science. Cluster analysis is widely used in problems with unlabeled data and has become synonymous with unsupervised learning. It has been found helpful in a varied range of machine learning and data mining tasks, including pattern recognition, document clustering and retrieval, image segmentation, and medical and social sciences [34,41,55,59,60].
This reflects its broad applicability and usefulness as an exploratory data analysis tool, especially for large datasets. However, relatively little focus has been directed towards using clustering for predictive tasks with labeled data.
In many cases, it is natural to assume that real data is generated from complex processes that might be mixtures of discrete modes of a predictive target response. Some of these modes could just be from different processes that generate the data [14,22,32,48,54], while others may be due to implicit or explicit confounders that lead to a significant change in the response variable being predicted. Naturally, a single predictive model cannot capture such multiple relationships between the dependent and explanatory variables.
For an example use case, consider the housing price regression predictive task for a city. Crime rates influence the housing market, and in most cases, property values drop with an increase in crime [10,20]. But in contrast to this trend, housing values in inner-city or downtown areas are high regardless of the high crime rates. This positive relationship could be because of increased reporting and higher property crimes in affluent high-income neighborhoods [10,19,39]. Clearly, one regression model cannot capture these two different trends in prices with respect to the crime rate. This kind of multiple regression modeling has been found suitable for analyzing data from various domains, including housing price prediction [15], marketing analysis [24], demographic neighborhood analysis [42], and weather prediction [6].
To this end, historically, several methods have gone beyond standard unsupervised clustering to supervised or predictive versions. Most of these models from the literature fall under the clusterwise regression (CLR) category [9,40,49,56]. These models primarily aim to identify disjoint subsets, or explicit subclasses, of the data that lead to different predictive (in this case, regression) models in each cluster. However, existing methods for predictive clustering are largely bespoke for specific problems, supervised objectives, or cluster definitions, and have gone largely unused as a general tool for data science. This motivated us to take a broader perspective on clustering and build a framework to explore the predictive clustering design space.
Consider, for example, the samples of points shown in Figure 1. We generated this data consisting of points from three different regression planes, such that the points in these three disjoint groups are reasonably well separated in the feature space. Predictive clustering aims to identify these distinct modes present in the data. The plots show multiple perspectives of clustering with the supervised regression objective used to solve this problem. We can either (1) assign data to clusters without any restriction on the search space (as is the case with traditional CLR [9,49]), (2) define clusters as bounding boxes in the feature space, or (3) define clusters as the regions nearest to exemplar data centers [40,56]. The plots with the projection of points onto the feature plane show how these clustering methods differ and identify the three groups. We remark that, to date, clustering methods have been defined for (1) and (3), but only limited, approximate options are available when it comes to (2) [7,12].
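To make the bounding-box definition in (2) concrete, the following minimal sketch (our own illustration, not code from the article) treats a cluster as an axis-aligned box, with one (lower, upper) interval per feature, and computes which points fall inside it:

import numpy as np

def in_box(X, lower, upper):
    # Boolean mask of the rows of X that lie inside the axis-aligned box
    # defined by per-feature lower and upper bounds.
    return np.all((X >= lower) & (X <= upper), axis=1)

X = np.array([[0.2, 0.5], [1.5, 0.1], [0.9, 0.9]])
mask = in_box(X, lower=np.array([0.0, 0.0]), upper=np.array([1.0, 1.0]))
print(mask)  # [ True False  True]

A model in this family searches over the box bounds (and per-cluster predictors) rather than over arbitrary point assignments, which is what makes the resulting clusters directly interpretable as feature ranges.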
In this article, we seek to comprehensively explore the design space of supervised objectives and cluster definitions, which allows us to identify several important gaps in this space. To this end, we formalize a general mathematical framework for predictive clustering that subsumes existing methods and introduces new ones. We also propose global optimization methods that can directly exploit our unifying formalization, as well as general greedy optimization methods that are highly scalable for large-scale datasets, near-optimal in cases where we can compare to global methods, and which reduce to existing methodologies in some special cases. Finally, we demonstrate the power of this unified perspective through a variety of applications that exhibit how different supervised objectives and cluster definitions allow us to detect and learn important discrete structures and behaviors in the dataset.
We summarize our main contributions in the article as follows: • We present a general framework for predictive clustering that combines clustering with a supervised objective. Specifically, we focus on three clustering methods, as shown in Figure 1a, which we call arbitrary, closest center, and bounding box clustering. Furthermore, we explore two supervised loss functions in our design space, for regression and classification tasks. We identify clusterwise classification as a novel addition to the field of general linear classification. • We provide two ways of optimizing the loss functions in our models: (1) mixed-integer linear programming (MILP) for global optimization, and (2) greedy methods inspired by the Majorization-Minimization (MM) prescription of algorithms, which tackle the scalability issues of using MILP while providing comparable but sub-optimal (locally optimal) solutions. • We demonstrate the applicability of the different models in our framework with case studies on four real-world datasets and evaluate their performance against baseline linear models. These models provide highly interpretable results that we believe will help in decision and policy making when applied to data science problems.
Figure 1. Illustrative example with multiple generative modes that were uncovered with predictive clustering. (a) Different perspectives on clustering: (1) arbitrary clustering with points assigned without any restriction (left), (2) clusters defined as bounding boxes w.r.t. the features (center), and (3) clusters defined as regions closest to data centers, demonstrated here using a Voronoi plot with the three cluster centers shown in black (right). (b) Synthetic data with three distinct regression planes (dependent variable Y as a linear function of two independent variables X1, X2), where the projection of points onto the X1-X2 feature space gives well-separated clusters, shown as blue, orange, and green points.
Related work
There is a substantial body of research related to clustering and its applications in unsupervised learning tasks. However, our proposed contributions focus more on clustering as a predictive tool. Therefore, we briefly survey the literature on available clustering techniques, followed by relevant research focused on using clustering for supervised learning tasks.
K-means and alternative cluster definitions: Clustering is an extensively researched topic, both in terms of advancements in clustering techniques (engineering highly scalable and fast algorithms) and in terms of its applications to problems in data science. Since surveying this sheer mass of literature is beyond the scope of this article (see the comprehensive clustering surveys [34,59,60]), we focus only on the clustering techniques most relevant to our work.
The most commonly used method for cluster analysis, especially in the context of hard-partition clustering, is the popular K-means algorithm [37]. It is a fast heuristic algorithm designed to solve the minimum sum-of-squares clustering problem (MSSC), where the task is to choose clusters such that the points within clusters have small sum-squared errors. Several attempts have been made to solve the MSSC problem optimally using column generation and integer linear programming [2,5,13,25]; however, none of these could scale like K-means.
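For reference, the following minimal numpy sketch shows the Lloyd heuristic for the MSSC objective. It is an illustration of the K-means baseline discussed above, not code from the article, and the random initialization and stopping rule are simple placeholder choices.

```python
import numpy as np

def kmeans(X, K, iters=100, seed=0):
    """Lloyd's heuristic for the minimum sum-of-squares clustering (MSSC) problem."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]  # random initial centers
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned points
        new_centers = np.array([X[labels == k].mean(axis=0) if (labels == k).any()
                                else centers[k] for k in range(K)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```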
Similarly, several other definitions of clusters exist in the literature. Among them, density-based clustering [3,26] has gained huge popularity, primarily because of its ability to produce arbitrary-shaped clusters, in contrast to K-means, which can only deal with spherical clusters. Yet another approach is defining clusters based on grids, as first described in the CLIQUE [1] algorithm for clustering high-dimensional data. Here, the central idea was to first discretize the entire space into a mesh with a predefined grid size and then identify grids with a dense collection of points in subspaces. Although our bounding box clustering method (refer to Figure 1a) resembles the grid-based definitions of a cluster, they differ in how these clusters are identified. CLIQUE uses a bottom-up approach, taking unions of dense cells from lower subspaces to define clusters in higher dimensions. In contrast, our model directly identifies the bounding boxes based on a supervised optimization objective.
Predictive Clustering: Numerous methods have been mentioned in the literature that have moved away from discussing clustering in the traditional sense and have focused on using it for predictive purposes. However, most of these models were designed for a specific supervised learning objective or application. Therefore, we survey the supervised clustering literature in two directions: clusterwise methods for regression and classification.
Clusterwise Regression (CLR) greedy models: The central idea in clusterwise regression (CLR) is to split the data into several disjoint sets to identify the various regression modes present in them. In the pioneering work of Späth [49] on CLR, he proposed an exchange method to jointly optimize the overall regression error by unifying the regression and clustering phases. In this approach, two observations from different clusters are exchanged if doing so reduces the overall error. In follow-up work, Späth [50] proposed a faster exchange algorithm where a single observation is shifted between clusters if it reduces the overall cost. More recently, Manwani and Sastry [40] proposed the K-plane regression algorithm, which is similar, in spirit, to the K-means [37] algorithm. This approach repeatedly involves (1) identifying the best regression weights in each cluster and (2) reassigning each observation to the cluster in which it has the least error. The above heuristic approaches provide acceptable solutions in many cases; however, as with K-means, they are sensitive to initialization and may converge to sub-optimal solutions.
Optimal CLR methods: Several researchers have tried to provide globally optimal solutions for the CLR problem. Lau et al. [36] proposed a nonlinear programming formulation for a variant of the CLR problem, but they do not provide any guarantees of optimality. A more common approach seen in the literature starts from the CLR problem's quadratic programming (QP) reformulation. As a first attempt, Carbonneau et al. [16] proposed a mixed-logical quadratic programming formulation to feasibly solve the CLR problem to global optimality. They further improved upon this approach in their later works [17,18], where they used integer linear programming techniques such as column generation and repetitive branch-and-bound [13] methods coupled with heuristic algorithms.
As an alternative approach, Bertsimas and Shioda [9] proposed the CRIO model, where they used the more robust absolute error metric (similar to Späth's model in [51]) as the regression loss and used MILP to solve the problem optimally. In more recent research, Zhu et al. [62] also adopt the same approach for CLR. This approach is more elegant and computationally less intensive than the QP counterparts. Here, we remark that the CLR method, specifically CRIO [9], is captured in our framework under arbitrary clustering (refer to Table 1). This approach to clustering works reasonably well for some instances, especially when the regression lines from two different clusters intersect. However, as shown in Figure 1a, arbitrary assignment fails to identify the three well-separated clusters. Thus, homogeneity among points in clusters w.r.t. the feature variables can be a desirable trait, as argued by several researchers in their works on CLR [14,40,48,54,56].
To address this drawback and obtain homogeneous clusters, Manwani and Sastry [40] expanded on their work to present a modified K-plane regression algorithm. In this approach, the authors added the MSSC loss (w.r.t. the independent variables) to the squared error regression loss (with a regularization parameter). The same approach was used by Silva et al. [48]. In more recent research, the authors of [56] presented the Optimal Predictive Clustering (OPC) method, which uses a variant of the cost function used in the above approaches. They included the dependent variable (along with the features) while computing the MSSC error per cluster. Furthermore, they provided a greedy algorithm based on K-means++ [4] to warm-start the mixed-integer quadratic programming method and obtain near-optimal results. These approaches are arguably similar to our regression with the closest center clustering method. However, in contrast to OPC [56], we use the absolute error cost function for regression and hence obtain a computationally more tractable MILP formulation for global optimization. Moreover, we provide a greedy optimization methodology that differs from the modified K-plane regression algorithm [40]. We also present a novel bounding box clustering methodology to solve the CLR problem. With this approach, not only do we retain the critical advantage of the closest center method of having coherent clusters, but we also identify a set of decision rules that define each cluster. This adds to the interpretability of our results. This is in a similar vein to model trees [45,57], such as CART [12] and OCT [7], where a greedy approach is used to build decision trees with regression models at the leaves. Also, to solve model trees optimally, Bertsimas and Dunn [8] presented the Optimal Regression Trees with linear predictions (ORT-L) model. Fundamentally, these approaches build a decision tree in search of good regression fits at the leaves (with just binary splits on a single feature at each node); hence, we believe that this approximates our approach. In contrast, our model performs a more holistic search of the feature space to identify the best set of bounding boxes.
Clusterwise Classification: Much of the research at the intersection of clustering and classification has been along the lines of either cluster-and-classify or clustering-based classification. In the former approach, clustering precedes the classification task. One such approach clustered large datasets into a relatively small number of clusters and used the cluster centroids to complete the classification task [27]. Other methods were more application-specific, where the data was first clustered and then a classification model was run on each cluster. Fahad et al. [28] used this approach for activity recognition in smart homes; Tammenah et al. [53] used hierarchical clustering with neural networks to classify road traffic accidents.
In contrast to the cluster-and-classify approaches, cluster-based classifiers perform the classification task assisted by clustering. Bertsimas and Shioda [9], in their CRIO model, assigned one class of points to clusters such that no points of the other class belong to these clusters. The drawback of this approach is that it can only address binary classification problems. Furthermore, clustering-assisted information retrieval and text classification are also common [29,61]. In more recent research, clustering was used in information retrieval to find multiple clusters that hold highly relevant retrieved information [11]. All these approaches identify clusters such that all observations in them belong to the class of interest.
In this work, we focus on a per-cluster classification model approach, similar in spirit to the CLR approach. Unlike the cluster-and-classify approaches, our clusterwise classification (CLC) models jointly optimize the overall error of the clustering and classification tasks. Moreover, we use the closest center and bounding box clustering for our CLC tasks. The bounding box approach for CLC can be seen as similar to classification trees [12] and their optimal versions, called optimal classification trees [7]. However, in contrast to classification tree methods, which partition the feature space to propose one class per leaf, our approaches have one classification model per partitioned region.
In summary, with our framework, we were able to identify and address critical gaps in the supervised clustering literature while simultaneously capturing some existing models like CLR and CRIO (refer to Table 1). Overall, in our work: (1) we directly capture the MILP-based CRIO approach and the K-plane regression greedy algorithm; (2) we provide a different problem formulation and loss function (MAE, which is more robust) than the OPC model; (3) we propose a greedy optimization strategy that is different from the modified K-plane regression algorithm; (4) we describe an alternative approach to CLR with our bounding boxes clustering model; and (5) we present a novel clusterwise classification approach.
Methodology
In this section, we formally present the predictive clustering framework and the mathematical notation we use. We then describe our two optimization procedures: (1) mixed-integer optimization (MIO) and (2) greedy algorithms.
Problem definition
As the name suggests, the framework for predictive clustering consists of two main "ingredients": 1. Prediction: optimizing a supervised objective function to predict the label or the dependent variable. Typical loss functions we include are mean squared error (MSE) and mean absolute error (MAE) for linear regression tasks, and the hinge loss of soft-margin support vector machines (SVM) for both binary and multi-class classification tasks. 2. Cluster assignment: assigning every observation in the data to a cluster based on an assignment choice. As previously mentioned, the available options are arbitrary (Arbit), closest center (CC), and bounding boxes (BB) clustering (refer to Figure 1a).
In the following subsections, we describe the above-mentioned clustering methods and loss functions in detail.
Notation
We assume that we have N observations D = {(x_i, y_i)} (for i ∈ N = {1, ..., N}) in the data, where x_i is the feature vector and y_i is the label to be predicted. Moreover, we assume that the features are d-dimensional (i.e., x_i = (x_{i1}, ..., x_{id})). We note that clustering without any prediction is the trivial case in which the labels y_i corresponding to all observations are null. The goal in hard-partitioning clustering is to assign each of the N observations to one of the K clusters {C_1, ..., C_K}, where K ≤ N. We also have binary indicator variables c_{ik} to identify cluster assignments for all observations in the data. If a point i is associated with cluster C_k, then c_{ik} = 1; otherwise c_{ik} = 0. With the cluster definitions as above, we desire the following properties: • No overlap between clusters: C_k ∩ C_j = ∅ for all k, j ∈ K with k ≠ j. • All observations are assigned to clusters: C_1 ∪ ... ∪ C_K = D, i.e., ∑_{k=1}^{K} c_{ik} = 1 for every i. We now define notation to capture the various cluster definitions and supervised loss functions. We use the following notation throughout the rest of the article: • Variables θ_k denote the cluster-specific parameters of our model. They can be the weights of the regression planes or the weights defining the hyperplanes separating the classes in a classification task. • The per-datum error l(x_i, y_i, θ_k) indicates, for instance, the hinge loss in the case of SVM or the squared error for regression associated with each observation. • The overall error L(θ, c) for any combination of clustering and supervised objective function is given by

L(θ, c) = ∑_{i=1}^{N} ∑_{k=1}^{K} c_{ik} l(x_i, y_i, θ_k).   (1)

The per-datum error l(x_i, y_i, θ_k) is multiplied by the indicator variable c_{ik} to ensure that each observation only accounts for the error associated with the cluster it is assigned to.
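A direct transcription of Equation 1 into code might look as follows. This is our own sketch, not the authors' implementation; `per_datum_loss` stands for whichever loss l(x_i, y_i, θ_k) is in use.

```python
import numpy as np

def overall_error(per_datum_loss, X, y, theta, c):
    """Equation (1): L(theta, c) = sum_i sum_k c_ik * l(x_i, y_i, theta_k).
    c is an N x K 0/1 assignment matrix; theta stacks the K cluster parameters."""
    N, K = c.shape
    L = np.array([[per_datum_loss(X[i], y[i], theta[k]) for k in range(K)]
                  for i in range(N)])
    return float((c * L).sum())   # c_ik zeroes out errors w.r.t. unassigned clusters
```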
Supervised learning objective
In this subsection, we discuss the supervised error functions we used in our framework.
1. Regression loss: The central idea in CLR is to cluster the data while simultaneously learning cluster-specific regression models through a joint optimization methodology. We can use either the mean squared error (MSE) or the mean absolute error (MAE) for the regression. With MAE, the total error is given by

L(θ, c) = ∑_{i=1}^{N} ∑_{k=1}^{K} c_{ik} |y_i − θ_k^T x_i|.   (2)

Here, θ_k stands for the regression weights in the k-th cluster, and the per-datum loss in this case is l(x_i, y_i, θ_k) = |y_i − θ_k^T x_i|. While both error measures are quite similar, MAE penalizes outliers less substantially and hence is more robust than MSE. Moreover, the overall CLR problem with the MAE loss reduces to a MILP formulation, which can be more tractable and computationally less expensive than a quadratic programming formulation with the MSE loss.
2. Classification loss: Similar to CLR, the purpose of CLC is to group points and run per-cluster classification models to drive the overall classification error to a minimum. This article only focuses on multi-class classification with SVM, wherein we find the hyperplanes that best separate the multiple classes found in each cluster. The classical approach to solving the multi-class problem with SVM is to employ a collection of binary classifiers with the one-vs-all classification trick [46]. However, for our MILP formulations, we utilize the first "single machine" approach for the multi-class case, called the Weston and Watkins SVM (WW-SVM) [58]. This approach provides a single error value per datum, which we can then elegantly plug into our framework. For an M-class classification task, the overall cost function with this approach, along with an L1 regularization [23] of the coefficients, is given by

L(θ, c) = ∑_{k=1}^{K} ∑_{m=1}^{M} ‖w_{km}‖_1 + C ∑_{i=1}^{N} ∑_{k=1}^{K} c_{ik} l(x_i, y_i, θ_k).   (3)

Here, the per-datum loss is given by l(x_i, y_i, θ_k) = ∑_{m ≠ y_i} max(0, 1 − (w_{k,y_i} − w_{k,m})^T x_i).
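For concreteness, the two per-datum losses could be coded as below. This is our own sketch; note that margin conventions for the Weston-Watkins hinge vary across references, and the margin of 1 used here is one common choice.

```python
import numpy as np

def mae_loss(x, y, theta_k):
    # per-datum regression loss |y_i - theta_k^T x_i| (x includes an intercept entry)
    return abs(y - theta_k @ x)

def ww_svm_loss(x, y, W_k):
    # Weston-Watkins hinge for class index y with per-cluster weight matrix W_k (M x d):
    # sum over wrong classes m of max(0, 1 - (w_y - w_m)^T x)
    margins = 1.0 - (W_k[y] - W_k) @ x
    margins[y] = 0.0                      # no penalty for the true class itself
    return float(np.maximum(margins, 0.0).sum())
```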
Cluster assignment
In this subsection, we briefly introduce the three unique clustering methods currently included in our framework. We provide the mathematical formulations necessary to achieve these cluster definitions in Section 3.5 along with a mixed-integer optimization procedure to solve the overall model.
1. Arbitrary clustering (Arbit): The assignment of a point to a cluster is not subject to any constraints, and optimizing the supervised learning objective drives these assignments. For example, when we combine regression and arbitrary clustering, we obtain the traditional CLR model [9,49]. The main advantage of arbitrary clustering is its ability to find overlapping clusters, specifically intersecting regression lines in the case of CLR. However, as noted previously with the synthetic data example in Figure 1, this method fails to identify the three well-separated clusters and instead provides overlapping clusters as its solution. Hence, we seek other clustering methods that provide within-cluster homogeneity.
2. Closest center (CC) clustering: In this clustering method, points are assigned to their closest cluster center to give spherical-shaped coherent clusters in the feature space. The central idea is to determine the best K cluster centers such that assigning observations in the data to them minimizes the overall loss function. The cluster centers help interpret and analyze the profile of points that belong to a cluster.
3. Bounding boxes (BB) clustering: The fundamental idea is to define clusters as axis-parallel hypercuboids (rectangles in the two-feature case, as shown in Figure 1a); observations that fall within the boundaries of a cluster belong to that cluster. A key benefit is that clusters can now be characterized by a set of decision rules like a DNF expression, making the models highly interpretable.
Mixed Integer Optimization
Having presented the two components of our framework, clustering and the prediction objective, in the previous subsections, we now show how to marry them together to obtain the desired model. The available model choices in our design space are shown in Table 2. We can mix and match the three cluster definitions and the two loss functions to give an array of models tailored to specific problems. Our general strategy for optimization is to define a set of constraints for each of the three clustering methods and combine it with the previously described supervised objective functions. We then employ MIO to obtain globally optimal results for our models. The general form of the objective function is given in Equation 1. The objective function in this form is non-linear. Therefore, we reformulated the cost function using the "big-M" method as follows:

min ∑_{i=1}^{N} ∑_{k=1}^{K} e_{ik}  subject to  e_{ik} ≥ l(x_i, y_i, θ_k) − M(1 − c_{ik}),  e_{ik} ≥ 0, ∀ i, k.   (4)

With this reformulation, we force the new variable e_{ik} to take the value of l(x_i, y_i, θ_k) when c_{ik} = 1. Additionally, when c_{ik} = 0, minimization of the objective function along with the constraints ensures that e_{ik} = 0, i.e., when an observation does not belong to a cluster k, it does not incur a prediction error w.r.t. that cluster. We remark that the choice of the big-M is critical in ensuring that the reformulation works as expected.
In clustering, the main objective is to associate each point to one cluster and further add clustering type-specific restrictions. This is achieved by appropriately placing constraints on the indicator variables c ik . We describe these constraints in Table 3.
In the case of (1) arbitrary clustering, we used constraints to make sure that a point is assigned to only one cluster; (2) closest center clustering, we introduced variables d_i to capture the distance between a point and its cluster center β_k. We added this variable to the objective function to ensure that points are assigned to their closest cluster center. A hyperparameter λ was also used to trade off between the supervised error and the point-to-cluster-center distances. In such a formulation, the indicator constraints along with the minimization criterion ensure that when c_{ik} = 1 then d_i = ‖x_i − β_k‖_1. We chose the L1 norm to compute distances between points and cluster centers in order to have a computationally more tractable linear programming formulation; (3) bounding box clustering, we employed additional variables x_{kj}^{max} and x_{kj}^{min} to define the edges of the bounding box, and indicator variables I_{ikj} to force points that fall within these boundaries to belong to that cluster.
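As an illustration of the big-M MILP for the simplest case (arbitrary clustering with the MAE loss), the sketch below uses the open-source PuLP modeling library rather than the Gurobi solver the authors use in their experiments. It is our own reconstruction, not the authors' code, and M must be chosen to upper-bound every per-datum error for the reformulation to be valid.

```python
import numpy as np
import pulp

def clr_arbit_milp(X, y, K, M=1e3):
    """Arbitrary-clustering CLR with the MAE loss as a MILP (big-M, Equation 4)."""
    N, d = X.shape
    prob = pulp.LpProblem("CLR", pulp.LpMinimize)
    idx = [(i, k) for i in range(N) for k in range(K)]
    c = pulp.LpVariable.dicts("c", idx, cat="Binary")       # assignment indicators
    e = pulp.LpVariable.dicts("e", idx, lowBound=0)         # linearized errors
    th = pulp.LpVariable.dicts("th", [(k, j) for k in range(K) for j in range(d)])
    prob += pulp.lpSum(e[i, k] for i, k in idx)             # objective: total MAE
    for i in range(N):
        prob += pulp.lpSum(c[i, k] for k in range(K)) == 1  # one cluster per point
        for k in range(K):
            pred = pulp.lpSum(th[k, j] * X[i, j] for j in range(d))
            # force e_ik >= |y_i - pred| when c_ik = 1; deactivated when c_ik = 0
            prob += e[i, k] >= (y[i] - pred) - M * (1 - c[i, k])
            prob += e[i, k] >= (pred - y[i]) - M * (1 - c[i, k])
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    assign = np.array([max(range(K), key=lambda k: c[i, k].value()) for i in range(N)])
    theta = np.array([[th[k, j].value() for j in range(d)] for k in range(K)])
    return assign, theta
```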
With the appropriate choice of the loss function, which was MAE for regression and the L1-regularized SVM loss for classification, we reformulated our overall problem as a MILP. However, such a MILP-based approach is NP-hard and a very difficult problem to solve [36]. This methodology is only practical for small datasets with a few hundred observations. Therefore, we describe greedy approaches for optimizing our models in the following subsection. We used our MILP-based solutions to benchmark these greedy methods on synthetic datasets.
Greedy Optimization
We were inspired by the Majorization-Minimization (MM) [33,35,43] algorithm framework to build our greedy methods. Fundamentally, the MM prescription for constructing algorithms is based on the principle of identifying a suitable, "easy to optimize" surrogate function to assist in the optimization of a non-convex objective. The algorithms iteratively optimize a sequence of these surrogate functions to drive the optimization of the original objective.
Formally, in a minimization task for an objective function f(θ) w.r.t. a parameter θ, we have a surrogate majorizing function g_t(θ) at the t-th iteration satisfying the following: (1) the touching condition f(θ^{(t)}) = g_t(θ^{(t)}), which ensures that both functions have the same value at θ^{(t)}; and (2) the condition that g_t(θ) majorizes f(θ), i.e., f(θ) ≤ g_t(θ). At each time step, we minimize the majorizing function to obtain the value of the parameters for the next time step, θ^{(t+1)}. This process is repeated to drive the original objective to a minimum, but without assured convergence to a global optimum. The commonly seen expectation-maximization (EM) approach is a special case of the MM algorithm.
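As a self-contained illustration of the MM recipe (our own example, not one from the article), consider minimizing the MAE regression loss: majorizing each |r| at r_t by r^2/(2|r_t|) + |r_t|/2 (equal at r = r_t, and above |r| elsewhere) turns the surrogate minimization into a weighted least-squares step, i.e., iteratively reweighted least squares.

```python
import numpy as np

def l1_regression_mm(X, y, iters=50, delta=1e-8):
    """MM for sum_i |y_i - theta^T x_i| via the quadratic majorizer of |r|."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]     # initialize with the L2 fit
    for _ in range(iters):
        r = y - X @ theta
        w = 1.0 / (2.0 * np.abs(r) + delta)          # weights from the majorizer
        WX = X * w[:, None]
        theta = np.linalg.solve(X.T @ WX, WX.T @ y)  # minimize the surrogate
    return theta
```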
We briefly describe our algorithm and the elements of the MM framework that we adopted in our greedy search for clusters with the joint optimization of the supervised loss. We used the 'Predictive-clustering' algorithm to wrap the overall procedure and an 'Assignment' subroutine to assign points to clusters.

Algorithm 1 Predictive-clustering algorithm
Require: data D, number of clusters K, convergence threshold
1: randomly initialize the assignment variables c^{(0)}
2: t ← 0
3: while the loss has not converged up to the threshold do
4:    θ^{(t)} ← per-cluster supervised fit with the assignments c^{(t)} fixed
5:    c^{(t+1)} ← Assignment(D, θ^{(t)}, clustering type)    ⊲ returns a new assignment
6:    t ← t + 1
7: end while
8: return c_{ik}, θ_k
1. Predictive-clustering algorithm: This algorithm, as shown in Algorithm 1, runs an iterative two-step procedure until the loss function converges up to a threshold. First, the cluster assignment variables are randomly initialized. This is followed by an iterative procedure that involves: (1) optimizing the error function after fixing the point-cluster assignments to learn a new set of parameters θ^{(t)} (line 4 in Algorithm 1); and (2) reassigning points to clusters based on the new parameters and the clustering criteria (line 5 in Algorithm 1). We utilized the more traditional MSE loss function for regression and the L2-regularized SVM for classification tasks in our greedy methods. As a result, when the cluster assignments are fixed, the overall objective reduces to a per-cluster supervised learning (regression or classification) problem with a smooth convex loss function that is easy to solve. Consider the regression case: when the cluster assignments c^{(t)} are fixed, the surrogate objective is

g_t(θ) = ∑_{k=1}^{K} ∑_{i=1}^{N} c^{(t)}_{ik} (y_i − θ_k^T x_i)^2.   (5)
Algorithm 2 Assignment function
Require: data D, parameters θ^{(t)}, clustering type
1: c^{(t+1)}_{ik} ← reassign each point to the cluster with the lowest per-datum error (Equation 6); if Arbitrary clustering, stop here
2: compute the new cluster centroids from c^{(t+1)}
3: if Closest center clustering then
4:    assign points to their closest centroid (Euclidean distance)
5: else if Bounding box clustering then
6:    assign points to their closest centroid in the L1 norm, to have them inside bounding boxes
7: end if
8: return c^{(t+1)}

Under the MM framework definitions, the function g_t(θ) in Equation 5 is our easy-to-solve convex surrogate function. Optimizing this function gives us the best regression weights for the next iteration, θ^{(t)}_k. This step is followed by the reassignment step, where the 'Assignment' function is called to return the indicator variables c^{(t+1)}_{ik} for the next iteration.
2. Assignment function: The reassignment step is more complicated since it needs to address the different cluster assignment criteria. Here, the cluster-specific parameters θ^{(t)}_k are fixed. This subroutine, as shown in Algorithm 2, reassigns a point to a different cluster if it has a lower prediction error when assigned to that cluster. Continuing the regression example, the new assignments are

c^{(t+1)}_{ik} = 1 if k = argmin_{k'} l(x_i, y_i, θ^{(t)}_{k'}), and 0 otherwise.   (6)

When the function stops at this step (line 1 in Algorithm 2) and returns the new assignment variables c^{(t+1)}_{ik}, we arrive at the result for the arbitrary clustering case. These new assignment variables can now be used to define the surrogate function for the next iteration by plugging them into Equation 5. Precisely, this assignment step, as described in Equation 6, ensures the "touching condition" for the next time step under the MM framework definition. A similar procedure is described in recent work by Manwani and Sastry [40] (called the K-plane regression algorithm) to solve the traditional CLR problem, but they do not make this connection between their algorithm and the MM approach.
Furthermore, we extend this assignment subroutine to address the other clustering types by slightly deviating from the MM framework. The function achieves the other clustering methods by either: (1) computing the new cluster centroids (from the variables c^{(t+1)}_{ik}) and reassigning all points to the closest cluster center using the Euclidean distance metric to get the closest center clustering (line 4 in Algorithm 2); or (2) computing the new centroids as above but now assigning points to the closest cluster centers using the L1 norm distance metric to obtain approximate bounding box clustering (line 6 in Algorithm 2). This is because when Voronoi diagrams are plotted with the L1 norm distance, the cell edges are axis-parallel, giving an approximate bounding box shape.
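To make Algorithms 1 and 2 concrete, here is a minimal numpy sketch of the greedy regression variant. It is our own illustration, not the authors' code; it uses the MSE loss and the centroid-based reassignments described above, and the default parameters are placeholders.

```python
import numpy as np

def fit_clusters(Xb, y, assign, K):
    # line 4 of Algorithm 1: per-cluster least-squares fit with assignments fixed
    thetas = np.zeros((K, Xb.shape[1]))
    for k in range(K):
        idx = assign == k
        if idx.any():
            thetas[k] = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)[0]
    return thetas

def assignment(Xb, y, thetas, mode):
    # line 1 of Algorithm 2: reassign by per-cluster prediction error (Equation 6)
    errs = (y[:, None] - Xb @ thetas.T) ** 2             # N x K error matrix
    assign = errs.argmin(axis=1)
    if mode == "arbitrary":
        return assign
    feats = Xb[:, 1:]                                    # drop the intercept column
    K = thetas.shape[0]
    cents = np.array([feats[assign == k].mean(axis=0) if (assign == k).any()
                      else feats.mean(axis=0) for k in range(K)])
    if mode == "closest_center":                         # Euclidean Voronoi cells
        d = ((feats[:, None, :] - cents[None]) ** 2).sum(-1)
    else:                                                # "bounding_box": L1 cells
        d = np.abs(feats[:, None, :] - cents[None]).sum(-1)
    return d.argmin(axis=1)

def predictive_clustering(X, y, K, mode="closest_center", iters=100, tol=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([np.ones(len(X)), X])           # add an intercept column
    assign = rng.integers(0, K, size=len(X))             # line 1: random initialization
    prev = np.inf
    for _ in range(iters):
        thetas = fit_clusters(Xb, y, assign, K)
        assign = assignment(Xb, y, thetas, mode)
        preds = (Xb @ thetas.T)[np.arange(len(y)), assign]
        loss = ((y - preds) ** 2).sum()
        if prev - loss < tol:                            # convergence up to threshold
            break
        prev = loss
    return assign, thetas
```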
Results
In this section, we experimentally investigate the ability of our models to converge to the ground truth using synthetic datasets. We compare our greedy methods with the MILP-based approach, and report the results from our experiments. We then report results for four real-world datasets to motivate and demonstrate the applicability of our models.
Performance Evaluation
In this subsection, we benchmark the performance of our greedy methods against the MILP-based approaches and empirically show, using synthetic datasets, that these methods perform well. We obtain globally optimal solutions with MILP-based approaches, but they are not scalable. In contrast, greedy methods may not guarantee global optimality, but they are more practical for real-world data. Therefore, we designed our experiments with the aim of understanding (1) how well these greedy methods learn the underlying true generative model for several synthetic datasets; and (2) how time-efficient they are compared to MILP methods.
To implement our MILP-based approach, we employed the commercially available Gurobi solver [30], which is free for academic use. We evaluated all our models on a desktop computer with an 8-core CPU at 3.2 GHz and 8 GB of memory. For our MILP models, we fixed the exit optimality-gap threshold at 5%. We also prescribed an upper limit of 1 hour of running time per experiment. For our greedy methods, each evaluation was carried out by averaging the results over ten independent runs of these models on the synthetic data.
Since we aim to understand the ability of our models to learn the underlying ground truth, we chose different generative models to construct the synthetic datasets. First, we took two feature variables and generated reasonably well-separated clusters of points in this feature space with the number of clusters K ∈ {2, 3}. Then, we used randomly chosen cluster-specific regression weights to give two datasets for the CLR task (Gaussian noise was also added). Similarly, two datasets for the CLC task were generated with a binary classification objective per cluster (the hyperplanes separating the classes differ across clusters) with some noise (class labels assigned randomly). Finally, we ran our experiments varying the size of the data N from 20 to 10^4.
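A minimal sketch of the kind of generative model described above for the CLR task, assuming two features, cluster-specific linear planes, and Gaussian noise; all constants are illustrative placeholders rather than the values used in the experiments.

```python
import numpy as np

def make_clr_data(N, K=3, noise=0.5, sep=6.0, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-1, 1, size=(K, 2)) * sep   # well-separated feature clusters
    weights = rng.uniform(-3, 3, size=(K, 3))         # per-cluster (intercept, w1, w2)
    z = rng.integers(0, K, size=N)                    # latent true cluster labels
    X = centers[z] + rng.normal(scale=1.0, size=(N, 2))
    Xb = np.column_stack([np.ones(N), X])
    y = (Xb * weights[z]).sum(axis=1) + rng.normal(scale=noise, size=N)
    return X, y, z
```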
Regression case: We report the results in Figure 2a with the experimental setup described above for the two datasets. Here, we compared the MILP-based and greedy methods for each of the three clustering types. The evaluation metric we utilized was the overall R^2 score, to measure the goodness of the regression fit across the different clusters.
We observed that MILP for the closest center and arbitrary clustering methods was not feasible for more than N = 250 observations. In fact, the optimality gap was over 30% at the 1-hour exit condition for the N = 250 case, and hence we do not report results for larger N values. Interestingly, we found that MILP for the bounding box method is much more scalable. Moreover, it is evident that the greedy methods perform well in most cases and are comparable to the MILP methods. We also note that the performance of greedy CLR with arbitrary assignment is good for synthetic dataset 2 but poor for the other dataset. This is because overlapping regression planes, although not representative of the underlying trends in the data, can sometimes lead to a lower error due to the added noise.
Classification case: Similarly, we report the classification results in Figure 2b for our four models: closest centroid and bounding box clustering with greedy and MILP approaches. Here, we used the accuracy metric to evaluate our models. It is evident that the greedy methods perform as well as the MILP on most occasions. While it may be concerning that the greedy methods performed poorly when N ≤ 50, we remark that our greedy methods can sometimes reach a solution with all points in the same cluster when the number of observations is very small, resulting in a poor local minimum.
In conclusion, these evaluations show that the greedy algorithms provide scalable solutions that are a good approximation to the MILP-based methods. Furthermore, with the bounding boxes MILP method succeeding in attaining the 5% optimality-gap threshold even in cases with more than 1000 observations, we found that it is significantly more scalable than the MILP for the other clustering methods. We believe that defining clusters as bounding boxes introduces much stronger constraints (or cuts on the feasible space), resulting in a much-reduced search space for the MILP solvers.
Case Study
In this section, we illustrate the relevance of the array of models available in our design space by analyzing four different real-world datasets picked from a diverse set of domains. Each of these application problems asks a very different question, and we show how we can mix and match tools available in our framework to address them. Through these case studies, we aim to explore our models' ability to (1) scale to large datasets, (2) perform better than baseline linear models, and (3) provide highly interpretable results that help uncover the different underlying modes of behavior in data. We focus on benchmarking model performance with the Boston housing dataset, interpretability of results with the San Francisco crime rate and FAA Wildlife-Strike datasets, and the model's ability to scale with the MovieLens 100K dataset.
As a general preprocessing step, we partitioned the data into train (65%), validation (15%), and test (20%) sets to tune our hyperparameters (with a focus on finding the best number of clusters K), and we report 5-fold cross-validation results. We used the R^2 score and the accuracy metric to evaluate our regression and classification models, respectively. Furthermore, we compared the results from our greedy algorithms with baseline Lasso regression and one-vs-all SVM models from the sklearn package in Python [44].
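The split and baselines described here could be set up as follows. This is a sketch under our own assumptions: the Lasso penalty is a placeholder value, and LinearSVC implements the one-vs-rest scheme.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.svm import LinearSVC

def split_65_15_20(X, y, seed=0):
    # first peel off 35%, then split it 15/20 into validation and test
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.35, random_state=seed)
    X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=20 / 35,
                                                random_state=seed)
    return X_tr, X_val, X_te, y_tr, y_val, y_te

reg_baseline = Lasso(alpha=0.1)                  # alpha is a placeholder value
clf_baseline = LinearSVC(multi_class="ovr")      # one-vs-rest linear SVM
```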
Boston Housing data
We use the popular and well-studied Boston housing dataset to perform a clusterwise regression analysis and benchmark our model's performance. As mentioned previously, we expect property values to exhibit multiple trends in different parts of the city. This makes Boston housing an interesting and relevant dataset to analyze using CLR and to check whether our models can identify the various trends.
The dataset is small, with N = 506 observations and 13 features. The variable for prediction is the median value of houses per census tract. The list of features, along with the prediction variable, is shown in Table 4. A description of these features can be found in standard references [56]. We used our greedy methodology for the CLR-CC model to train on this dataset. Our choice was based on the idea that the cluster centroids can help in understanding the average socio-economic and structural feature values in a cluster, which in turn explain the cluster-specific regression trends. We trained our model with K ∈ {2, ..., 7} clusters. The best results were found with 6 clusters, with an out-of-sample R^2 score of 0.8622. This is very close to the average test R^2 score of 0.863 reported by the authors in [56] with their optimal OPC model for the same dataset with 6 clusters.
We report the feature averages and regression weights for each cluster in Table 4. It is evident from cluster 5 that property values are lowest for old houses in areas with a very high crime rate. We generally observe that prices vary inversely with the crime rate (cluster 5, for example). Yet, in cluster 3, both property values and crime are high, and the variables are positively related (weight of 5.78), which goes against the usual trend. This could potentially be a cluster of housing properties in Downtown, where crime is generally high and houses tend to be very expensive despite being old and small (also observed in Table 4). Another pattern worth noting is in cluster 1, which has the highest average housing price. Here, prices increase steeply when the number of rooms increases and the educational environment improves (indicated by the PTRATIO variable). Overall, our model is able to pick up on the different modes in the data and discover unique patterns.
FAA Wildlife-Strike data
The FAA Wildlife-Strike database contains the records of all reported aircraft-wildlife strikes (mostly bird strikes) in the US over the last three decades. A general upward trend in the number of bird strikes has been observed over the years, as shown in Figure 4. This could be caused by many factors, such as increased flights and/or birds, or increased reporting every year. We were motivated to explore this dataset with predictive clustering to answer some of these questions.
We were mainly interested in two sets of feature variables: the 'level of damage' caused to the aircraft due to the bird strike and the 'region' in the US where it took place. In the database, we found six levels of damage: minor, substantial, uncertain level, destroyed, unknown (damage not reported), and none (no damage); and five regions in the US: Midwest, Northeast, South, West, and unknown (when the region is not known). These indicator levels were encoded as binary variables to generate our features. For example, the variable 'South' = 1 when a bird strike took place in the South region of the US, and 0 otherwise. Finally, after the preprocessing and transformation steps, we grouped the individual bird strike records by year of the strike, damage level, and US region. The resulting count of records, an aggregate of the number of strikes w.r.t. the features, was used as the prediction variable.
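The aggregation described above might be implemented as in the following pandas sketch. The file name and column names are hypothetical, since the FAA database uses its own field names.

```python
import pandas as pd

# one row per reported strike; the file and column names here are hypothetical
df = pd.read_csv("faa_wildlife_strikes.csv")
dummies = pd.get_dummies(df[["damage", "region"]], prefix=["Damage", "Region"])
feats = pd.concat([df[["year"]], dummies.astype(int)], axis=1)

# one observation per unique (year, damage level, region) combination,
# with the strike count as the prediction variable
grouped = feats.groupby(list(feats.columns)).size().rename("n_strikes").reset_index()
X, y = grouped.drop(columns="n_strikes"), grouped["n_strikes"]
```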
Our strategy was to use our greedy CLR-BB model to identify highly interpretable clusters and capture different regression lines describing the prediction variable. We have N = 803 observations with 12 features in our generated dataset. We trained our model with K ∈ {2, 3, 4, 5} clusters and found that the K = 4 case gave the best results in terms of interpretability and performance. The average out-of-sample R^2 score for our model was 0.929, much higher than the R^2 score of 0.613 for the baseline lasso regression model. We report the regression weights and the min-max ('range') of the prediction variable in Table 5. We also leverage our bounding box clustering model's ability to define clusters as a set of decision rules to build a tree-shaped architecture, as shown in Figure 3. It is evident from Table 5 that cluster 4 has a substantially lower number of bird strikes reported, and the slope w.r.t. the year feature is very small. From the tree in Figure 3, we see that this cluster corresponds to the cases where the flight had minor to substantial damage, i.e., when the 'Damage None' and 'Damage unknown' variables are both 0. For the "no" damage case, the model partitioned the data based on the regions to give 3 clusters. Clusters 2 and 3 correspond to bird strike reporting from the South and Northeast regions, respectively. Clearly, the highest reporting was from the South region, along with a steeper slope w.r.t. the year variable. This is reflected in Figure 4. The plot for the South region in Figure 4 shows two trends: one which increases steeply, corresponding to the "no" damage case (cluster 2), and another which remains flat, corresponding to the "some" damage case (cluster 4). Similarly, the plot for the Northeast region represents clusters 3 and 4. Overall, our model was able to obtain very clear partitions along the features, identifying regions of high and low bird strike activity while also giving clarity on the damage levels in these regions. Because only the curve for the "no" damage case is increasing, there is reason to believe that increased awareness among pilots, rather than an increase in strikes, has resulted in the higher reporting.

Fig. 3: Decision-rule-based tree architecture representing the 4 clusters obtained in the FAA Wildlife-Strike dataset analysis.
SF Crime dataset
Through this case study, our goal was to categorize crime rates in the census tracts of San Francisco into three classes (high, medium, or low) and to understand the relationship between crime in neighborhoods and census features. Moreover, we were motivated to leverage our models to conduct a demographic analysis with a geospatial dataset and seek readily interpretable results. We used our greedy classification bounding box clustering model to conduct this study. Since crime patterns primarily depend on location and intrinsic socio-economic features, we used the bounding boxes method to obtain spatially coherent clusters. To facilitate this study, we used the Longitudinal Tract Database (LTDB) [38] to get socio-economic and demographic features for the census tracts (2010) in San Francisco (SF). We then obtained the crime incident reports from the police department database from 2003 to 2018 from DataSF (an open data portal for SF). We performed several preprocessing steps to prepare the data. First, we computed the per-tract count of property and violent crime incidents reported, and then assigned the class labels low, medium, and high (corresponding to classes 1, 2, and 3) based on it. Next, we used mutual information [21] to pick the following census features [38]: housing units in multi-unit structures (multi), persons in poverty (npov), median house value (mhmval), people with at least a four-year college degree (col), professional employees (prof), per-capita income (incpc), latitude, and longitude (corresponding to the central point in a tract).
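The mutual-information feature screening could be done with scikit-learn as sketched below; `census` and `crime_class` are hypothetical names for the LTDB feature table and the derived class labels.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# `census` is a hypothetical DataFrame of LTDB tract features;
# `crime_class` holds the derived labels in {1, 2, 3}
mi = mutual_info_classif(census, crime_class, random_state=0)
ranked = pd.Series(mi, index=census.columns).sort_values(ascending=False)
selected = ranked.head(8).index.tolist()   # keep the 8 most informative features
```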
The generated dataset is relatively small, with N = 195 observations and 8 features. We trained our greedy CLC-BB model with K ∈ {2, 3, 4} clusters and found that the best results were obtained with K = 3 clusters. The average out-of-sample accuracy was 0.692, an improvement over the baseline linear classification result of 0.558. We report the feature averages and their boundaries for the three clusters in Figure 5c. The prediction label averages were 1.86, 2.67, and 1.89 for the three clusters. Clearly, a larger class label average represents a higher-crime cluster. From Figure 5, it is evident that we have two low-crime-rate clusters (1 and 3). Moreover, we note that clusters 2 and 3 are similar in being high-income, well-educated, high-property-value clusters. However, these clusters exhibit two contrasting modes with respect to crime rates. A traditional unsupervised clustering model like K-means would have put both these groups in the same cluster. Furthermore, a single linear classification model may not efficiently capture such intricate details and the multiple modes present in the data.
With Figure 5, we show how we leveraged our bounding box clustering approach to understand the results further. For instance, by connecting the census tracts in the map containing the clusters from Figure 5b with the crime labels per tract shown in Figure 5a, we recognize that cluster 2 corresponds to the high-crime regions in Downtown SF. Inner-city areas are expected to have higher crime reporting and a more significant proportion of the educated, high-income working-class population.
Overall, our clustering method was able to identify spatially coherent clusters while simultaneously recognizing the different modes of crimes observed in these regions. Also, it gave better performance than the baseline, along with interpretable and visually appealing results.
Movielens data
Our objective was to use the MovieLens-100K dataset [31] to perform a recommendation task using user and item features, i.e., content-based filtering. We transformed the data into a classification problem and applied our greedy classification closest-centroid model. Although we understand that state-of-the-art recommendation systems use collaborative filtering or hybrid methods [47,52], we use our methodology as a proof of concept to show that predictive clustering models can be used as a first step in exploratory data analysis. Moreover, by using a large dataset for this case study, we could also complement the other three analyses, which used relatively smaller datasets.
The MovieLens dataset contains information on 943 users, each rating a fraction of the 1682 movies available. To prepare our data, we went through several pre-processing steps. First, we identified the top 10 genres from a list of 19 genres, which cover more than 85% of the movies in the list. We used these genre indicator variables as our movie features. Additionally, we obtained movie information such as a popularity indicator, number of votes, vote average, and revenue-budget ratio from the IMDB database to supplement our movie features. Second, we used the user's gender and age (after binning the age) as user features. Finally, we merged the two datasets to obtain our user-item rating dataset. Since the ratings range from 1 to 5, we thresholded the ratings at 4, i.e., a rating greater than or equal to 4 was assigned to class 1 (recommend the movie), and class 0 otherwise. The generated dataset has more than 85K observations with 21 features.
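A sketch of this preprocessing on the standard MovieLens-100K files (u.data and u.user). The `movies` frame with the genre indicators and IMDB-derived columns is assumed to have been built separately, and the age bins are illustrative.

```python
import pandas as pd

ratings = pd.read_csv("u.data", sep="\t", names=["user", "item", "rating", "ts"])
users = pd.read_csv("u.user", sep="|", names=["user", "age", "gender", "job", "zip"])
# `movies` is assumed to hold the item id, genre indicators, and IMDB-derived columns
data = ratings.merge(users, on="user").merge(movies, on="item")

data["label"] = (data["rating"] >= 4).astype(int)               # recommend iff rating >= 4
data["age_bin"] = pd.cut(data["age"], bins=[0, 20, 30, 45, 100], labels=False)
data["male"] = (data["gender"] == "M").astype(int)
```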
We used our greedy CLC-CC model for this dataset. We tuned the number of clusters K ∈ {2, 3, 4, 5, 6, 7} and found the best results with K = 3. We observed an average test accuracy of 0.652 and a test root mean square error (RMSE) of 0.589. This is a small improvement over the baseline classification, which gave an average accuracy of 0.637 and an RMSE of 0.602. In Table 6, we report a summarized version of what each cluster represents based on the feature averages found in them. The label average indicates how users react to movies that fall in a cluster. For instance, cluster 2 indicates that males between 20-30 and 45+ prefer the drama, action, and thriller genres. In addition, we found the following variables to be significant based on the weights of the SVM hyperplanes found in each cluster: vote average, vote count, and age categories in cluster 1; vote count and the action and crime genre features in cluster 2; and revenue-budget ratio and the romance and thriller genres in cluster 3. Overall, our model was able to provide interpretable results that helped identify these interesting population sections, or target groups, as evident from Table 6.
Conclusion
In this section, we briefly summarize our key contributions and results, and discuss important limitations of our work along with possible future research directions to address them.
Summary
In this article, we began with the observation that clustering for data science has often been viewed through the lens of K-means and the application of clustering for supervised tasks has largely been overlooked and unexplored. We took a broader outlook towards clustering and introduced a novel generalized framework for predictive clustering to address this deficiency.
In our framework, we presented different perspectives for defining clusters and a general approach for combining clustering with various supervised objectives. As a result, an array of models falls out of this framework. Some of these have been previously explored in the literature; however, they were restrictive in their approach and largely application-driven. Furthermore, we presented two methodologies to optimize all models in our framework. Using MILP-based formulations, we ensured global optimization for our models and provided reproducible results. Our highly scalable and relatively efficient greedy algorithms, inspired by the Majorization-Minimization framework, give a good approximation of the benchmark optimal MILP-based solution in the instances where a comparison was possible.
We also demonstrated the relevance of a predictive clustering framework by analyzing and obtaining results for four unique datasets from a diverse set of domains. Through our case studies, we were able to show how these models were able to detect the different generative modes present in the data and how we can interpret these results. Consequently, we obtained significantly better results compared to baseline regression and classification models.
More fundamentally, we focused on defining a framework that can break the notion of clustering as a purely unsupervised learning tool, uncover multiple "behavioral" modes present in the data, and discover "hidden" patterns in the supervised sense. As a step in this direction, we developed a small toolkit of supervised clustering methods, which can potentially be expanded to include many more cluster definitions and supervised objectives. This framework can not only be used as a novel conceptual approach to solve many problems in data science but can also outperform traditional linear models. Moreover, we believe that data scientists and policymakers could efficiently leverage this toolkit to obtain workable solutions that are highly interpretable and can help design policy interventions.
Future work
Overall, we believe that this work brings an alternative, broader outlook towards clustering and can thereby act as a catalyst to inspire a range of fascinating extensions and future applications. We list some exciting areas of future work below: 1. Expansion of the design space along both dimensions: By defining a generalized optimization framework for predictive clustering, we open up the scope for expanding the design space along both the clustering-type and supervised-objective dimensions. For example, unsupervised clustering methods like DBSCAN and spectral clustering can be added to the array of cluster definitions already part of the framework. Density-based clustering like DBSCAN could add the flavor of arbitrary-shaped clusters, unlike the hypercuboids of bounding box clustering and the spherical clusters of closest center clustering. Furthermore, several other loss functions can be incorporated into the framework, including the 0-1 loss and the Huber loss for classification with MILP-based and greedy optimization, and the cross-entropy loss function with greedy optimization. This would provide a broader toolkit of methods to choose from to tackle the constantly evolving needs of the data science field. More importantly, such a framework would then partially enjoy the non-linearity advantage of random forests and neural networks while still remaining highly interpretable. 2. Scalable optimization: As seen previously, MILP-based methods for predictive clustering were not practical to solve in real time for large datasets, but they nonetheless provided global optimization. Although we observed some improvement with bounding box clustering in this respect, there is undoubtedly a need to address the scalability of these models. We believe that further research can utilize decomposition methods, tighter and symmetry-breaking constraints, and constraint and column generation techniques to strengthen the optimization. This would enable us to exploit the global optimization advantage of mixed-integer optimization while being able to scale to the large datasets generally encountered in the real world.
"year": 2023,
"sha1": "c0f554f0b046eebab241f273f29d09bb3052bb94",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c0f554f0b046eebab241f273f29d09bb3052bb94",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
51998181 | pes2o/s2orc | v3-fos-license | Complexity Results for MCMC derived from Quantitative Bounds
This paper considers how to obtain MCMC quantitative convergence bounds which can be translated into tight complexity bounds in high-dimensional {settings}. We propose a modified drift-and-minorization approach, which establishes generalized drift conditions defined in subsets of the state space. The subsets are called the"large sets", and are chosen to rule out some"bad"states which have poor drift property when the dimension of the state space gets large. Using the"large sets"together with a"fitted family of drift functions", a quantitative bound can be obtained which can be translated into a tight complexity bound. As a demonstration, we analyze several Gibbs samplers and obtain complexity upper bounds for the mixing time. In particular, for one example of Gibbs sampler which is related to the James--Stein estimator, we show that the number of iterations required for the Gibbs sampler to converge is constant under certain conditions on the observed data and the initial state. It is our hope that this modified drift-and-minorization approach can be employed in many other specific examples to obtain complexity bounds for high-dimensional Markov chains.
1. Introduction. Markov chain Monte Carlo (MCMC) algorithms are extremely widely used and studied in statistics, e.g. [5,19], and their running times are an extremely important practical issue. They have been studied from a variety of perspectives, including convergence "diagnostics" via the Markov chain output (e.g. [18]), proving weak convergence limits of sped-up versions of the algorithms to diffusion limits [39,40], and directly bounding the convergence in total variation distance [34,44,46,42,24,47,16,3,25]. Furthermore, there is a recent trend focusing on quantitative mixing time bounds in terms of either total variation distance or Wasserstein distance for certain types of MCMC methods (such as Langevin Monte Carlo) and targets (such as strongly log-concave targets); see e.g. [9,11]. Among the works directly bounding the total variation distance, most quantitative convergence bounds proceed by establishing a drift condition and an associated minorization condition for the Markov chain in question (see e.g. [35]). One approach for finding quantitative bounds has been the drift-and-minorization method set forth by [44].
Computer scientists take a slightly different perspective, in terms of running-time complexity order as the "size" of the problem goes to infinity. Complexity results in computer science go back at least to [7], and took on greater focus with the pioneering NP-completeness work of [8]. In the Markov chain context, computer scientists have been bounding convergence times of Markov chain algorithms since at least [52], focusing largely on spectral gap bounds for Markov chains on finite state spaces. More recently, attention has turned to modern Markov chain algorithms on general state spaces, again primarily via spectral gaps; see [29,53,30,54,55] and the references therein. These bounds often focus on the order of the convergence time in terms of some particular parameter, such as the dimension of the corresponding state space. In recent years, there has been much interest in the "large p, large n" or "large p, small n" high-dimensional settings, where p is the number of parameters and n is the sample size. [38] use the term convergence complexity to denote the ability of a high-dimensional MCMC scheme to draw samples from the posterior, and how the ability to do so changes as the dimension of the parameter set grows.
Direct total variation bounds for MCMC are sometimes presented in terms of the convergence order, for example, the work by [45] on a Gibbs sampler for a variance components model. However, current methods for obtaining total variation bounds for such MCMC algorithms typically proceed as if the dimension of the parameter, p, and the sample size, n, were fixed. It is thus important to bridge the gap between statistics-style convergence bounds and computer-science-style complexity results.
In one direction, [41] connect known results about diffusion limits of MCMC to the computer science notion of algorithm complexity. They show that any weak limit of a Markov process implies a corresponding complexity bound in an appropriate metric. For example, under appropriate assumptions, in p dimensions, the Random-Walk Metropolis algorithm takes O(p) iterations (see also [56]) and the Metropolis-adjusted Langevin algorithm (MALA) takes O(p^{1/3}) iterations to converge to stationarity. This paper considers how to obtain MCMC quantitative convergence bounds that can be translated into tight complexity bounds in high-dimensional settings. At first glance, it may seem that an approach to answering the question of convergence complexity is provided by the drift-and-minorization method of [44]. However, [38] demonstrate that, somewhat problematically, a few specific upper bounds in the literature obtained by the drift-and-minorization method tend to 1 as n or p tends to infinity. For example, by directly translating the existing work by [6,26], which are both based on the general approach of [44], [38] show that the "small set" gets large quickly as the dimension p increases. And this seems to happen generally when the drift-and-minorization approach is applied to statistical problems. [38] also discuss special cases in which the method of [44] can still be used to obtain tight bounds on the convergence rate. However, the conditions proposed in [38] are very restrictive. First, they require that the MCMC algorithm to be analyzed be a Gibbs sampler. Second, the Gibbs sampler must have only one high-dimensional parameter, which must be drawn in the last step of the Gibbs sampling cycle. Unfortunately, other than some tailored examples [38], most realistic MCMC algorithms do not satisfy these conditions. It is unclear whether some particular drift functions lead to bad complexity bounds or whether the drift-and-minorization approach itself has some limitations. It is therefore the hope of [38] that proposals and developments of new ideas analogous to those of [44], which are suitable for high-dimensional settings, can be motivated.
In this paper, we attempt to address concerns about obtaining quantitative bounds that can be translated into tight complexity bounds. We note that although [38] provide evidence for the claim that many published bounds have poor dependence on n and p, the statistics literature has not focused on controlling the complexity order in n and p. We give some intuition for why most directly translated complexity bounds are quite loose and provide advice on how to obtain tight complexity bounds for high-dimensional Markov chains. The key ideas are (1) the drift function should be small near the region of concentration of the posterior in high dimensions; and (2) "bad" states which have poor drift properties when n and/or p gets large should be ruled out when establishing the drift condition. In order to get tight complexity bounds, we propose a modified drift-and-minorization approach by establishing generalized drift conditions on subsets of the state space, which are called the "large sets", instead of the whole state space; see Section 2. The "large sets" are chosen to rule out some "bad" states which have poor drift properties when the dimension of the state space gets large. By establishing the generalized drift condition, a new quantitative bound is obtained, which is composed of two parts. The first part is an upper bound on the probability that the Markov chain will visit the states outside of the "large set"; the second part is an upper bound on the total variation distance of a constructed restricted Markov chain defined only on the "large set". In order to obtain good complexity bounds in high-dimensional settings, as the dimension increases, the family of drift functions should be chosen such that the function values are small near the region of concentration of the posterior, which we will define formally as a "fitted family of drift functions", and the "large sets" should be adjusted depending on n and p to balance the complexity order of the two parts.
As a demonstration, we analyze three Gibbs samplers and obtain complexity bounds. In the first two examples, we demonstrate how to choose the "fitted family of drift functions". In the third example, we demonstrate the use of a "fitted family of drift functions" together with "large sets". More specifically, we show in Section 3.3 that a certain realistic Gibbs sampler related to the James-Stein estimator converges in O(1) iterations; see Theorem 3.7. As far as we know, this is the first successful example of analyzing the convergence complexity of a non-trivial realistic MCMC algorithm using the (modified) drift-and-minorization approach. Several months after we uploaded this manuscript to arXiv, [37] successfully analyzed another realistic MCMC algorithm using the drift-and-minorization approach. Although the analysis by [37] does not make use of the "large set" technique proposed in this paper, it does make use of a "fitted family of drift functions", which they describe via an informal concept called "a centered drift function". We explain in this paper that when there exist some "bad" states, using a "fitted family of drift functions" alone might not be enough to establish a tight complexity bound. For example, for the Gibbs sampler we successfully analyze in Section 3.3, it is unknown how to obtain a tight complexity bound by the traditional drift-and-minorization approach or other approaches. This is confirmed in a later study by [10]. To the best of our knowledge, our approach using the "large set" is so far the only successful approach for obtaining the tight complexity bound of this example. For another successful example using the "large set", we refer to recent work in [57] on high-dimensional Bayesian variable selection. An important message from the successful analysis of several MCMC examples using the "large set" together with a "fitted family of drift functions" is that complexity bounds can be obtained even without any particular form of non-deteriorating convergence bounds. Previous attempts in the literature at studying how the geometric convergence rate behaves as a function of p and n are incomplete. It is our hope that our approach can be applied to many other specific examples for obtaining quantitative bounds that can be translated into complexity bounds in high-dimensional settings.
Notation: We use →_d for weak convergence and π(·) to denote the stationary distribution of the Markov chain. The total variation distance is denoted by ∥·∥_var, and the law of a random variable X is denoted by L(X). We adopt the Big-O, Little-O, Theta, and Omega notations. Formally, T(n) = O(f(n)) if and only if for some constants c and n_0, T(n) ≤ cf(n) for all n ≥ n_0; T(n) = Ω(f(n)) if and only if for some constants c and n_0, T(n) ≥ cf(n) for all n ≥ n_0; T(n) = Θ(f(n)) if and only if both T(n) = O(f(n)) and T(n) = Ω(f(n)); and T(n) = o(f(n)) if and only if T(n) = O(f(n)) but T(n) ≠ Ω(f(n)).
2. Generalized Geometric Drift Conditions and Large Sets.
Scaling classical MCMC algorithms to very high dimensions can be problematic. Even if a chain is geometrically ergodic for fixed n and p, the convergence of the Markov chain may still be quite slow as p → ∞ and n → ∞. Throughout the paper, we assume the Markov chain is positive Harris recurrent, aperiodic, and π-irreducible, where π denotes the unique stationary distribution. For a Markov chain {X^{(i)}, i = 0, 1, . . .} on a state space (X, B) with transition kernel P(x, ·), the general method of [44] proceeds by establishing a drift condition

E[f(X^{(1)}) | X^{(0)} = x] ≤ λf(x) + b, ∀x ∈ X, (2)

where f : X → R_+ is the "drift function", 0 < λ < 1, and b < ∞, together with an associated minorization condition

P(x, ·) ≥ εQ(·), ∀x ∈ R, (3)

where R := {x ∈ X : f(x) ≤ d} is called the "small set" and d > 2b/(1 − λ), for some ε > 0 and some probability measure Q(·) on X. Then [44, Theorem 12] states that, under both the drift and minorization conditions, if the Markov chain starts from an initial distribution ν, then for any 0 < r < 1,

∥L(X^{(k)}) − π(·)∥_var ≤ (1 − ε)^{rk} + (α^{−(1−r)}Λ^r)^k (1 + b/(1 − λ) + E_ν[f(x)]), (4)

where α^{−1} := (1 + 2b + λd)/(1 + d), Λ := 1 + 2(λd + b), and E_ν[f(x)] denotes the expectation of f(x) over x ∼ ν(·). However, it is observed, for example in [38,37], that for many specific bounds obtained by the drift-and-minorization method, when the dimension gets larger, the typical scenario for the drift condition of Eq. (2) is that λ goes to one and/or b gets much larger. This makes the "size" of the small set R grow too fast, which in turn makes the minorization volume ε go to 0 exponentially fast. In the following, we give an intuitive explanation of what makes a "good" drift condition in high-dimensional settings.
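Before turning to the intuition, the following minimal sketch (our own illustration, not taken from any of the cited works) verifies a drift condition of the form Eq. (2) numerically for a simple one-dimensional chain; the chain, the drift function, and all numbers are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical example: the AR(1) chain X1 = rho*X0 + N(0,1) with drift
# function f(x) = 1 + x^2. Analytically,
#   E[f(X1) | X0 = x] = 2 + rho^2 * x^2 = rho^2 * f(x) + (2 - rho^2),
# so the drift condition E[f(X1)|x] <= lam*f(x) + b holds uniformly in x
# with lam = rho^2 < 1 and b = 2 - rho^2.

rng = np.random.default_rng(0)
rho = 0.9
lam, b = rho**2, 2 - rho**2

def f(x):
    return 1.0 + x**2

for x0 in [0.0, 1.0, 5.0, 20.0]:
    x1 = rho * x0 + rng.standard_normal(200_000)  # one-step draws from x0
    lhs = f(x1).mean()                            # Monte Carlo E[f(X1) | x0]
    rhs = lam * f(x0) + b
    print(f"x0={x0:5.1f}   E[f(X1)|x0] ~ {lhs:8.2f}   lam*f(x0)+b = {rhs:8.2f}")
```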
2.1. Intuition. It is useful to think of the drift function f(x) as an energy function [24]. The drift condition in Eq. (2) then implies that the chain tends to "drift" toward states which have "lower energy" in expectation. It is well known that a "good" drift condition is established when both λ and b are small. Intuitively, λ being small implies that when the chain is in a "high-energy" state, it tends to "drift" back to "low-energy" states quickly; and b being small implies that when the chain is in a "low-energy" state, it tends to remain in a "low-energy" state at the next iteration. In a high-dimensional setting, as the dimension grows to infinity, for a collection of drift conditions to be "good" we would like it to satisfy the following two properties: P1. λ is small, in the sense that it converges to 1 slowly or is bounded away from 1; P2. b is small, in the sense that it grows at a slower rate than typical values of the drift function.
We explain the intuition behind these properties and define a new notion of "fitted family of drift functions" in this subsection, and later demonstrate how to establish the properties using examples in Section 3. One way to understand this intuition is to think of it as controlling the complexity order of the size of the "small set", R = {x ∈ X : f(x) ≤ d}. Since d > 2b/(1 − λ), if λ converges to 1 slowly or is bounded away from 1, and if b grows at a slower rate than typical values of f(x) (we will illustrate the meaning of "typical values" later in examples), then the small-set size parameter d can be chosen to have a small complexity order in n and/or p. This in turn makes the minorization volume ε converge to 0 sufficiently slowly (or even remain bounded away from 0).
Next, we define the notion of a "fitted family of drift functions", which is somewhat related to the informal concept of a "centered drift function" in [37]. DEFINITION 2.1. Let π_p be the target distribution when the dimension of the state space is p. We call a collection of non-negative functions {f_p(·)}_{p=1}^∞ a fitted family of functions for {π_p} if E_{π_p}[f_p(x)] → 0 as p → ∞. A fitted family of drift functions is then just a fitted family of functions which also satisfies a family of (generalized) drift conditions. Note that the fitted family of functions can also depend on n if n is a function of p. In the rest of the paper, we may simply write π_p as π and f_p(x) as f(x) for simplicity. However, we should keep in mind that the notations π and f(x) actually denote a family of target distributions and a fitted family of drift functions in high-dimensional settings when we study the behavior of the Markov chains as p → ∞.
Now we explain the intuition for why one should use a fitted family of drift functions in high-dimensional settings. For clarity, we first assume that λ is bounded away from 1, and focus on the conditions required for b to grow at a slower rate than typical values of f(x). Assume for definiteness that p is fixed and n → ∞, and the drift function is scaled in such a way that f(x) = O(1) and there is a fixed typical state x̄ with f(x̄) = Θ(1) regardless of dimension. Then, to satisfy property P2 above, we require that b = o(1). On the other hand, taking expectations over x ∼ π(·) on both sides of Eq. (2) implies that the drift function should be chosen such that E_π[f(x)] → 0, which is exactly the definition of the fitted family of drift functions. Therefore, to get a small b in a high-dimensional setting, we require a (properly scaled) drift function f(·) whose values f(x), for x ∼ π(·), concentrate around 0, which is guaranteed by the fitted family of drift functions.
Note that the fitted family of drift functions for high-dimensional settings can be very different from traditional "good" drift functions. For example, to study a Markov chain {X^{(k)}} sampling a fixed-dimensional target π, one might think f(x) = π(x)^{−α} for some fixed number α > 0 is a good candidate for the drift function. However, this is not a good intuition for choosing the fitted family of drift functions in high-dimensional settings. The following is a toy example. EXAMPLE 2.2. Consider π the standard multivariate Gaussian N(0, I_p). One choice for the drift function could be f(x) = exp(∥x∥²) − 1 or f(x) = ∥x∥²/p (which is similar to the one used in [44, Example 1]). However, a better fitted family of drift functions in high-dimensional settings could be

f_p(x) = (∥x∥²/p − 1)².

This is because, under X ∼ N(0, I_p), we know ∥X∥²/p concentrates around 1. The family of drift functions {(∥x∥²/p − 1)²}_{p=1}^∞ exactly fits this concentration phenomenon. The traditional popular choices of drift functions do not have this property.
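The concentration claim in Example 2.2 is easy to check numerically. The following short sketch (our own illustration) compares the stationary expectation of the traditional choice g(x) = ∥x∥²/p with that of the fitted choice f_p(x) = (∥x∥²/p − 1)², whose exact stationary expectation is Var(∥X∥²/p) = 2/p.

```python
import numpy as np

# Under X ~ N(0, I_p): E[||X||^2/p] = 1 for every p (no decay), while
# E[(||X||^2/p - 1)^2] = Var(||X||^2/p) = 2/p -> 0, so only the second
# choice is a fitted family of functions in the sense of Definition 2.1.

rng = np.random.default_rng(1)
for p in [10, 100, 1000, 10000]:
    x = rng.standard_normal((20_000, p))
    r = (x**2).sum(axis=1) / p                      # ||x||^2 / p per draw
    print(f"p={p:6d}   E[g] ~ {r.mean():.4f}   "
          f"E[f_p] ~ {((r - 1.0)**2).mean():.6f}   (2/p = {2.0/p:.6f})")
```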
Note that in the existing literature, the drift functions used to establish the drift condition usually do not satisfy the definition of a fitted family of drift functions. This is because, in the traditional setting where n and p are fixed, a "good" drift condition is established whenever λ and b are small enough for those specific fixed values of n and p. The complexity orders of λ and b as functions of n and/or p are not essential there, so fitting the concentration region of the posterior as the dimension increases is not necessary. As a result, many existing quantitative bounds cannot be directly translated into tight complexity bounds, since the size of the small set does not have a small complexity order in n and/or p. At the very least, one has to re-analyze such MCMC algorithms using a fitted family of drift functions.
Next, we focus on establishing a λ that is either bounded away from 1 or converges to 1 slowly, assuming a fitted family of drift functions has already been chosen. Intuitively, λ describes the behavior of the Markov chain when its current state has "high energy". If λ goes to 1 very fast as n and/or p goes to infinity, this may indicate the existence of some "bad" states, i.e. states which have "high energy" but whose drift properties become poor as n and/or p gets large. Therefore, in high dimensions, once the Markov chain visits one of these "bad" states, it only slowly drifts back toward the corresponding small set. Since the drift condition in Eq. (2) must hold for all x ∈ X, the existence of "bad" states forces λ to go to 1 very fast. And since the small set is defined as R = {x ∈ X : f(x) ≤ d} where d > 2b/(1 − λ), the scenario in which λ → 1 very fast forces R to become very large, and hence the minorization volume ε goes to zero very fast. One perspective on this problem is that the definition of the drift condition in Eq. (2) is too restrictive, since it must hold for all states x, even the bad ones.
In summary, we are able to establish a small b as in P2 above by using a fitted family of drift functions. However, the remaining difficulty in establishing a small λ as in P1 above is the existence of "bad" states when n and/or p gets large. Since the traditional drift condition defined in Eq. (2) is restrictive, the traditional drift-and-minorization method is not flexible enough to deal with these "bad" states. In the following, we instead propose a modified drift-and-minorization approach using a generalized drift condition, where the drift function is controlled only on a "large set". This allows us to rule out those "bad" states in high-dimensional cases.
2.2. New Quantitative Bound.
We first relax the traditional drift condition and define a generalized drift condition which is established only on a subset of the state space. Recall that {X^{(k)}} denotes the Markov chain on a state space (X, B) with transition kernel P(x, ·), ∀x ∈ X. Let P^k(x, ·) be the k-step transition kernel. Denote by R_0 the "large set", i.e., R_0 ∈ B is a subset of X.
(C1). The "large set" R 0 is defined by (C1'). The transition kernel P (x, ·) is a composition of reversible (with respect to π) steps P = I i=1 P i , i.e. , P (x, dy) = (x1,...,xI−1)∈X ×···×X P 1 (x, dx 1 )P 2 (x 1 , dx 2 ) · · · P I (x I−1 , dy), where I ≥ 1 is a fixed integer, and where {X (k) } denotes a restricted Markov chain with a transition kernel I i=1P i wherẽ P i (x, dy) := P i (x, dy) for x, y ∈ R 0 , x = y, andP i (x, x) := 1 − P i (x, R 0 \{x}), ∀x ∈ R 0 . REMARK 2.4. Note that only one of (C1) and (C1') is required. For (C1'), the Markov chain needs to be either reversible or can be written as a composition of reversible steps. This condition is very mild since it is satisfied by most realistic MCMC algorithms. For example, full-dimensional and random-scan Metropolis-Hastings algorithms and random-scan Gibbs samplers are reversible, and their deterministic-scan versions can be written as a composition of reversible steps. For (C1), it is required that the "large set" is constructed using the drift function in a certain way but there is no restriction for the transition kernel P . If R 0 is constructed as in (C1) then Eq. (8) automatically holds. Therefore, one should verify (C1') if one hopes to have more flexibility for constructing R 0 than the particular way in (C1). Particularly, if the drift function f (x) depends on all coordinates, it might be hard to control all the states in {x ∈ X : f (x) ≤ d 0 } as the dimension increases. Then (C1') might be preferable. REMARK 2.5. To verify (C1') in Definition 2.3, one has to check a new inequality . This inequality in (C1') implies the "large set" R 0 should be chosen such that the states in R 0 have "lower energy" on expectation. This is intuitive since we assume the "bad" states all have "high energy" and poor drift property when n and/or p gets large. One trick is to choose R 0 by ruling out some (but not too many) states with "high energy" even if the states are not "bad". In Section 3.3, we demonstrate the use of this trick to select the "large set" R 0 so that E(f (X (1) ) can be easily verified. The constructed R 0 in Section 3.3 satisfies (C1') but not (C1).
Next, we propose a new quantitative bound, which is based on the generalized drift condition on a "large set". THEOREM 2.6. Suppose the Markov chain satisfies the generalized drift condition in Definition 2.3 on a "large set" R_0. Furthermore, for a "small set" R := {x ∈ X : f(x) ≤ d} with d > 2b/(1 − λ), the Markov chain also satisfies a minorization condition:

P(x, ·) ≥ εQ(·), ∀x ∈ R, (9)

for some ε > 0 and some probability measure Q(·) on X. Finally, suppose the Markov chain begins with an initial distribution ν such that ν(R_0) = 1. Then for any 0 < r < 1, we have

∥L(X^{(k)}) − π(·)∥_var ≤ (1 − εQ(R_0))^{rk} + (α^{−(1−r)}Λ^r)^k (1 + b/(1 − λ) + E_ν[f(x)]) + kπ(R_0^c)/π(R_0) + Σ_{i=1}^k νP^i(R_0^c), (10)

where α^{−1} = (1 + 2b + λd)/(1 + d), Λ = 1 + 2(λd + b), and νP^i(·) := ∫_X P^i(x, ·)ν(dx).
PROOF. See Appendix A.
REMARK 2.7. Note that the new bound in Theorem 2.6 assumes the Markov chain begins with an initial distribution ν such that ν(R_0) = 1. This assumption is not very restrictive, since the "large set" ideally includes all "good" states. In high-dimensional settings, the Markov chain is not expected to converge fast from an arbitrary starting state (see Section 3.3.2 for a discussion of initial states). Furthermore, the use of "warm starts" has recently become popular, see e.g. [12]. However, a warm start does not directly relate to the large set; we only require that the initial distribution ν be supported on the large set. For example, ν can be a point mass. The term Q(R_0) in Eq. (10) can be replaced by any lower bound on Q(R_0). Since the "large set" is ideally chosen to include all "good" states, one can expect Q(R_0) to be at least bounded away from 0. In particular, if we have established an upper bound for P(x, R_0^c) with x ∈ R, then we can apply εQ(R_0^c) ≤ P(x, R_0^c) to get an upper bound on Q(R_0^c), which can be turned into a lower bound on Q(R_0). REMARK 2.8. In the proof of Theorem 2.6, the generalized drift condition in Definition 2.3 essentially implies a traditional drift condition as in Eq. (2) for a constructed "restricted" Markov chain on the "large set" R_0. The first two terms in the upper bound Eq. (10) are indeed an upper bound on the total variation distance for this constructed "restricted" Markov chain. Note that the general idea of studying the restriction of a Markov chain to some "good" subset of the state space has appeared in the literature, such as [32,13,21,15,31,51,33] and the references therein, in which different kinds of restriction have been considered for different reasons. For example, [4] studied the rate of convergence of the MALA algorithm by a similar argument, which was later extended in [14] to study contraction rates in Wasserstein distance with respect to a Gaussian reference measure. However, the argument in [4] is only for the MALA algorithm, and the proof technique is based on constructing a restricted chain. Compared with [4], our Theorem 2.6 is for general MCMC algorithms with weaker conditions in (C1) and (C1').
In the proof, we use either a trace chain or a restricted chain, depending on which condition is satisfied. Most importantly, the motivation of this work is to obtain tight complexity bounds, which is quite different from [4]. In Theorem 2.6, the goal of considering a "good" subset of the state space is to obtain better control of the dependence on n and p in the upper bound. REMARK 2.9. The last two terms in the upper bound Eq. (10) give an upper bound on the probability that the Markov chain visits R_0^c starting from either the initial distribution ν or the stationary distribution π. Therefore, the proposed method in Theorem 2.6 is a generalized version of the classic drift-and-minorization method [44], in which the drift condition is allowed to be established on a chosen "large set". Indeed, if we choose R_0 = X, then Eq. (10) is almost the same as Eq. (4), except slightly tighter due to the terms α^{rk}. REMARK 2.10. One more note about Eq. (10) is that the new bound does not decrease exponentially with k. For example, the term kπ(R_0^c) increases linearly with k for fixed n and p. We emphasize that we do not aim to prove that a Markov chain is geometrically ergodic here. An upper bound which decreases exponentially with k for fixed n and p is not guaranteed to have a tight complexity order in n and/or p, as discussed in [38]. Instead, our new bound in Eq. (10) is designed for controlling complexity orders in n and/or p for high-dimensional Markov chains. In Section 3.3, we obtain a tight complexity bound for a Gibbs sampler for a simple random effects model related to the James-Stein estimator. Previous unsuccessful attempts for the same Gibbs sampler (see [10]) focused on how to obtain convergence bounds with geometric/polynomial rates as functions of p and n. The successful analysis of this Gibbs sampler in the current paper implies that complexity bounds can be obtained even without any particular form of non-deteriorating convergence bounds.
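To see how a quantitative bound of this kind translates into an iteration count, the following sketch evaluates the classic bound Eq. (4) of [44, Theorem 12] as a function of k; the parameter values (λ, b, ε, d, E_ν[f], and r below) are made-up numbers chosen only for illustration, not derived from any specific chain.

```python
import numpy as np

# Evaluate the right-hand side of Eq. (4). The drift/minorization constants
# here are hypothetical. Note that r must be chosen small enough that
# alpha^{-(1-r)} * Lambda^r < 1, otherwise the second term diverges.

def rosenthal_bound(k, lam, b, d, eps, Ef0, r):
    alpha_inv = (1.0 + 2.0*b + lam*d) / (1.0 + d)  # alpha^{-1} < 1 needs d > 2b/(1-lam)
    Lam = 1.0 + 2.0*(lam*d + b)
    return ((1.0 - eps)**(r*k)
            + (alpha_inv**(1.0 - r) * Lam**r)**k * (1.0 + b/(1.0 - lam) + Ef0))

lam, b, eps, Ef0, r = 0.1, 0.1, 0.5, 10.0, 0.3
d = 2.0*b/(1.0 - lam) + 0.8                        # any d > 2b/(1-lam) works
for k in [10, 25, 50, 100]:
    print(f"k={k:4d}   bound = {rosenthal_bound(k, lam, b, d, eps, Ef0, r):.3e}")
```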
2.3. Complexity Bound.
Note that the mixing time is often defined uniformly over initial states, which is difficult to extend to general state spaces. In this paper, the term "mixing time" is defined depending on the initial state. The formal definition is as follows. DEFINITION 2.11. For any 0 < c < 1, we define the mixing time K_{c,x} of a Markov chain started at x by K_{c,x} := min{k ≥ 0 : ∥P^k(x, ·) − π(·)∥_var ≤ c}. The proposed new bound in Theorem 2.6 can be used to obtain complexity bounds in high-dimensional settings. The key is to balance the complexity orders of k in n and/or p required for both the first two terms and the last two terms of the upper bound in Eq. (10) to be small.
The complexity order of k in n and/or p required for the first two terms to be small can be controlled by adjusting the "large set". The "large set" should be kept as large as possible, provided the "bad" states have been ruled out. For the last two terms to be small, we should determine the growth rate of k as a function of n and p so that those terms still tend to 0. This may involve (carefully) bounding the tail probability of the transition kernel, depending on the definition of the "large set" and the complexity order one aims to establish.
We give a direct corollary of Theorem 2.6 on mixing time in terms of p. In general, mixing times in terms of both n and p can be obtained using Theorem 2.6. COROLLARY 2.12. Suppose Theorem 2.6 has been established for every dimension p. Let k̄_p and k̲_p be sequences of positive integers, as functions of p, such that both k̄_p → ∞ and k̲_p → ∞ as p → ∞. Furthermore, suppose lim_{p→∞}(k̄_p − k̲_p) ≥ 0 and the upper bound in Eq. (10), evaluated at any k with k̲_p ≤ k ≤ k̄_p, goes to 0 as p → ∞. Then the mixing time of the MCMC algorithm starting from ν has complexity order O(k̄_p).
Using Corollary 2.12, one can plug in the orders of b, 1 − λ, and ε to get the complexity bound. The following result follows directly from Corollary 2.12. COROLLARY 2.13. Suppose Theorem 2.6 has been established for every dimension p, and c_1, . . . , c_5 are non-negative constants characterizing the polynomial growth in p of the quantities appearing in Eq. (10). Then the mixing time starting from ν has complexity order O(p^{c_1} log(p^{c_5} + p^{c_2+c_3}) log(p^{c_2+c_3})) = O(p^{c_1}(log p)²).
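On a finite state space, the mixing time of Definition 2.11 can be computed exactly by iterating the transition matrix; the following sketch (a toy chain of our own, with illustrative numbers) does exactly that.

```python
import numpy as np

# Illustration of Definition 2.11 on a 3-state random-walk chain: compute
# K_{c,x} = min{k : ||P^k(x,.) - pi||_var <= c} for each starting state x.

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

def mixing_time(P, pi, x, c=0.25, kmax=10_000):
    row = np.zeros(len(pi)); row[x] = 1.0       # point mass at state x
    for k in range(1, kmax + 1):
        row = row @ P                           # one more transition
        if 0.5 * np.abs(row - pi).sum() <= c:   # total variation distance
            return k
    return None

print([mixing_time(P, pi, x) for x in range(3)])
```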
We will discuss several MCMC examples in Section 3 to demonstrate the use of the fitted family of drift functions and "large sets" to get complexity bounds.
2.4. Discussions. We finish this section by giving a few more remarks and discussions on our main results.
• Geometric ergodicity: The Markov chain to be analyzed in Theorem 2.6 does not have to be geometrically ergodic. The proof of Eq. (10) only implies that, after ruling out "bad" states, a constructed "restricted" Markov chain defined on the "large set" is geometrically ergodic. Therefore, the new bound in Eq. (10) can be used to analyze non-geometrically ergodic high-dimensional Markov chains.
• Relation to spectral gaps: Many approaches in the MCMC literature bound the spectral gap of the corresponding Markov operator [29,53,30,54,55]. However, on general state spaces, the spectral gap is zero for Markov chains which are not geometrically ergodic, even if they do converge to stationarity. Our results do not require the Markov chain to be geometrically ergodic; instead, we only require the constructed "restricted" chain on the "large set" in our proof to be geometrically ergodic. Therefore, we cannot connect our results to bounds on spectral gaps. Furthermore, we do not require the Markov chain to be reversible, so our results apply even in non-reversible cases, which make spectral gaps harder to study or interpret. For these reasons, we do not present the main results in terms of spectral gaps. • Other types of drift condition: In this paper, we use the drift condition of the type in [44].
There is another popular drift condition (e.g., in [43]), and the connection between the two is well known (see [25, Lemma 3.1]). Therefore, it is straightforward to establish our main result using the other drift condition in [43]. • Complexity of MCMC estimators: It would be nice to obtain rates of convergence (or non-asymptotic bounds) for general MCMC estimators. The proof techniques in the existing literature on establishing rates of convergence of MCMC estimators [2,1,36,28,27,48,49,50] require certain conditions such as geometric/polynomial drift conditions or spectral gaps. However, our result does not require establishing a geometric/polynomial drift condition or a spectral gap. Therefore, it is not clear how to connect our complexity results to the complexity of other MCMC estimators. This is certainly an interesting direction for future work.
3. Gibbs Sampler Convergence Bound.
In this section, we study several examples of Gibbs samplers and analyze their convergence complexity using the proposed approach. In Section 3.1 and Section 3.2, we consider a simple Gaussian example and a hierarchical Poisson example. Simplified versions of both examples for fixed dimensions were originally studied in [44, Example 1 and Example 2], and the original mixing time bounds have poor complexity orders in terms of dimension. We study extensions of them in the high-dimensional setting and obtain tight complexity bounds by choosing fitted families of drift functions. In Section 3.3, we study the MCMC model in [46], which is related to the James-Stein estimator. We demonstrate how to use both the fitted family of drift functions and the "large sets" to obtain a tight complexity bound.
Note that although the bound in Theorem 2.6 admits different "admissible" growth combinations of b, 1/(1 − λ), and 1/ε (see also Corollary 2.13), the minorization volume ε relies on the small set, which is determined by both b and λ. Furthermore, if b is fairly large, it is not surprising that λ can be bounded away from 1. Therefore, we can summarize our general principle for analyzing all three examples as follows.
1. We first focus on choosing a fitted family of drift functions so that E_π[f(x)] has a small order. 2. Next, we establish the drift condition. If the λ from the drift condition goes to 1 too fast, we apply the "large set" to rule out certain states. After the first two steps, we obtain a generalized drift condition which leads to a small set of reasonable "size". 3. Finally, we focus on establishing a (potentially multi-step) minorization condition to obtain an ε which goes to zero slowly (or is bounded away from 0).
3.1. A Gaussian Toy Example.
A bivariate Gaussian model was studied in [44, Example 1] as a demonstration of the drift-and-minorization approach. In this subsection, we study an extension of this example to the high-dimensional setting. Suppose our target π is N(μ, Σ), where μ = (μ_1, μ_2) and Σ = (Σ_{11}, Σ_{12}; Σ_{21}, Σ_{22}) in 2 × 2 block form. To sample from the target distribution, we use a two-step Gibbs sampler as in [44, Example 1]. Writing X = (X_1, X_2), the conditional distribution can be written as

X_1 | X_2 ∼ N(μ_1 + Σ_{12}Σ_{22}^{−1}(X_2 − μ_2), Σ_{11} − Σ_{12}Σ_{22}^{−1}Σ_{21}), (16)

and similarly for X_2 | X_1.
For simplicity, we only consider the setting in which μ_1 = μ_2 = 0, Σ_{11} = Σ_{22} = I_d, and Σ_{12} = Σ_{21} = (1/2)I_d. It is straightforward to extend our analysis to general cases of μ and Σ. The corresponding Gibbs sampler is

X_1^{(k)} ∼ N(X_2^{(k−1)}/2, (3/4)I_d), X_2^{(k)} ∼ N(X_1^{(k)}/2, (3/4)I_d).

Note that X_1^{(0)} is not used in the updates. If we choose a drift function similar to the one used in [44], such as f(X) = ∥X_2∥²/p, then it can be easily verified that a drift condition of the form E[f(X^{(k+1)}) | X^{(k)} = X] ≤ λf(X) + b can be established with λ bounded away from 1 and b = Θ(1). However, as ∥X_2∥²/p concentrates to 1 under stationarity, this drift condition leads to a small set {X : ∥X_2∥²/p = O(1)}, which includes states for which ∥X_2∥²/p is much smaller than 1.
In our analysis, we choose a fitted family of drift functions which leads to a small set of much smaller size:

f_p(X) = (∥X_2∥²/p − 1)².

We can establish a drift condition of the form E[f_p(X^{(k+1)}) | X^{(k)} = X] ≤ λf_p(X) + b, with λ bounded away from 1 and b small. The corresponding small set fits exactly the concentration region of the target as p → ∞. Using this drift condition and a multi-step minorization condition, we can show that the mixing time is O(log(p)). Our main result is the following.
THEOREM 3.1. There exist a fixed constant γ < 1 and a constant C_2 such that, for the Gibbs sampler above,

∥L(X^{(n)}) − π(·)∥_var ≤ γ^k,

where the number of steps is n = ⌊kC_2 log(p)⌋ + 1 and k is any positive integer.
PROOF. See Appendix F.
This implies the following complexity bound directly. COROLLARY 3.2. Under the assumptions of Theorem 3.1, the mixing time of the Gibbs sampler is O(log(p)).
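To make the analysis tangible, the following sketch (our own illustration; the block dimension d, starting point, and monitored statistic are made-up choices) runs the two-step Gibbs sampler above and tracks the fitted-drift-style statistic (∥X_2∥²/d − 1)², which drops quickly toward its stationary level of order 1/d.

```python
import numpy as np

# Two-step Gibbs sampler for the zero-mean Gaussian target with
# Sigma11 = Sigma22 = I_d and Sigma12 = I_d/2, whose conditionals are
#   X1 | X2 ~ N(X2/2, (3/4) I_d)  and  X2 | X1 ~ N(X1/2, (3/4) I_d).

rng = np.random.default_rng(2)
d = 500
x2 = np.full(d, 3.0)                        # deliberately atypical start

for k in range(1, 31):
    x1 = x2 / 2 + np.sqrt(0.75) * rng.standard_normal(d)
    x2 = x1 / 2 + np.sqrt(0.75) * rng.standard_normal(d)
    fval = (np.dot(x2, x2) / d - 1.0) ** 2  # fitted-drift-style statistic
    if k == 1 or k % 5 == 0:
        print(f"iteration {k:3d}:  (||x2||^2/d - 1)^2 = {fval:.4f}")
```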
3.2. A Hierarchical Poisson Model.
We study a hierarchical Poisson model originally used for analyzing a realistic data set in [17]. A Gibbs sampler for this model was studied by [17], and a (numerical) quantitative bound was obtained using the drift-and-minorization approach in [44, Example 2]. In this subsection, we study the Gibbs sampler in the high-dimensional setting.
Suppose the data have the form {(Y_i, t_i), i = 1, . . . , n}, where Y_i represents the number of failures over a time interval t_i for the i-th of n nuclear pumps. One can model the failures as a Poisson process with parameter λ_i. Thus, during an observation period of length t_i, the number of failures Y_i follows a Poisson distribution with parameter λ_i t_i. We are interested in inferring the parameters λ = (λ_1, . . . , λ_n) from the data {Y_i, t_i}. We follow a hierarchical Bayesian approach in which we assume that λ_1, . . . , λ_n are conditionally independent given a hyperparameter β and follow a gamma distribution Ga(α, β), with density proportional to λ^{α−1}e^{−βλ}, where α is a constant. We assume further that the hyperparameter β itself follows a prior gamma distribution Ga(ρ, δ), where ρ and δ are fixed constants. For simplicity, in this example we assume the time intervals are of unit length, that is, t_i = 1 for all i. It is straightforward to extend our analysis to general time intervals.
Overall, the model can be written as

Y_i | λ_i ∼ Poisson(λ_i), i = 1, . . . , n,
λ_i | β ∼ Ga(α, β), i = 1, . . . , n,
β ∼ Ga(ρ, δ). (28)

In this example, we have p = n + 1 and x = (λ_1, . . . , λ_n, β). The posterior satisfies

π(λ_1, . . . , λ_n, β | {Y_i}) ∝ ∏_{i=1}^n [λ_i^{Y_i} e^{−λ_i} λ_i^{α−1} β^α e^{−βλ_i}] β^{ρ−1} e^{−δβ}.

Note that this multidimensional distribution is rather complicated, and it is not obvious how rejection sampling or importance sampling could be used efficiently in this context. Since the conditional distributions π(λ_1, . . . , λ_n | β, {Y_i}) and π(β | {λ_i}, {Y_i}) admit standard parametric forms, we can write a Gibbs sampler with the following updating order: first λ_i^{(k+1)} | β^{(k)} ∼ Ga(Y_i + α, 1 + β^{(k)}), independently for i = 1, . . . , n, and then β^{(k+1)} | λ^{(k+1)} ∼ Ga(ρ + nα, δ + Σ_{i=1}^n λ_i^{(k+1)}). Next, we present the main result for this Gibbs sampler. The key step is to use a fitted family of drift functions

f_n(x) = (λ̄ − α/β)², where λ̄ := (1/n)Σ_{i=1}^n λ_i.

Our main result for this Gibbs sampler is as follows.
THEOREM 3.3. Assume there exists a constant N such that, for all n ≥ N, the observed data satisfy l ≤ Ȳ ≤ u, where l and u are two fixed constants such that 0 < l < u < ∞. Then there exists a constant C such that for large enough n and for all k, we have

∥L(X^{(k)}) − π(·)∥_var ≤ Cγ^k,

where γ < 1 is a constant.
PROOF. See Appendix G.
Note that it is very natural to make some assumptions on the observed data, since the posterior depends on the observed data and we are in effect studying a sequence of posteriors when analyzing the convergence complexity. In Theorem 3.3, we assume there exists a constant N such that for all n ≥ N the data satisfy l ≤ Ȳ ≤ u, where l and u are two fixed constants such that 0 < l < u < ∞. This assumption is quite weak. For example, it holds if the data are indeed generated from the model with some "true" parameters. Theorem 3.3 implies the following complexity bound directly. COROLLARY 3.4. Under the assumptions of Theorem 3.3, the mixing time of the Gibbs sampler is O(1).
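For concreteness, the following sketch implements the two conjugate updates of this Gibbs sampler on synthetic data; the hyperparameter values and the data-generating rate are made-up numbers, and note that numpy's gamma sampler is parameterized by shape and scale (the reciprocal of the rate used above).

```python
import numpy as np

# Gibbs sampler for the hierarchical Poisson model with unit time intervals.
# The conditionals follow from conjugacy:
#   lambda_i | beta, Y ~ Ga(Y_i + alpha, 1 + beta)          (rate 1 + beta)
#   beta | lambda      ~ Ga(rho + n*alpha, delta + sum_i lambda_i)

rng = np.random.default_rng(3)
alpha, rho, delta = 1.8, 0.1, 1.0        # illustrative hyperparameters
n = 200
Y = rng.poisson(2.0, size=n)             # synthetic data, t_i = 1

beta = 1.0                               # arbitrary starting point
for k in range(1000):
    lam = rng.gamma(Y + alpha, 1.0 / (1.0 + beta))             # all i at once
    beta = rng.gamma(rho + n * alpha, 1.0 / (delta + lam.sum()))

print("posterior draw:  mean(lambda) =", lam.mean(), "  beta =", beta)
```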
3.3. A Random Effect Model related to the James-Stein Estimator.
In this subsection, we concentrate on a particular MCMC model, which is related to the James-Stein estimator [46]:

Y_i | θ_i ∼ N(θ_i, σ²_V), 1 ≤ i ≤ n,
θ_i | μ, σ²_A ∼ N(μ, σ²_A), 1 ≤ i ≤ n,
μ ∼ flat prior on R, σ²_A ∼ IG(a, b),

where σ²_V is assumed to be known, (Y_1, . . . , Y_n) is the observed data, and x = (σ²_A, μ, θ_1, . . . , θ_n) are the parameters. Note that the number of parameters is p = n + 2 in this example. For simplicity, we will not mention p but only refer to n for this model. The posterior distribution satisfies

π(σ²_A, μ, θ | {Y_i}) ∝ (σ²_A)^{−a−1}e^{−b/σ²_A} ∏_{i=1}^n [(σ²_A)^{−1/2} e^{−(θ_i−μ)²/(2σ²_A)}] ∏_{i=1}^n e^{−(Y_i−θ_i)²/(2σ²_V)}.

A Gibbs sampler for the posterior distribution of this model was originally analyzed in [46]. A quantitative bound was derived by [46] using the drift-and-minorization method with the drift function f(x) = Σ_{i=1}^n (θ_i − Ȳ)². We first observe that this drift function does not lead to a fitted family of drift functions in the high-dimensional setting. For example, select a "typical" state x̄ = (σ̄²_A, μ̄, θ̄_1, . . . , θ̄_n) from the region where the posterior concentrates. Under reasonable assumptions on the observed data {Y_i}, the properly scaled drift function of [46] is then of constant order at x̄ rather than o(1), so the definition of a fitted family of drift functions is not satisfied. Furthermore, the λ established in [46] converges to 1 very fast, satisfying 1/(1 − λ) = Ω(n). Therefore, if we translate the quantitative bound in [46] into complexity orders, the size of the "small set" must be Ω(n²), which makes the minorization volume ε exponentially small. This leads to upper bounds on the distance to stationarity which require an exponentially large number of iterations to become small. This result also coincides with the observations made by [38] when translating the work of [26,6]. We demonstrate the use of the modified drift-and-minorization approach by analyzing a Gibbs sampler for this MCMC model. Defining x^{(k)} = ((σ²_A)^{(k)}, μ^{(k)}, θ_1^{(k)}, . . . , θ_n^{(k)}) to be the state of the Markov chain at the k-th iteration, we consider the following order of Gibbs sampling for computing the posterior distribution:

μ^{(k)} ∼ N(θ̄^{(k−1)}, (σ²_A)^{(k−1)}/n),
θ_i^{(k)} ∼ N((Y_i(σ²_A)^{(k−1)} + μ^{(k)}σ²_V)/((σ²_A)^{(k−1)} + σ²_V), (σ²_A)^{(k−1)}σ²_V/((σ²_A)^{(k−1)} + σ²_V)), i = 1, . . . , n,
(σ²_A)^{(k)} ∼ IG(a + n/2, b + (1/2)Σ_{i=1}^n(θ_i^{(k)} − μ^{(k)})²).

Note that, in the language of [22], this is an "out-of-order" block Gibbs sampler, so inferences for the posterior distribution should be based on a "shifted" output sample. In any case, it still has the same rate of convergence [22, Proposition 3], so our convergence analysis applies to both our version and the original block Gibbs version of [46].
We prove that convergence of this Gibbs sampler is actually very fast: the number of iterations required is O(1). More precisely, we first make the following assumption on the observed data {Y_i}: there exist δ > 0, σ̄²_V < ∞, and a positive integer N_0 such that, almost surely with respect to the randomness of {Y_i},

σ²_V + δ ≤ (1/(n−1)) Σ_{i=1}^n (Y_i − Ȳ)² ≤ σ̄²_V, ∀n ≥ N_0. (38)

The assumption in Eq. (38) is quite natural. For example, if the data are indeed generated from the model with a "true" variance σ²_A > 0, then Eq. (38) obviously holds. More generally, the upper bound is just to ensure Σ_{i=1}^n (Y_i − Ȳ)² = O(n). For the lower bound, note that our MCMC model implies that the variance of Y_i is larger than σ²_V because of the uncertainty in θ_i. Indeed, under the MCMC model, conditional on the parameter σ²_A, the variance of the data {Y_i} equals σ²_V + σ²_A. Therefore, the assumption in Eq. (38) just says that the observed data are not abnormal under the MCMC model when n is large enough. Note that only the existence of δ is required for establishing our main results. More precisely, the existence of δ is needed to obtain an upper bound for π(R_0^c). If such a δ does not exist, the MCMC model is (seriously) misspecified, so the posterior distribution of the parameter σ²_A, which corresponds to the variance of a Normal distribution, may concentrate at 0. In that case, our upper bound on π(R_0^c) does not hold. We then show that, under the assumption Eq. (38), with initial state

θ_i^{(0)} = Y_i, i = 1, . . . , n, (σ²_A)^{(0)} = (1/(n−1)) Σ_{i=1}^n (Y_i − Ȳ)² − σ²_V, (39)

and μ^{(0)} arbitrary (since μ^{(0)} will be updated in the first step of the Gibbs sampler), the mixing time of the Gibbs sampler required to guarantee a small total variation distance to stationarity is bounded by a constant when n is large enough.
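As a concrete reference point, the following sketch implements this update order using the standard conjugate conditionals for this model (the constants a, b, σ²_V, the synthetic data, and the initialization details are made-up choices; the initialization mimics the spirit of Eq. (39)).

```python
import numpy as np

# Gibbs sampler for: Y_i | theta_i ~ N(theta_i, s2V), theta_i ~ N(mu, s2A),
# mu flat, s2A ~ IG(a, b). Update order: mu, then theta, then s2A.

rng = np.random.default_rng(4)
a, b, s2V = 2.0, 1.0, 1.0                 # illustrative constants
n = 500
Y = rng.normal(0.0, np.sqrt(s2V + 2.0), size=n)   # synthetic data

theta = Y.copy()                                   # mimic Eq. (39)
s2A = max(Y.var(ddof=1) - s2V, 1e-3)               # guard for tiny samples

def inv_gamma(shape, rate):
    # If G ~ Gamma(shape, scale=1/rate) then 1/G ~ IG(shape, rate).
    return 1.0 / rng.gamma(shape, 1.0 / rate)

for k in range(2000):
    mu = rng.normal(theta.mean(), np.sqrt(s2A / n))
    w = s2A / (s2A + s2V)                          # shrinkage weight
    theta = rng.normal(w * Y + (1 - w) * mu, np.sqrt(w * s2V), size=n)
    s2A = inv_gamma(a + n / 2, b + 0.5 * ((theta - mu) ** 2).sum())

print("posterior draw:  mu =", mu, "  sigma2_A =", s2A)
```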
3.3.1. Main Results. First, we obtain a quantitative bound for large enough n, which is given in the following theorem. THEOREM 3.6. Under the assumption Eq. (38), with initial state Eq. (39), there exist a positive integer N which does not depend on k, some constants C_1 > 0, C_2 > 0, C_3 > 0, and 0 < γ < 1, such that for all n ≥ N and for all k, we have

∥L(x^{(k)}) − π(·)∥_var ≤ C_1γ^k + C_2 k/√n + C_3 k/n. (40)

PROOF. Let ∆ = Σ_{i=1}^n (Y_i − Ȳ)² and x = (σ²_A, μ, θ_1, . . . , θ_n). Define the fitted family of drift functions {f_n(x)} by

f_n(x) = (σ²_A + σ²_V − ∆/(n−1))². (41)

Letting x^{(k)} = ((σ²_A)^{(k)}, μ^{(k)}, θ_1^{(k)}, . . . , θ_n^{(k)}) be the state of the Markov chain at the k-th iteration, we show in Lemma C.1 (see Appendix C) that

E[f_n(x^{(k+1)}) | x^{(k)}] ≤ (σ²_V/((σ²_A)^{(k)} + σ²_V))² f_n(x^{(k)}) + b, (42)

where b = O(1).
Note that in Eq. (42), the coefficient (σ²_V/((σ²_A)^{(k)} + σ²_V))² depends on the coordinate (σ²_A)^{(k)} of the state x^{(k)} and is not bounded away from 1, since (σ²_A)^{(k)} can be arbitrarily close to 0. Therefore, this coefficient cannot be bounded by some λ with 0 < λ < 1, and we cannot directly establish the traditional drift condition Eq. (2) from Eq. (42). In the following, we establish the generalized drift condition of Definition 2.3 using a "large set". According to Eq. (38), for large enough n we have ∆/(n−1) > σ²_V. We then choose a threshold T such that, for large enough n, 0 < T < ∆/(n−1) − σ²_V. Defining λ_T := (σ²_V/(T + σ²_V))² < 1, we obtain

E[f_n(x^{(k+1)}) | x^{(k)}] ≤ λ_T f_n(x^{(k)}) + b, ∀x^{(k)} ∈ R_T,

where the "large set", R_T, is defined by

R_T := {x ∈ X : T ≤ σ²_A ≤ T_u}, (44)

for a suitable constant-order upper threshold T_u (the role of the upper restriction is discussed in the remarks following Section 3.3.2). In order to satisfy the new drift condition in Definition 2.3, we verify (C1'). Note that in our example the transition kernel of the Gibbs sampler can be written as a composition of reversible steps, and only the last step of the Gibbs sampler updates the parameter σ²_A, which is used to define the "large set" R_T. Therefore, in order to verify Eq. (8), it suffices to check whether the value of the drift function increases in the last step whenever updating σ²_A takes the chain outside of R_T. Indeed, the value of f_n(x) increases if the Markov chain is outside of the "large set" after updating σ²_A. Therefore, the generalized drift condition in Definition 2.3 is satisfied. Now we can use Theorem 2.6 to derive a quantitative bound for the Gibbs sampler. We first show in Lemma D.1 (see Appendix D) that if T = Θ(1), by choosing the size of the "small set" R = {x ∈ X : f_n(x) ≤ d} to satisfy d = O(1) and d > 2b/(1 − λ_T), there exists a probability measure Q(·) such that the Markov chain satisfies the minorization condition in Eq. (9) with minorization volume ε = Θ(1).
Next, we show in Lemma E.1 (see Appendix E) that, with the initial state given by Eq. (39), there exists a positive integer N, which does not depend on k, such that for all n ≥ N the last two terms of Eq. (10), namely kπ(R_T^c)/π(R_T) + Σ_{i=1}^k P^i(x^{(0)}, R_T^c), are of order k/√n. Now we derive the quantitative bound for the Gibbs sampler for large enough n by combining the results above.
Next, we translate the quantitative bound in Theorem 3.6 into convergence complexity in terms of mixing time, using arguments similar to Corollary 2.12 and Corollary 2.13. We show that the convergence complexity is O(1). Intuitively, to make the term C_1γ^k in Eq. (40) arbitrarily small, k needs to have complexity order O(1), since γ does not depend on n. The residual terms C_2 k/√n + C_3 k/n are arbitrarily small whenever k = o(√n). Therefore, the complexity bound on the mixing time of the Gibbs sampler equals the smaller complexity order between O(1) and o(√n), which is O(1). The formal result is given in the following.
THEOREM 3.7. For any 0 < c < 1, recall the definition of the mixing time K_{c,x} in Definition 2.11. We write K_{c,x} as K_{c,x}(n) to emphasize its dependence on n. Under the assumptions of Theorem 3.6, with initial state x^{(0)} given by Eq. (39), there exist N_c = Θ(1) and a large enough n_c such that K_{c,x^{(0)}}(n) ≤ N_c for all n ≥ n_c; that is, the mixing time is O(1). PROOF. See Appendix B.
3.3.2. Initial state.
The main results in Theorem 3.6 and Theorem 3.7 hold for the particular initial state given in Eq. (39). We now discuss initial states other than the one given in Eq. (39). Note that the new bound in Lemma C.1 holds for any initial state in the "large set". Therefore, we can extend the results in Theorem 3.6 to obtain bounds when the Markov chain starts from other initial states in the "large set". Recall the assumption on the observed data {Y_i} in Eq. (38): we have assumed there exists δ > 0 such that (1/(n−1))Σ_{i=1}^n(Y_i − Ȳ)² ≥ σ²_V + δ for large enough n. Note that the existence of such a δ is sufficient to obtain the results in Theorem 3.6 and Theorem 3.7. In order to get bounds when the MCMC algorithm starts from other initial states, we assume δ is known and establish upper bounds using δ explicitly. We define the "large set" in Eq. (44) using T = δ, and the extension of Theorem 3.6 is given in the following.
THEOREM 3.8. Under the assumption Eq. (38), if the Markov chain starts from any initial state x^{(0)} ∈ R_δ (defined in Eq. (44) with T = δ), there exist a positive integer N, which does not depend on k, some constants C_1 > 0, C_2 > 0, C_3 > 0, C_4 > 0, and 0 < γ < 1, such that for all n ≥ N and for all k, we have

∥L(x^{(k)}) − π(·)∥_var ≤ [C_1 + f_n(x^{(0)})]γ^k + C_2 k/√n + C_3 k/n + C_4 f_n(x^{(0)}) k/n,

where f_n(·) is the fitted family of drift functions defined in Eq. (41).
PROOF. Following the same proof as Theorem 3.6 but keeping the term f_n(x^{(0)}), the first two terms of the upper bound given in Eq. (10) can be replaced by [C_1 + f_n(x^{(0)})]γ^k, and the last term of the upper bound in Eq. (10) can be replaced by C_4 f_n(x^{(0)}) k/n. From Theorem 3.8, using arguments similar to Corollary 2.12, we can immediately obtain a complexity bound when the Markov chain starts within a subset of the "large set". This result suggests that if the Markov chain starts from an initial state which is not "too far" from the state given in Eq. (39), the mixing time is still of logarithmic order. Note that {x ∈ R_δ : f_n(x) = o(n/log n)} defines a subset of the "large set" R_δ, and the above result shows that the mixing time is O(log n) if the initial state is in this subset. The order o(n/log n) comes from a balance between f_n(x^{(0)})γ^k and f_n(x^{(0)}) k/n. We conjecture that the same complexity order of O(log n) for the mixing time may hold even if the initial state is in a larger subset, for example {x^{(0)} ∈ R_δ : f_n(x^{(0)}) = Θ(n)}. However, in order to prove this, we would need to derive a tighter upper bound on Σ_{i=1}^k P^i(x^{(0)}, R_δ^c), which is a non-trivial task. We therefore leave it as an open problem.
Finally, we do not have upper bounds for the Markov chain when the initial state is outside of the "large set", since the new bound in Theorem 2.6 requires the Markov chain to start within the "large set". For this particular Gibbs sampler example, numerical experiments suggest that, if the Markov chain starts from a "bad" state, the number of iterations required for the Markov chain to mix can be much larger than O(log n). In high-dimensional settings, when the dimension of the state space goes to infinity, the Markov chain may not mix fast starting from just any state. This observation is loosely consistent with various observations in [20]. • Choice of the "large set": Eq. (42) implies that states whose value of σ²_A is close to zero are "bad" states. Therefore, the goal of choosing the "large set" in Eq. (44) is to rule out those states. Note that we have applied the trick that ruling out more states with "high energy" can make Eq. (8) easier to establish. In the "large set" R_T defined by Eq. (44), we have also ruled out the states x whose value of σ²_A is larger than the upper threshold. Note that these states are not "bad" states; however, by ruling them out, Eq. (8) is easy to establish. Although the resulting bound in Eq. (46) is loose, it is already enough to show that the mixing time of the Gibbs sampler is O(1). The proof of Lemma E.1 only makes use of the form of the drift function and the definition of the "large set", and does not depend on the particular form of the transition kernel of the Gibbs sampler. We expect that, in general, tighter upper bounds on kπ(R_T^c) + Σ_{i=1}^k P^i(x^{(0)}, R_T^c) could be obtained, depending on the choice of "large set" and the MCMC algorithm being analyzed. This may involve carefully bounding the tail probability of the transition kernel.
• The constants in Theorem 3.6: In Theorem 3.6, we do not compute the constants N, C_1, C_2, and C_3 explicitly. Actually, C_2 is given explicitly in Lemma E.1. C_3 is also given in Lemma E.1, but it depends on the unknown constant δ > 0 from the assumption Eq. (38). Furthermore, C_1 can be computed explicitly after much more tedious computations. Finally, N depends on the unknown constant N_0 in Eq. (38) and on the resulting concentration property of the posterior distribution of the parameter σ²_A implied by Eq. (38). Therefore, if we make stronger assumptions on the observed data {Y_i}, it is possible to compute all the constants in Theorem 3.6 explicitly through tedious computations, though we do not pursue that here.
APPENDIX A: PROOF OF THEOREM 2.6
Recall that R denotes the "small set" and R_0 denotes the "large set". We first construct a transition kernel for a "restricted" chain defined on R_0, denoted P̃(x, ·), ∀x ∈ R_0. One goal of this construction is that the stationary distribution of the kernel P̃ equals π(·) restricted to the "large set" R_0, i.e., π′(dx) := π(dx)/π(R_0), ∀x ∈ R_0. We consider two different constructions, depending on whether (C1) or (C1') in Definition 2.3 holds.
• If (C1) in Definition 2.3 holds, then we define the kernel P̃ as the transition kernel of the "trace chain" constructed as follows. Let X^{(m)} be a Markov chain with kernel P; we define a sequence of random entrance times {m_i}_{i∈N} by m_0 := min{m ≥ 0 : X^{(m)} ∈ R_0} and m_{i+1} := min{m > m_i : X^{(m)} ∈ R_0}. Then {X^{(m_i)}}_{i∈N} is the "trace chain", with transition kernel P̃(x, B) := P(X^{(m_1)} ∈ B | X^{(m_0)} = x), ∀x ∈ R_0. Since the "trace chain" is obtained by "stopping the clock" whenever the original chain is outside R_0, the constructed P̃ is a valid transition kernel. It can be verified that the stationary distribution of this "trace chain" is π′. • If (C1') in Definition 2.3 holds, then we construct the "restricted chain" using the kernel P̃ = ∏_{i=1}^I P̃_i, where P̃_i(x, dy) := P_i(x, dy) for x, y ∈ R_0, x ≠ y, and P̃_i(x, {x}) := 1 − P_i(x, R_0\{x}), ∀x ∈ R_0. Note that since each P_i is reversible, one can easily verify that each P̃_i is also reversible and that the stationary distribution of P̃ is π′.
Suppose that X^{(m)} and Y^{(m)} are two realizations of the Markov chain, where X^{(m)} starts from the initial distribution ν(·) and Y^{(m)} starts from the stationary distribution π(·). We define X̃^{(m)} and Ỹ^{(m)} to be two realizations of the constructed "restricted" Markov chain on the "large set" with transition kernel P̃(x, ·), ∀x ∈ R_0. We assume X̃^{(m)} starts from the same initial distribution ν(·) as X^{(m)}, and Ỹ^{(m)} starts from π′(·). Since ν(R_0) = 1, we may assume X^{(0)} = X̃^{(0)}. The rest of the proof is a modification of the original coupling proof of the drift-and-minorization method in [44].
We define the hitting times of (X̃^{(m)}, Ỹ^{(m)}) on R × R by t_1 := min{m ≥ 0 : (X̃^{(m)}, Ỹ^{(m)}) ∈ R × R} and t_{i+1} := min{m > t_i : (X̃^{(m)}, Ỹ^{(m)}) ∈ R × R}.
Let N_k := max{i : t_i < k}. Then N_k denotes the number of times (X̃^{(m)}, Ỹ^{(m)}) hits R × R in the first k iterations. The following result gives an upper bound for ∥L(X^{(k)}) − L(Y^{(k)})∥_var.
LEMMA A.1. When the Markov chain satisfies the minorization condition in Eq. (9), for any j > 0, we have an upper bound on ∥L(X^{(k)}) − L(Y^{(k)})∥_var in terms of (1 − εQ(R_0))^j, P(N_k < j), and the probabilities that the original and restricted chains disagree. PROOF. First, by the triangle inequality,

∥L(X^{(k)}) − L(Y^{(k)})∥_var ≤ ∥L(X̃^{(k)}) − L(Ỹ^{(k)})∥_var + ∥L(X^{(k)}) − L(X̃^{(k)})∥_var + ∥L(Y^{(k)}) − L(Ỹ^{(k)})∥_var.

By the coupling inequality, ∥L(X^{(k)}) − L(X̃^{(k)})∥_var ≤ P(X^{(k)} ≠ X̃^{(k)}), and similarly for the other pair. Finally, the Markov chain with kernel P̃(x, ·) satisfies both the drift condition

E[f(X̃^{(1)}) | X̃^{(0)} = x] ≤ λf(x) + b, ∀x ∈ R_0, (53)

and a minorization condition with minorization volume εQ(R_0). Using the result from [44, Theorem 1], we then obtain the stated bound on ∥L(X̃^{(k)}) − L(Ỹ^{(k)})∥_var. Next, we upper bound the term P(N_k < j) slightly more tightly than [44]. Define the i-th gap between return times by r_i := t_i − t_{i−1}, ∀i > 1. LEMMA A.2. For any α > 1, j > 0, and k > j,

P(N_k < j) ≤ α^{−k} E[∏_{i=1}^j α^{r_i}].

PROOF. Note that {N_k < j} = {t_j ≥ k} = {r_1 + ⋯ + r_j ≥ k}, and r_1 + ⋯ + r_j ≥ j by definition. The result then follows from Markov's inequality: P(N_k < j) = P(r_1 + ⋯ + r_j ≥ k) ≤ α^{−k} E[α^{r_1+⋯+r_j}]. Next, we bound E[∏_{i=1}^j α^{r_i}] following exactly the same arguments as in [44, Proof of Lemma 4 and Theorem 12]. By the drift condition for P̃(x, ·) in Eq. (53), taking expectations of both sides of Eq. (53) under π′ leads to E_{π′}[f(x)] ≤ b/(1 − λ). Therefore, setting j = rk + 1 and combining all the results together yields the claimed bound. Finally, we slightly relax the upper bound by replacing α^{rk+1} with α^{rk} in both the denominator and numerator; Theorem 2.6 then follows by further relaxing the remaining terms. APPENDIX B: PROOF OF THEOREM 3.7. Using Theorem 3.6, one sufficient condition for ∥L(x^{(k)}) − π(·)∥_var ≤ c is that n ≥ N and C_1γ^k + C_2 k/√n + C_3 k/n ≤ c. This requires that the number of iterations, k, satisfy the corresponding inequality; any k (if it exists) satisfying this inequality provides an upper bound for the mixing time K_{c,x^{(0)}}(n).
That is, for any n ≥ N, any k satisfying the inequality above bounds the mixing time, which is equivalent to the mixing time being bounded by a constant for large enough n. APPENDIX C: Letting x^{(k)} = ((σ²_A)^{(k)}, μ^{(k)}, θ_1^{(k)}, . . . , θ_n^{(k)}) be the state of the Markov chain at the k-th iteration, we establish the one-step bound Eq. (42). PROOF. In this proof, we write f_n(x) as f(x) for simplicity. Recall the order of Gibbs sampling for computing the first scan: first μ^{(1)}, then {θ_i^{(1)}}, then (σ²_A)^{(1)}. It suffices to show that the claimed coefficient and constant b satisfy Eq. (42). Note that we can compute the expectation E[f(x^{(1)}) | x^{(0)}] in three steps, according to the reverse order of the Gibbs sampling. To simplify notation, we define the σ-algebras that we condition on: G_A := σ({θ_i^{(1)}}, μ^{(1)}), G_θ := σ(μ^{(1)}), and G_μ := σ(x^{(0)}). Then we have E[f(x^{(1)}) | x^{(0)}] = E[E[E[f(x^{(1)}) | G_A] | G_θ] | G_μ]. The three steps are as follows: 1. Compute the expectation over (σ²_A)^{(1)} given {θ_i^{(1)}} and μ^{(1)}. This is the conditional expectation E[· | G_A], where we write E[· | G_A] to denote that the expectation is over (σ²_A)^{(1)} (recall that a and b are constants from the prior IG(a, b)) for given θ^{(1)} and μ^{(1)}. 2. Compute the expectation over {θ_i^{(1)}} given μ^{(1)}. This is the conditional expectation E[· | G_θ], where we use E[· | G_θ] to denote that the expectation is over {θ_i^{(1)}} for given μ^{(1)} and (σ²_A)^{(0)}. 3. Compute the expectation over μ^{(1)}. This is the conditional expectation E[· | G_μ], where we use E[· | G_μ] to denote that the expectation is over μ^{(1)} given x^{(0)}. In the following, we compute the three steps, respectively. We use O(1) to denote terms that can be upper bounded by some constant that does not depend on the state.
Recall that E[· | G_A] denotes that the expectation is over (σ²_A)^{(1)}, where a and b are the constants from the prior IG(a, b). The mean and variance of (σ²_A)^{(1)} can be written in closed form, since (σ²_A)^{(1)} follows an inverse Gamma distribution. Denoting S := Σ_i(θ_i^{(1)} − θ̄^{(1)})²/(n−1), we can write the mean of (σ²_A)^{(1)} in terms of S; similarly, the variance of (σ²_A)^{(1)} can be written in terms of S as well. Substituting the mean and variance of (σ²_A)^{(1)} in terms of S, we obtain Eq. (83). Therefore, it suffices to compute certain conditional moments of θ̄^{(1)} and S. Note that the {θ_i^{(1)}} are independent (but not identically distributed) conditional on G_θ. The first term involves E[(θ̄^{(1)})² | G_θ]; for the other two terms involving S, we have the following lemma.
Next, using the preceding results, we can write f″(x^{(1)}) explicitly and further bound its terms; combining all the results yields Eq. (98). Recall that the expectation E[· | G_μ] is over μ^{(1)}. In the expression for f″(x^{(1)}) obtained in the previous step, the only term involving μ^{(1)} can be handled directly, which completes the proof. It remains to show that the minorization volume is asymptotically bounded away from 0. Since R ⊆ R′ for a suitably chosen set R′, it suffices to show that the minorization volume ε associated with R′ is asymptotically bounded away from 0. One common technique to obtain ε is to integrate the infimum of the densities of P(x^{(0)}, ·), where in our case the infimum is over all θ̄^{(0)} and (σ²_A)^{(0)} in the small set. The size of the uncertainties of the initial θ̄^{(0)} and (σ²_A)^{(0)} is of order O(1/√n). Therefore, if for any fixed initial state the transition density concentrates at a rate of Ω(1/√n), then ε is bounded away from 0.
For the density function of the Markov transition kernel P(x^{(0)}, ·), recall the order of the Gibbs sampler: first μ^{(1)}, then {θ_i^{(1)}}, then (σ²_A)^{(1)}. Then ε can be computed using three steps of integration, according to the reverse order of the Gibbs sampler. 1. For given μ^{(1)} and {θ_i^{(1)}}, integrate the infimum of the density of (σ²_A)^{(1)}. Note that the infimum is over a subset of θ̄^{(0)} and (σ²_A)^{(0)}. However, this density does not depend on θ̄^{(0)} and (σ²_A)^{(0)}. Therefore, the integral of the infimum of the density in this step always equals one. 2. For given μ^{(1)}, integrate the infimum of the densities of {θ_i^{(1)}}. We first note that the {θ_i^{(1)}} appear in the densities only through θ̄^{(1)} and S = Σ_i(θ_i^{(1)} − θ̄^{(1)})²/(n−1). Therefore, instead of integrating over (θ_1^{(1)}, . . . , θ_n^{(1)}), we can integrate over θ̄^{(1)} and S. Furthermore, we have shown in the proof of Lemma C.2 that θ̄^{(1)} is conditionally independent of S given (σ²_A)^{(0)}, so we can integrate them separately. Finally, note that the infimum is over the small set. Overall, we need to show that g̃_n(μ^{(1)}) is bounded away from 0, where g̃_n(μ^{(1)}) is defined as the integral over S and θ̄ of the infimum of the corresponding densities, with f_S((σ²_A)^{(0)}, n; S) denoting the density function of S. 3. Finally, we integrate the infimum of the densities of μ^{(1)} to get ε, that is, ε = ∫ dμ g̃_n(μ) times the infimum of the density of μ^{(1)}. In the following, we show that ε is bounded away from 0 in three steps.
First, it is easy to see that the density of S does not depend on μ^{(1)}, and we show that the corresponding integral ∫ dS of the infimum is bounded away from 0. Second, we show that the integral ∫ dθ̄ of the infimum of the density of θ̄^{(1)} is bounded away from 0, where erf(z) := (2/√π)∫_0^z e^{−t²}dt and C and C′ are some constants. Finally, we complete the proof by bounding the remaining integral. Defining a standardized version S′ of S and denoting by f′_{S′}(σ²_A, n; S′) its density, it suffices to lower bound the corresponding integral, where f″_{S″}(σ²_A, n; S″) is the density function of a further standardization S″. Next, note that the limiting variance does not depend on n. We define f̃(z, σ²_A; x), ∀z ∈ R, as the density function of a random variable X̃_{z,σ²_A}; then we know X̃_{z,σ²_A} →_d N(z, 1). The rest of the proof first lower bounds ∫ dS″ of the infimum of f″_{S″}(σ²_A, n; S″) using the density function f̃(z, σ²_A; x), and then shows that it is asymptotically bounded away from 0.
Note that the scaling factor involving n − 1 is not random, and there exists a constant C_0 such that the required bound holds. D.2. Proof of Eq. (114). We again omit the subscripts for simplicity. The goal is to lower bound the integral ∫ dθ̄ of the infimum of the density of θ̄^{(1)}, as in Eq. (127). Note that there exist constants C_1 and C_2 controlling the mean, and another constant C_3 controlling the variance. Therefore, we obtain the required lower bound on the integral, where C_4 := C_1√C_3 and C_5 := C_2√C_3.
D.3. Proof of Eq. (115).
We omit the subscripts for simplicity. We show that the remaining integral is asymptotically bounded away from 0. Note that when n → ∞, we have (σ²_A)′_n → σ̄²_A, so the density functions N(±d_n, (σ²_A)′_n/n; μ) concentrate at 0, which completes this step. PROOF (of Lemma E.1). In this proof, we write f_n(x) as f(x) for simplicity. We first consider a Markov chain starting from the initial state x^{(0)} defined by Eq. (39). By Eq. (38), we have (σ²_A)^{(0)} = Σ_{i=1}^n(Y_i − Ȳ)²/(n−1) − σ²_V for large enough n, which implies f(x^{(0)}) = 0. Therefore, for large enough n, we have E[f(x^{(1)})] ≤ b from Lemma C.1. Furthermore, we can continue to get upper bounds E[f(x^{(i)})] ≤ ib for all i = 1, . . . , k. By Markov's inequality, this bounds P^i(x^{(0)}, R_T^c) for i = 1, . . . , k, and summing over i yields the bound on Σ_{i=1}^k P^i(x^{(0)}, R_T^c). Next, we consider a Markov chain starting from π. According to Lemma C.1, we obtain a bound involving E_π[·], where E_π[·] denotes the expectation over x ∼ π(·). By Hölder's inequality (applied in the reverse direction), the required moments can be controlled. Next, according to Lemma E.2, we know that E_π(1/σ²_A) ≤ 2/δ and E_π(1/(σ²_A)²) ≤ 2/δ² for large enough n.
More specifically, by Lemma E.2, we obtain an upper bound of 1 + 2σ²_V/δ for large enough n. Therefore, by Markov's inequality, we obtain the bound on kπ(R_T^c). LEMMA E.2. There exists a positive integer N, which depends only on a, b, σ²_V, and δ, such that for all n ≥ N, we have E_π(1/σ²_A) ≤ 2/δ and E_π(1/(σ²_A)²) ≤ 2/δ². PROOF. The posterior distribution can be written in terms of f_a(x, Y_1, . . . , Y_n), which we use to denote the joint distribution of x and {Y_i} when IG(a, b) is used as the prior for σ²_A. It therefore suffices to show that the corresponding ratios of integrals of f_a(x, Y_1, . . . , Y_n) are (asymptotically) bounded. We focus on the first ratio; the second ratio can be handled by a similar argument.
Integrating out the remaining coordinates, we can write E_π(1/σ²_A) as a function of ∆ = Σ_i(Y_i − Ȳ)². Denote h_n(∆) := E_π(1/σ²_A). Next, we show that h_n((n−1)(c + σ²_V)) is (asymptotically) bounded for any fixed c > 0. Changing variables in the resulting integral, the error term O(n^{−1/2}) depends only on the constants a, b, and σ²_V. Finally, since for all n ≥ N_0 we have ∆ ≥ (n−1)(σ²_V + δ), this implies h_n(∆) ≤ (1/δ)(1 + O(n^{−1/2})), ∀n ≥ N_0. Therefore, there exists a large enough positive integer N_0, which depends only on a, b, σ²_V, and δ, such that for all n ≥ N_0 we have E_π(1/σ²_A) = h_n(∆) ≤ (1/δ)(1 + O(n^{−1/2})) ≤ 2/δ. For E_π(1/(σ²_A)²), we can follow a similar argument to show that E_π(1/(σ²_A)²) ≤ 2/δ² for large enough n. Therefore, we conclude that there exists a large enough positive integer N, which depends only on a, b, σ²_V, and δ, such that for all n ≥ N, both bounds hold. APPENDIX F: PROOF OF THEOREM 3.1. According to the k-step drift condition, for all states x in the small set we have c√p ≤ ∥x∥ ≤ C√p for some positive constants c < 1 and C > 1. We then choose k such that ∥x∥²/4^k = O(1/p), so that the integral of the minimum of the two one-dimensional densities is 1 − O(1/p). By writing the multivariate Gaussian density as a product of one-dimensional densities, the total minorization volume can be controlled so that ε = (1 − O(1/p))^p > 0 is bounded away from zero as p → ∞. Therefore, we can choose k = ⌊C log(p)⌋ + 1 for a large enough constant C. Overall, we have proven a k-step drift condition, and the corresponding minorization condition gives an ε which is asymptotically bounded away from zero, which completes the proof.
APPENDIX G: PROOF OF THEOREM 3.3. We establish a drift condition E[f_n(X^{(k+1)}) | x^{(k)}] ≤ λf_n(x^{(k)}) + b, where b = O(1/n). For simplicity of notation, we omit the index k in the rest of the proof. The computation of E[f_n(X^{(k+1)}) | x^{(k)}] has two steps. We first compute the conditional expectation over β | λ ∼ Ga(ρ + nα, δ + nλ̄). Using the fact that 1/β has an inverse gamma distribution, its conditional mean is available in closed form. Next, we compute the conditional expectation over λ given β. Note that by summing (conditionally) independent Gamma distributions, we know

nλ̄ | β ∼ Ga(n(Ȳ + α), 1 + β), (167)

which gives the conditional mean and variance of λ̄. Using the assumption on Ȳ and the fact that 1/(1 + β) ∈ (0, 1], we obtain the claimed drift condition. Now the proof can be completed by verifying that the Gibbs sampler satisfies the minorization condition P(x, ·) ≥ εQ(·) for all x in the small set {|λ̄ − α/β| = O(1/√n)}. We only need to show that ε is asymptotically bounded away from 0 as n → ∞. Since the last step of updating β in the Gibbs sampler does not depend on the previous state, it suffices to derive the minorization condition for the step nλ̄ | β ∼ Ga(n(Ȳ + α), 1 + β) for all β in the small set. Let β_max and β_min be the maximum and minimum values of β in the small set. Then, from the explicit form of the density of λ̄, one can see that ε must be asymptotically bounded away from 0 if 1/(1 + β_min) − 1/(1 + β_max) = O(1/√n), which is satisfied by the small set.
This completes the proof.
APPENDIX H: PROOF OF REMARK 3.5. [23, Appendix C] states another way to obtain samples from the posterior of the MCMC model related to the James-Stein estimator. More specifically, recall the model

Y_i | θ_i ∼ N(θ_i, σ²_V), 1 ≤ i ≤ n, θ_i | μ, σ²_A ∼ N(μ, σ²_A), 1 ≤ i ≤ n, μ ∼ flat prior on R,

where σ²_V is assumed to be known, Y = (Y_1, . . . , Y_n) is the observed data, and x = (σ²_A, μ, θ_1, . . . , θ_n) are the parameters. The posterior can then be written as

π(θ, μ, σ²_A | Y) = π(θ | μ, σ²_A, Y) π(μ | σ²_A, Y) π(σ²_A | Y),

where π(θ | μ, σ²_A, Y) is a product of independent univariate normal densities and π(μ | σ²_A, Y) is a normal distribution. Therefore, one can use a rejection sampler with proposal IG(a, b) to obtain independent samples from π(σ²_A | Y). However, we show that the acceptance probability of this rejection sampler decreases (typically exponentially) fast with n.
To see this, let g(σ_A²) be the density of the IG(a, b) proposal. Using the standard fact that a rejection sampler with envelope constant M (the supremum of the ratio π(σ_A² | Y)/g(σ_A²), attained at the maximizing value of σ_A²) accepts with probability 1/M, the acceptance probability of the rejection sampler can be bounded from above, where the last inequality comes from exp(x − 1) ≥ x.
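To see the speed of the resulting decay numerically, the following minimal Monte Carlo sketch (Python) evaluates a quantity of the form E[Z^((n−1)/2)] with Z = X·exp(1 − X) ≤ 1, the shape suggested by the inequality above; the distribution chosen for X (a normalized sample variance with an assumed mismatch factor c between target and proposal scales) is purely illustrative and not the paper's exact quantity.

import numpy as np

rng = np.random.default_rng(0)

def acceptance_estimate(n, c=1.5, n_mc=100_000):
    # X: normalized sample variance of n i.i.d. N(0,1) draws, scaled by an
    # assumed mismatch factor c between the target and the IG proposal scale.
    x = c * rng.chisquare(n - 1, size=n_mc) / (n - 1)
    z = x * np.exp(1.0 - x)  # Z <= 1, with equality iff X = 1
    return np.mean(z ** ((n - 1) / 2))

for n in (10, 20, 50, 100, 200):
    print(f"n = {n:4d}  estimated acceptance ~ {acceptance_estimate(n):.3e}")

For any fixed mismatch c ≠ 1, the printed estimates decay exponentially in n.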
We can see that, under mild conditions such that Σ_{i=1}^n (Y_i − Ȳ)²/(n − 1) converges to a constant, the acceptance probability of the rejection sampler, E[Z^{(n−1)/2}], goes to zero very fast. | 2018-08-10T00:26:13.353Z | 2017-08-02T00:00:00.000 | {
"year": 2017,
"sha1": "6a334caae360fce0369f1929f0a4e3b1ec28bd9c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6a334caae360fce0369f1929f0a4e3b1ec28bd9c",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
250677155 | pes2o/s2orc | v3-fos-license | A continuous-time capacitance to voltage converter for microcapacitive pressure sensors
This paper reports a continuous-time capacitance to voltage converter (CVC), based on the charge integration circuit, for capacitive pressure sensor applications. Unlike conventional charge integrators which need a large feedback resistor for biasing, the proposed method uses low duty cycle periodic reset to establish a robust dc bias at the sensing electrode. The CVC has the merits of low noise, linear transfer characteristics and low susceptibility to system offset. A CVC has been designed with standard 0.35µm CMOS technology. For a 3.3-V supply, it achieves 1mW power consumption, 0∼0.8pF detection range and 0.06% resolution.
Introduction
Precise measurement of capacitance difference or ratio in a continuous form is very important for capacitive pressure sensors. Compared with discrete-time approaches, for example switched-capacitor (SC) circuits, continuous-time voltage sensing has intrinsically higher resolution because it does not suffer from noise folding [1]. Recently, various methods have been reported [2]-[4] to deal with capacitance to voltage conversion. Some of them, for instance the method utilizing a ratio-arm bridge [2], are symmetrical and sensitive, but they have the disadvantage that transformer coils have to be used, which are difficult to implement monolithically. Others, for example the modified Martin oscillator with a microcontroller [3], are not capable of handling capacitance changes with frequencies higher than 10 Hz. Another approach is based on charge integration [4]. It is less susceptible to parasitics; however, a fairly large feedback resistor is usually needed to bias the sensing electrode. In the past, the large resistor could be implemented either by sub-threshold transistors or by long transistors in the triode region [5,6]. However, the values of such MOS resistors depend on the terminal voltages, which are difficult to control. For pressure sensor applications, since the capacitance and the output voltage of the CVC change over a wide range, the linearity of the feedback resistor degrades seriously. Besides, a large transistor will introduce parasitic capacitance which would cause signal attenuation in a capacitive sensing front-end.
This paper presents a continuous-time CVC which does not have the limitations mentioned above. Instead of using large feedback resistors, the bias voltage at the sensing electrode is established by low duty cycle periodic reset. The proposed CVC has the advantages of low noise, linear capacitance to voltage transfer characteristics and low susceptibility to system offset. In Section 2, the circuit configuration is presented. Section 3 describes the op amp design, followed by the digital control logic implementation in Section 4. In Section 5, system level simulation results are provided to verify the circuit performance, and the conclusion is finally drawn in Section 6.
Circuit configuration
The basic CVC structure is shown in Figure 1(a). C_I is the feedback capacitor. C_s and C_s + ΔC are a pair of capacitive pressure sensors whose common node is connected to the inverting input of the op amp. C_P represents the total parasitic capacitance at the sensing electrode. The square-wave bias reset signals are generated on-chip by four MOS switches, S1, S1', S2, S2'. The CVC operation can be divided into two consecutive phases, namely, the sensing phase and the reset phase. As shown in Figure 1(b), in the sensing phase, S1 and S1' are open, S2 and S2' are closed. The CVC functions as a charge integrator and, as Equation (1) states, its output dc voltage is proportional to the capacitance difference, with a nominal gain of ΔC/C_I. During the reset phase, S1 and S1' are closed, S2 and S2' are open. By releasing the charge periodically, this switching bias strategy suppresses the undesirable charge leakage to or from the sensing electrode and eliminates the bias voltage drift caused by charging. When the reset phase is over, S1 and S1' are opened before S2 and S2' are closed to avoid shorting between the power rails.
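A behavioral sketch of the sensing phase can make the transfer explicit. The Python snippet below is an illustration under assumptions not spelled out in Equation (1): it supposes the sensor pair is driven by complementary sources of amplitude V_b (a symbol introduced here only for the sketch), so that the net charge delivered to C_I is V_b·ΔC.

def cvc_output(delta_c_pf, c_i_pf=1.65, v_b=1.65):
    # DC output voltage (V) during the sensing phase: V_out = V_b * dC / C_I.
    return v_b * delta_c_pf / c_i_pf

# Detection range of this design: 0 to 0.8 pF.
for dc in (0.0, 0.4, 0.8):
    print(f"dC = {dc:.1f} pF -> Vout = {cvc_output(dc):.2f} V")

Note that choosing V_b = 1.65 V (half the 3.3 V supply) reproduces the unity 1 V/pF slope later reported for the C-V sweep in Section 5.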
In order to assess the validity of the proposed CVC, it is used to convert the output of a three-plate capacitive pressure sensor [7] to voltage. Under external pressure, the capacitance of the variable sensor changes from 1.26 pF to 2.06 pF while the reference sensor is fixed at 1.26 pF. C_P is typically around 4 pF and the feedback capacitor C_I is selected to be 1.65 pF as a tradeoff between the circuit noise density and the output voltage range [1]. According to the system requirements, the CVC should be able to detect 0.06% of the maximum capacitance change.
OP AMP design
Since the op amp is the key unit of the whole circuit, the CVC design starts with the op amp design. To translate the CVC requirements into op amp specifications, several non-idealities have to be considered.
As indicated in Equation (1), the CVC provides a nominal voltage gain of ΔC/C_I. We now calculate the actual gain when the op amp exhibits a finite dc gain A_VO. An exact analysis [8] expresses the output voltage of the CVC in terms of C_T = 2C_s + ΔC + C_P + C_I and the feedback factor F_CV = C_I/C_T. Equation (2) implies that the amplifier suffers from a relative gain error of 1/(F_CV·A_VO). Assuming F_CV = 0.193 (C_I = 1.65 pF and C_T = 8.57 pF), the op amp dc gain should be greater than 8600 so that the gain error stays below 0.06%.
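The dc gain requirement can be re-derived in a few lines; the Python snippet below simply recomputes the 8600 figure from the stated capacitances and the 0.06% error target.

# Relative gain error of the CVC with finite op amp gain: 1 / (F_CV * A_VO).
C_I, C_T = 1.65e-12, 8.57e-12   # farads
F_CV = C_I / C_T                # ~0.193
target_error = 0.0006           # 0.06% resolution requirement

A_VO_min = 1.0 / (F_CV * target_error)
print(f"F_CV = {F_CV:.3f}, required A_VO > {A_VO_min:.0f}")  # ~8600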
Another non-ideality is the offset. There are two types of offset: the mismatch between the nominal capacitances of the sensor pair and the op amp offset. To explore their effects on the circuit performance, let us consider the schematic diagram shown in Figure 2.
Equation (3) indicates that the output voltage is modified by two additional terms. However, the second and third terms, corresponding to the two offset components, are both constants, which can easily be cancelled through calibration.
Once the system non-idealities are explored, the next step is to determine an appropriate op amp topology for the CVC. Commonly used op amps are the two-stage Miller, telescopic and folded cascode amplifiers. For pressure sensor applications, since the op amp needs to drive a capacitive load and the output voltage should be able to go down to the same potential as the input, the Miller and telescopic amplifiers are not suitable. Therefore, the folded cascode op amp is chosen. The schematic of the folded cascode amplifier is shown in Figure 3. Several design guidelines are followed to optimize the op amp noise performance. Thermal noise is minimized by choosing the (W/L) ratio of the input pair larger than that of the active load, whereas to curtail the flicker noise, the area of the input PMOS transistors is increased by using 0.6 μm as the gate length, and the active load is made longer than the input pair [9]. Because the capacitance change of the variable pressure sensor and the output voltage are always positive, a modified cascode current mirror is used to improve the upward swing of the op amp. Table 1 lists some op amp specifications from simulation. Important performance indexes, for example the dc gain and output swing, are slightly over-designed to allow for fabrication variation.
Control logic implementation
To establish the bias voltage at the sensing electrode, the CVC is reset once every 256 clock cycles to release the undesirable charge (this clock down-conversion ratio is non-critical and can be determined according to the system requirements). Small-size MOS switches (W/L = 1/0.35) are used to minimize both leakage current and parasitic capacitance. S1 and S2' are PMOS switches, S2 is an NMOS switch, and S1' is a PMOS switch with a half-size dummy on the left. The generator circuit and simulation results are shown in Figure 4. In the sensing phase, switches S1 and S1' are open, S2 and S2' are closed, and the CVC is basically a charge integrator. During the reset phase, switches S2 and S2' are open, S1 and S1' are closed, and the excess charge at the sensing electrode is released. When the reset phase is over, S1' is switched off first and almost half of its channel charge is injected into the sensing electrode. Next, S1 is opened; although it has the same charge injection problem as S1', the injected charge from S1 does not disturb charge conservation at the sensing node because the bypass switch S1' is already open. As can be seen from Figure 4(a), two inverters are placed between S1 and S1' to introduce this small delay. The charge injected for compensation must remain at the sensing electrode, and this cannot be the case if S1' is still closed while its dummy is being closed. Therefore, three inverters are added between S1' and S1D' (the dummy switch control signal) to achieve the delay. Finally, S2 and S2' are closed and the CVC resumes the sensing phase.
The ac impedance of the biasing circuit is determined by the off-resistance of the reset switches, whose value is normally in the GΩ range. The parasitic capacitance of the biasing network is the junction capacitance of the small-size switches, which is only several fF. Compared with resistor-based biasing, this biasing strategy has the advantages of a robust dc path, high ac impedance and small parasitic capacitance. Also, unlike in SC circuits, by resetting once every 256 clock cycles, the noise folding is reduced by a factor of 256 and thus becomes insignificant.
Simulation results and discussion
Simulation results have been provided in Section 3 and Section 4 to verify the theoretical performance of the op amp and digital control logic. In this section, the two function blocks are integrated and system level simulation is carried out using 0.35 μm device BSIM3 models. First, the transfer characteristic between the capacitance change ΔC and the output voltage v_out is studied through a parameter sweep. Then, noise analysis is carried out to obtain the output noise and compare it with the required capacitance resolution.
To extract the C-V curve, a 100 kHz square wave is selected as the clock base and ΔC sweeps from 0 to 0.8 pF with a step of 0.1 pF. Simulation results are plotted in Figure 5. As shown, the C-V relationship is highly linear and the slope is equal to one, which is in accordance with Equation (1). Also, the C-V curve is shifted in parallel by the same amount as the offset voltage, which indicates that the offset error can be easily removed through calibration. The capacitance resolution obtained from simulation is 0.03%. The circuit configuration for noise analysis is shown in Figure 6. R_P represents the interconnect resistance. C_P is decomposed into three parts: the parasitic of the pressure sensor C_PS, the parasitic at the sensing electrode C_PA and the parasitic of the op amp input transistors (which is not shown in the diagram). ΔC is selected to be 0.4 pF. As a simulation trick, a huge dummy resistor R_bias = 1 GΩ is used for biasing, whose noise contribution can be ignored. Noise analysis sweeps the frequency from 0.1 Hz to 10 GHz.
From simulation, the output noise voltage at V x is 156.8μV, which corresponds to 0.0196% of the total capacitance change and is below the required capacitance resolution 0.06%.
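The conversion from output noise voltage to capacitance resolution follows directly from the C-V slope; the short check below assumes the 1 V/pF transfer slope extracted from Figure 5.

v_noise = 156.8e-6   # V, simulated output noise at V_x
slope = 1.0          # V/pF, from the C-V sweep of Figure 5
full_scale = 0.8     # pF, total capacitance change

resolution = (v_noise / slope) / full_scale
print(f"capacitance resolution = {resolution:.4%}")  # ~0.0196%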
Figure 6. Noise simulation
The simulation results show that the targeted specifications have been met and the CVC can measure the capacitance change with high resolution. The CVC is designed mainly for a pressure sensor; however, it can be modified for other types of capacitive sensors as well. The capacitance change of accelerometers and gyroscopes, for instance, is differential and can be either positive or negative. Therefore, a fully differential op amp should be used and the downward swing of the op amp needs to be improved to make the CVC output swing symmetrical. Other applications require the noise floor of the CVC to be ultra low. In such cases, more advanced circuit topologies such as chopper stabilization amplifiers can be utilized.
Conclusion
This paper presents a continuous-time capacitance to voltage converter suitable for micro-capacitive pressure sensor applications. The proposed CVC has the merits of low noise, linear capacitance to voltage transfer characteristics and low susceptibility to system offset. A CVC has been designed with standard 0.35 μm CMOS technology for a three-plate capacitive pressure sensor. For a 3.3 V supply, it achieves 1 mW power consumption, a 0~0.8 pF detection range and 0.06% resolution. Simulation results based on the proposed design are given, which closely agree with the theoretical analysis.
The CVC will be sent for fabrication and experimental results will be obtained in the near future. | 2022-06-28T03:01:04.017Z | 2006-01-01T00:00:00.000 | {
"year": 2006,
"sha1": "6c86655a5fcd318a7d0236888093428caf10e7b9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/34/1/168",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6c86655a5fcd318a7d0236888093428caf10e7b9",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3857574 | pes2o/s2orc | v3-fos-license | Feature extraction without learning in an analog Spatial Pooler memristive-CMOS circuit design of Hierarchical Temporal Memory
Hierarchical Temporal Memory (HTM) is a neuromorphic algorithm that emulates sparsity, hierarchy and modularity, resembling the working principles of the neocortex. Feature encoding is an important step to create sparse binary patterns. This sparsity is introduced by the binary weights and the random weight assignment in the initialization stage of the HTM. We propose an alternative deterministic method for the HTM initialization stage, which connects the HTM weights to the input data and preserves the natural sparsity of the input information. Further, we introduce the hardware implementation of the deterministic approach and compare it to the traditional HTM and an existing hardware implementation. We test the proposed approach on the face recognition problem and show that it outperforms the conventional HTM approach.
Introduction
Hierarchical Temporal Memory (HTM) is a neuromorphic algorithm that emulates the structure and functionality of cortical neural networks [11]. HTM can serve as a tool for intelligent data processing in edge computing devices. The increase in the number of edge computing devices and Internet of Things (IoT) applications in recent years leads to the demand for on-sensor processing using analog hardware.
Therefore, the translation of the HTM algorithm into analog hardware can provide a promising solution to computational speed problems [2,7,12].
HTM is divided into two parts: the HTM Spatial Pooler (HTM SP) and the HTM Temporal Memory (HTM TM). The HTM SP has been proven to be useful for visual data processing and classification problems, whereas the HTM TM is used as a prediction and learning algorithm. In this work, we focus on the SP part of HTM. The main functionality of the HTM SP is to form a sparse distributed pattern from the input data and perform feature encoding. Recent works show that it is useful for feature extraction and pattern recognition problems [12].
In this work, we investigate the initialization stage of the HTM SP and propose a rule-based deterministic approach instead of the random weight approach for the initial weight assignment. The main purpose of the rule-based approach is to connect the input to the HTM weights, which allows preserving the natural sparsity and structural information of the inputs. Moreover, we propose the hardware implementation of the rule-based approach and compare it with the conventional random weight approach in terms of power dissipation and on-chip area requirements. Also, we test the system level implementation of the proposed approach on the face recognition problem and show the improvements in the recognition accuracy [10,12]. This paper is organized into 8 sections. Section 2 provides an overview of the HTM algorithm and introduces the mathematical framework of HTM. Section 3 illustrates the difference between the conventional approach and the proposed rule-based approach. Section 4 discusses the hardware implementation of the HTM SP and illustrates the proposed hardware architecture. Section 5 shows how the system level HTM SP algorithm can be used for the face recognition problem. Section 6 shows the results of the system-level and analog hardware implementations. Section 7 provides a discussion of the proposed rule-based method and the corresponding analog hardware. Section 8 concludes the paper.
HTM overview
HTM is a neuromorphic machine learning algorithm that emulates the architecture and biological functionality of the neocortex in the human brain [6]. The HTM algorithm focuses on the sparse distributed representation of information, the encoding of input sensory data, and learning and prediction making based on the temporal changes in the input data and previous inputs [5].
As discussed in the introduction, the original HTM algorithm is divided into two main parts: the Spatial Pooler (SP) and the Temporal Memory (TM). The main purpose of the HTM SP is the encoding and production of a sparse distributed representation (SDR) of the input data. This is useful for feature extraction and visual data classification purposes [15]. The applications of the HTM SP include handwritten digit recognition [4], face recognition [12], speech recognition [11], gender classification [10], object categorization [2] and natural language processing [9]. The HTM TM is responsible for the learning and processing of temporal patterns and can be used for prediction, taking into account previous experiences [12].
HTM has a hierarchical structure, and an example 3-level HTM structure is shown in Fig. 1. Each level of HTM consists of certain regions with columns, and each column is comprised of cells. The columns in HTM are equivalent to neurons. The columns are connected to the input space through a dendrite segment with several synapses. Each synapse has a certain weight called the synaptic permanence.
The HTM SP consists of four main phases: initialization, calculation of the overlap value, inhibition and the learning process [6]. In the initialization phase, the potential inputs in the HTM regions are identified and a certain number of columns within the HTM regions are selected [3]. The potential input columns are those columns which are considered to receive the input data. The inputs are connected to the potential input columns through the dendrite containing potential synapses with a certain permanence value (weight). In the initialization phase, the weights are assigned randomly following a uniform random distribution. If this weight (permanence value) is greater than the threshold, the potential synapse is considered to be connected. If a connected synapse is connected to an active input, it is considered to be active connected. In the overlap phase, the number of active connected synapses is computed. In the inhibition stage, the k columns with the highest overlap values become active (assigned as high, 1), and the other columns are inhibited (assigned as low, 0). In the learning phase, the HTM SP weights of the synapses are updated based on Hebb's learning rule. After the update process, all phases except the initialization phase are repeated.
Mathematical framework of the HTM SP
The arrangement of the input of the HTM SP and the output space that is arranged into mini-columns is shown in Fig. 2. The parameter x_j denotes the j-th input neuron in the input space, and y_i refers to the i-th output SP mini-column in the output space, which is connected to a region of the input space with the potential connections.
The synapses of the i-th SP mini-column are located in a hypercube of the input space centered at x_i^c with edge length γ. The potential connections are defined in Eq. 1, where I(x_j; x_i^c, γ) = 1 for all x_j ∈ (x_i^c, γ), and z_ij ∼ U(0, 1) is selected randomly following a uniform distribution on [0, 1] [8,12]. The parameter ρ denotes the assigned percentage of inputs that are considered to be potential connections within the hypercube of the input space.
A synaptic permanence value (weight) is assigned to every synapse. The synaptic permanence from the j-th input to the i-th SP mini-column is represented by the matrix S_ij ∈ [0, 1] shown in Eq. 2. If the synapse is located within the region of potential inputs, the synaptic permanence value S_ij is drawn from a uniform random distribution between 0 and 1; otherwise, the synaptic permanence is 0, so the synapse is not connected.
All the connected synapses are represented by the binary matrix B shown in Eq. 3. Based on the synaptic permanence value, a synapse is either connected or not connected. If its value is greater than the threshold value θ_c, the synapse is connected and B = 1, and vice versa. The threshold θ_c determines the percentage of connected synapses.
Equation 4 refers to the process where the local inhibition neighborhood region N_i of the i-th SP mini-column is determined. The quantity |y_i − y_j| refers to the Euclidean distance between the mini-columns i and j, and the parameter φ controls the inhibition radius.
In the overlap phase of the HTM SP, the activation of the SP mini-columns for a particular input pattern Z is determined. The input overlap calculation is shown in Eq. 5, where β_i is a boosting factor that refers to the excitability of the SP mini-column.
In the inhibition phase, the activation of the SP mini-columns occurs. The activation depends on two conditions: the input overlap value of the SP mini-column should be above the threshold θ_s and within the top s percent of the other SP mini-columns in the inhibition neighborhood. The selection of the active columns is shown in Eq. 6, where the parameter α_i is the activity of the SP mini-columns, prctile is a percentile function, and N_O(i) = {o_j | j ∈ N(i)} with the target activation density s. The activation of the columns is implemented according to the k-winners-take-all rule, considering all mini-columns in the particular neighborhood.
In the original HTM algorithm, the parameter k can be changed based on the desired number of winning columns for a particular application [8]. However, in most of the existing hardware implementations of the HTM SP, k = 1 due to the limitations of Winner-Takes-All (WTA) circuits [7].
In the learning phase of the HTM SP, the feed-forward connections are learned using Hebb's learning rule and the boosting factor is updated. Hebb's rule for connection learning implies that the permanence value of the connections is either increased or decreased by the value ρ. The update process of the boosting factor is performed considering the time-average activity of the SP mini-columns ᾱ_i(t) and the recent activity of the SP mini-columns ⟨ᾱ_i(t)⟩ [8]. Eq. 7 shows the calculation of the time-average activity of the SP mini-columns at time t, where T is the number of considered previous inputs, and α_i(t) is the current activity of the i-th mini-column.
Equation 8 shows the calculation of the recent activity.
Equation 9 refers to the update process of the boosting factor, where η controls the adaptation of the HTM SP.
Rule-based approach

To improve the initialization phase of the HTM SP, we propose the rule-based approach for the weight assignment instead of the uniform weight distribution.
In the rule-based approach, we establish a connection between the input space and the synaptic permanence values (the weights of the synapses). Eq. 10 shows how the synaptic permanence weights are assigned in the rule-based approach; it is used instead of Eq. 2 and Eq. 3.
S_ij = 1, if j ∈ PI(i) and PI(i) > mean(PI); S_ij = 0, otherwise. (10)

In the rule-based approach, the synaptic permanence value is assigned based on the mean value of the inputs within the input space region with the potential connections. If the input is greater than the mean of the inputs within this neighborhood, the synaptic permanence S_ij is 1; otherwise, S_ij = 0.
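For reference, a compact software model of this rule is given below (Python); the toy input values are arbitrary and only meant to show the thresholding behavior of Eq. 10.

import numpy as np

def rule_based_permanence(inputs, potential_mask):
    # Eq. (10): within the region of potential inputs PI(i) of a mini-column,
    # a synapse gets permanence 1 when its input exceeds the mean of the
    # potential inputs, and 0 otherwise.
    threshold = inputs[potential_mask].mean()
    s = np.zeros_like(inputs, dtype=np.uint8)
    s[potential_mask & (inputs > threshold)] = 1
    return s

# Toy example: 8 inputs, the first 5 form the potential region PI(i).
x = np.array([0.2, 0.9, 0.4, 0.8, 0.1, 0.7, 0.3, 0.6])
mask = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=bool)
print(rule_based_permanence(x, mask))  # [0 1 0 1 0 0 0 0]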
In this work, we focus on the first three phases of the HTM SP: initialization, overlap and inhibition. Algorithm 1 summarizes the proposed approach. Lines 2-18 represent the HTM SP initialization stage, lines 20-22 refer to the overlap stage, and lines 24-27 correspond to the inhibition stage of the HTM SP.
Modified HTM SP
In this work, we investigate the modified HTM approach proposed in [12]. The difference between the original algorithm and the modified version of HTM is in the activation of the columns in the inhibition stage. The inhibition stage of the original algorithm is based on the WTA approach of the k largest overlap values. In the modified version of the HTM SP, the selection of the winning columns occurs based on the mean value of the overlaps in the inhibition region. If the overlap value of a column is greater than the mean value of the overlaps in the inhibition region, the column is activated; otherwise, it is inhibited. The modified approach for the inhibition region is represented in Eq. 11, which is used instead of Eq. 6.

Algorithm 1 The HTM SP algorithm
1: HTM SP initialization
2: Define the size of the input neighborhood with potential connections, x_i^c, γ, ρ, η, θ_c, and the size of the local inhibition region, θ_s
3: Determine φ by multiplying the average number of connected input spans of all the SP mini-columns by the number of mini-columns per input
…
7: … and (z_ij < ρ) do
8: PI(i) = j
9: if j ∈ PI(i) and PI(i) > mean(PI) then
10: S_ij = 1
11: else
12: S_ij = 0
13: B_ij = S_ij
14: for |y_i − y_j| < φ, i ≠ j do
…
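A minimal sketch of the mean-based activation rule (the rule behind Eq. 11) is given below in Python; the neighborhood structure and overlap values are arbitrary toy inputs.

import numpy as np

def mean_inhibition(overlaps, neighborhoods):
    # Modified inhibition stage: activate column i iff its overlap is above
    # the mean overlap computed over its inhibition neighborhood N(i).
    active = np.zeros(len(overlaps), dtype=np.uint8)
    for i, nbrs in enumerate(neighborhoods):
        if overlaps[i] > overlaps[nbrs].mean():
            active[i] = 1
    return active

# Toy example: 4 mini-columns sharing one global inhibition region.
o = np.array([3.0, 1.0, 4.0, 2.0])
hood = [np.arange(4)] * 4
print(mean_inhibition(o, hood))  # [1 0 1 0], since the mean overlap is 2.5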
Since it is proven in [12] that the modified HTM approach results in higher accuracy and reduced on-chip area and power consumption, in this work we focus on the modified HTM algorithm and check the effect of the rule-based initialization approach on the modified HTM hardware implementation. The overall architecture of the modified HTM is illustrated in Fig. 3. The receptor blocks correspond to the initialization and overlap calculation phases of the HTM SP, and the inhibition block refers to the HTM SP inhibition phase.
The inhibition phase consists of the memristive threshold calculation block and the threshold comparison block. In the threshold calculation block, the threshold value is determined as the mean of all input overlap values, which corresponds to the modified HTM SP approach. The values of the memristors in the threshold calculation block are the same. The threshold comparison block consists of a set of comparators and inverters. Each overlap voltage, corresponding to a particular connection in the inhibition block, is followed by a single comparator and inverter. The comparator is based on a low voltage amplifier with 6 transistors and a current source. If the overlap value of a single column V_RBj is greater than the overall mean of all overlaps V_AVG, the output of the comparator is low, and vice versa. To invert the output of the comparator and normalize it to a certain level, a CMOS inverter is applied. The output of the inverter is the output of the inhibition block for the particular column, which shows whether the column is activated or inhibited.
Random weight approach implementation
The difference between the traditional random weight approach and the rule-based approach occurs in the receptor block of the hardware implementation of the HTM SP. The implementation of the traditional approach is illustrated in Fig. 4.
The receptor block structure for the conventional HTM SP approach consists of the randomization of the synapse weights and the receptor block mean calculator. The randomization of the weights of the synapses refers to the initialization stage, where the weights are completely randomized. This is implemented with a set of memristive mean circuits, where the resistances of the memristors are assigned randomly. Separate sets of memristors in the block of random weight synapses refer to several random iterations to ensure that the weights are completely randomized. The receptor mean block performs the summation over all the columns for the overlap calculation. The parameter V_RBj corresponds to the final overlap of the particular column. The traditional summation of the overlap values in the HTM SP algorithm is replaced with a mean calculation in hardware, which does not have any impact on the performance of the modified HTM SP. The resistances of the memristors in the receptor block mean are the same.
Rule-based approach implementation
In this work, we propose the analog hardware implementation of the rule-based approach for the HTM SP. While the traditional hardware implementation of the HTM SP is based on memristive circuits, the rule-based approach is based on CMOS circuits. The proposed receptor block is shown in Fig. 5.
In the proposed architecture, the memristive mean calculation block and the CMOS comparator circuit correspond to the initialization phase of the rule-based HTM SP approach. The memristive mean calculation block calculates the mean of the inputs from the set of potential inputs. The voltage V_mean serves as the threshold for assigning the potential inputs as connected or disconnected. The CMOS comparator circuit compares the input of the particular column with the mean of all columns from the potential inputs. If the input is greater than the threshold, the output of the comparator V_comp is low, and vice versa. The CMOS analog switch block refers to the implementation of the overlap stage of the HTM SP. The voltage V_out = V_in1 if the comparator output V_comp is low, which corresponds to the case when the column is connected. The voltage V_out = 0 when V_comp is high, which means that the column is disconnected. The voltage V_out refers to the overlap value of the column.
System level implementation
In this work, we apply the HTM SP with two different initialization stage approaches to the face recognition problem. The overall system implementation of the face recognition module with the HTM SP is illustrated in Fig. 6. The input RGB images are read by the image sensor and applied to the input data controller. In this stage, the sampling process occurs if required, and the sampled images are preprocessed. In this method, we use only RGB to gray-scale conversion as a preprocessing step. In the existing HTM SP face and speech recognition systems [12], a standard deviation filter is applied in the preprocessing stage to improve the recognition process. However, in this work we show the effect of the different approaches for the initialization stage; therefore, we remove the filtering stage to obtain the actual results from the HTM SP.
After the controller, the image is applied to the HTM SP stage, which performs the encoding of the image and outputs a sparse binary image with the important image features preserved. The output data controller controls where the images are directed in the training and testing stages. In the training stage, the output from the HTM SP is stored in the training template storage. The training continues until all image class templates are stored. In the testing stage, the output data controller directs the images into the comparison circuit. The comparison circuit can be implemented as a memristive pattern matcher, which compares all templates from the training template storage with the current input image. Finally, the image class is determined.
The algorithmic implementation of the face recognition system approach is shown in Algorithm 2 in the Appendix.
System level simulation
The experiments for the system level simulation were performed in MATLAB for 3 different databases: AR, ORL and YALE. The AR database contains 100 classes of faces with 26 face images per class with various natural variations and occlusions [13]. The ORL database includes 40 classes with 10 images per class with occlusions, scale variations and rotations [1]. The YALE database contains 15 classes with 11 images per class, including different facial expressions and natural variabilities [14]. For the experiments in this work, 50% of the images were used for training and the other 50% for testing. Exemplar images for the random weight approach are shown in Fig. 7, and for the rule-based approach in Fig. 8. The recognition accuracy of the random weight and rule-based approaches with the variation of the size of the inhibition region is shown in Fig. 9. Fig. 9(a) illustrates the simulation results for the AR database, Fig. 9(b) for the ORL database and Fig. 9(c) for the YALE database. The rule-based approach improves the face recognition accuracy for the AR and ORL databases. However, for the YALE database, the recognition accuracy is decreased. This can be explained by the small number of classes and face samples in the YALE database. The average and maximum recognition accuracies for the two approaches are compared in Table 1.
Analog hardware simulation
The simulation of the proposed rule-based approach was performed in SPICE for the TSMC 180 nm CMOS technology. Fig. 10 illustrates the timing diagram for the proposed rule-based receptor block shown in Fig. 5. Fig. 10(a) shows the inputs to the receptor block. Fig. 10(b) illustrates the main input and the total mean of all the inputs; this main input is compared with the mean in the following stages. Fig. 10(c) shows the comparator circuit output and Fig. 10(d) illustrates the final output of a single receptor block. Table 2 compares the on-chip area and power dissipation for the random weight and rule-based approaches.
Discussion
As illustrated in Section 6, the proposed rule-based approach outperforms the traditional HTM random weight approach. This can be explained by the fact that the rule-based approach draws a correlation between the HTM SP weights and the input space. The main goal of the HTM SP is to create the SDR of the input. However, facial images contain a natural sparseness. The rule-based approach ensures the preservation of this natural sparseness of the images, which results in an increase of the recognition accuracy. In addition, this allows preserving structural information from the images, such as edges.
The hardware implementation of the rule-based approach requires a larger on-chip area and power consumption compared to the traditional random weight method. However, to achieve high recognition accuracy in the rule-based approach, the image filtering stage, which is performed in separate software, is not required. Moreover, the rule-based approach does not require the programming of the memristors to random weights, which would have to be achieved by combining either software-based or mixed-signal random number generation approaches. The programming of the memristors requires additional time and reduces the processing speed. Also, the high accuracy of the rule-based approach allows removing the learning phase from the HTM SP, which would otherwise be implemented using digital or analog circuits and require a significant amount of extra power and on-chip area [12].
Conclusion
In this paper, the hardware implementation of a rule-based approach for the initialization phase of the HTM SP has been proposed. The proposed rule-based approach achieves a significant increase in recognition accuracy. The maximum accuracy is approximately 86%, which is equivalent to the processing of the HTM SP with the learning phase. The on-chip area and power requirements to implement the rule-based initialization phase of the HTM SP are 13.31 µm² and 135 µW for a single receptor block, respectively. | 2018-03-14T04:18:47.000Z | 2018-03-14T00:00:00.000 | {
"year": 2018,
"sha1": "c03fdf0f43f393e04743f9858d0950c28256560e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1803.05131",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ce55a92cef48b66e97cb08573969c7a77d7c6e0a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
239662753 | pes2o/s2orc | v3-fos-license | On the Performance of a UAV-Aided Wireless Network Based on NB-IoT
In recent years, interest in Unmanned Aerial Vehicles (UAVs) as a means to provide wireless connectivity has substantially increased thanks to their easy, fast and flexible deployment. Among the several possible applications of UAV networks explored by the current literature, they can be efficiently employed to collect Internet-of-Things (IoT) data, because this non-stringent-latency, small-packet traffic type is particularly suited to UAVs' inherent characteristics. However, the implications coming from the implementation of existing technology in such kinds of nodes are not straightforward. In this article, we consider a Narrow Band IoT (NB-IoT) network served by a UAV base station. Because of the many configurations possible within the NB-IoT standard, such as the access structure and numerology, we thoroughly review the technical aspects that have to be implemented and may be affected by the proposed UAV-aided IoT network. For proper remarks, we investigate the network performance jointly in terms of the number of successful transmissions, access rate, latency, throughput and energy consumption. Then, we compare the obtained results for different trajectories known in the research community and study the impact of varying UAV parameters such as speed and height. Moreover, the numerical assessment allows us to extend the discussion to the potential implications of this model in different scenarios. Thus, this article summarizes all the main aspects that must be considered in planning NB-IoT networks with UAVs.
Introduction
Machine-type communications have witnessed a renewed interest from the scientific community thanks to advancements in technology and the plethora of new applications involving them. Statistical reports have already proven a steady increase in the number of machine-type or Internet of Things (IoT) links (see, e.g., https://www.statista.com/statistics/802690/worldwide-connected-devices-by-access-technology/ or https://www.ericsson.com/en/mobility-report/reports/november-2019/iot-connections-outlook, accessed on 7 September 2021), but the massive presence of IoT devices might not be the only major challenge to be addressed in the future. Other key challenges lie in the more differentiated and stringent requirements on communication performance (the demand) imposed by the several possible applications and use cases. These may include autonomous vehicles, wearables, industrial IoT for Industry 4.0, data monitoring, alarm detection, municipality services and many others, among which commonalities are few. These aspects call for new paradigms in network design. To avoid the densification and deployment of new terrestrial base stations, which need huge investments in capital and operational expenditures, a viable and largely foreseen solution can be found in mobile base stations (BSs).
Unmanned aerial vehicles (UAVs) (a.k.a. drones), where mobile BSs can be mounted, represent very interesting means to add the required flexibility and scalability for the future networks. UAVs have, in fact, the potential to fly on-demand and exactly where needed. Moreover, they are not tied to roads and are not affected by traffic congestion. Thanks to their versatility, flying network nodes like UAVs have gained an ever-increasing interest from researchers and standardization bodies. The 3rd Generation Partnership Project (3GPP), after considering the feasibility of UAVs being user equipment (UE) (i.e., end users of the cellular network), started approaching on-board radio access for UAVs (denoted with the acronym UxNB) at the beginning of Rel. 17 [2,3]. Because of the several activities considering the aid of UAVs to cellular users, we focus on the cellular radio access technology, as stated in the 3GPP documents [4]. In fact, if a UAV could have installed the same radio-frequency equipment with a similar protocol stack to target both IoT nodes and broadband users, it would be convenient for mobile operators. To this purpose, there exist a number of technologies targeting IoT applications which follow the fourth-generation (4G) numerology [5]. In particular, we will focus on NarrowBand-IoT (NB-IoT) for the design intended to target low-end IoT applications with low data rates, delay tolerance, massive connections, and extremely wide coverage [6]. To this end, this work can be considered an extension of [1], where an initial and simple approach was proposed to UAV-aided NB-IoT networks.
In this article, we thoroughly study different approaches to aerial support for NB-IoT networks, in order to provide a general overview of the challenges and potentials of these systems. To properly assess the network performance of UAVs serving NB-IoT nodes, we jointly consider as performance metrics the percentage of completely served nodes, the provided throughput, the latency to be expected and the IoT nodes' energy consumption. Our key contributions can be summarized as follows:
• We propose a UAV-aided NB-IoT scenario with several hundreds of nodes, located in different parts of the area so as to simulate diverse applications. Differently from other papers in the literature, the NB-IoT technology is considered in detail as specified by the 3GPP documents and studied in all of its features, including the signaling procedures and the parameter implications of the three NB-IoT coverage classes;
• we investigate the performance in terms of the number of completely served nodes, achieved network throughput, perceived latency and energy consumption of the nodes;
• we analyze the impact of using different UAV trajectories and the effect of varying UAV parameters, such as speed and height.
The article is organized as follows. In Section 2, we give an overview of the literature about UAV-aided networks in IoT applications. NB-IoT and its features are presented in Section 3. Section 4 describes the scenario and the network model. Final simulation results are reported and discussed in Section 5. Finally, Section 6 concludes the article.
Literature Overview
Initial studies on UAV-aided networks focused on key link-level considerations, and specifically on the characterization of path loss and its impact on the so-called air-to-ground (ATG) channel [7,8]. These activities were followed by works aiming to find an acceptable trade-off between coverage, capacity and connectivity, as in [9]. To be more specific, in [8] the effect of the user-UAV incident angle w.r.t. the ground plane is studied as a function of drone height. It is defined as the elevation angle θ, and its aperture may determine whether a link is free from obstructions and in LoS conditions. In the remainder of this article, we refer to the authors' ATG model. In recent years, the interest in UAV-aided networking gradually increased, introducing studies ranging from UAV deployment issues to grant adequate coverage [10,11] to more complex problems of UAV trajectory design for fair and satisfactory quality of service to ground users [12][13][14]. Differently from our objective, the majority of papers address a generic user or device, overlooking the implications and limitations of the specific protocol procedures. For example, dynamic trajectories in 3D space are studied in [12] with the purpose of connecting IoT nodes at their activation time. The authors jointly optimize the transmission power of ground nodes, the overall energy spent in movement and the choice of the next stop of each UAV. Some activities deal with the definition of an optimal trajectory for UAVs. In [13,14] the trajectory is optimized with the aim of maximizing the minimum user rate. Optimization algorithms have also been studied for UAV placement to achieve optimal energy efficiency [15,16] by minimizing power consumption and transmission delays, which are interesting requirements for IoT applications. However, they do not consider specific protocol constraints or overhead as found in the 3GPP standards.
Similarly to this study, some other works related to UAVs target the IoT field. As recent examples, references [17,18] employ UAVs to gather IoT data from remote areas in which devices experience low connectivity or capacity issues. In [17], the authors address the trajectory and resource allocation optimization for time-constrained IoT nodes, while the authors in [18] optimize the UAV 3D placement and resource allocation, minimizing the total transmission power of IoT devices and balancing the tasks of multiple UAVs. Reference [19] focuses on the modelling and optimization of the carrier sensing-based medium access control (MAC) layer protocol of UAV-aided IoT networks; this paper analyses the achievable throughput as the performance metric and CSMA/CA as the MAC protocol. Moreover, reference [20] studies different mechanisms for the energy- and delay-aware task assignment of UAVs with onboard IoT nodes. More specific to the technology investigated hereby, reference [21] studies trajectories for energy minimization in an NB-IoT context, reference [22] investigates connectivity strategies for a specific NB-IoT application, and [23] introduces a coverage analysis for UAV-aided NB-IoT networks. The NB-IoT protocol is considered in these articles too, since it belongs to the emerging machine-type technologies [6] and targets low data rates, delay tolerance, massive connections, and extremely wide coverage [24]. However, these works usually either lack a fine-grained protocol study or focus on the optimization of one metric above the others. On the contrary, we aim at providing a more general overview of the potentials and challenges of UAV-aided NB-IoT networks, with a deeper focus on protocol details rather than the optimization of a single performance metric. Moreover, optimization frameworks struggle to handle large input instances (e.g., a massive number of nodes in the scenario) because of excessive computation times, in terms of days or weeks. For example, references [13,14] consider fewer than 10 users in the service area. Furthermore, in [25], the 3D locations of UAVs are optimized for wireless-powered NB-IoT. Another work worth mentioning is [26]: the authors propose an NB-IoT model to collect underground soil parameters in potato crops using a UAV-aided network. The analysis in this case is mostly application-dependent, and therefore differs from our general evaluation with different metrics.
This activity can be considered an extension of [1]. To the best of the authors' knowledge, the literature still lacks a detailed model and protocol analysis of similar scenarios and setups. Therefore, the focus of this work is to extend and further discuss the system dynamics of NB-IoT networks served by a UAV, rather than to compare our approach with other research activities. This study helps us to extract the major impacts of the overall protocol stack of the NB-IoT technology on UAV-aided networks.
The Narrowband-IoT Technology
The NB-IoT technology is intended to address the needs of mMTC, and was initially standardized by 3GPP in Rel. 13 in 2016, with new functionalities introduced in the subsequent releases. It will be briefly described in this section, taking [5,27] as references. Particular emphasis is given to the uplink, which is the scenario of interest for this paper. The NB-IoT technical solution originates from the Long Term Evolution (LTE) technology, which has been substantially simplified to reduce the overheads, minimizing complexity, cost and consumption, while at the same time keeping its usual mechanisms, such as synchronization, radio access and resource assignment. In addition, the NB-IoT technology features substantial flexibility, allowing the deployment of NB-IoT cells by running a software update on already existing LTE cells. Indeed, three deployment options are available: standalone, to reuse 200 kHz GSM carriers; guard-band, to exploit the guard band of two adjacent LTE carriers; and in-band, where one LTE Physical Resource Block (PRB) is reserved for NB-IoT within an LTE carrier bigger than 1.4 MHz. NB-IoT implements several mMTC-oriented enhancements, such as narrowband transmission and enhanced power saving techniques to increase the battery life of UEs. In addition, up to 2048 and 128 repetitions in DL and UL, respectively, can be used to exploit the time variation of the radio channel, so that each replica can be decoded separately, or multiple replicas can be combined to further increase the reception probability. Furthermore, NB-IoT's coverage enhancement is provided by defining three coverage classes: normal, robust, and extreme. Classes are differentiated through thresholds based on the Reference Signal Received Power (RSRP), defined to introduce three levels of coverage extension. Such thresholds depend on the cell deployment, the propagation environment (i.e., outdoor, indoor, deep-indoor, underground), and the spatial distribution of devices. Network parameters such as the number of repetitions can be tuned separately for each class.
Before Rel. 15 introduced the time division duplex (TDD) operation mode, the frequency division duplex (FDD) mode was the only option for NB-IoT. In this paper we consider the latter, since it is the primary mode used in most commercial networks and enables the maximum performance. The FDD mode implies that different frequency bands are used for UL and DL transmissions. The channel bandwidth of NB-IoT is 180 kHz, which corresponds to one LTE PRB. In UL, subcarrier spacings of either 15 or 3.75 kHz are possible, thus providing either 12 or 48 possible subcarriers within a 180 kHz resource block. The 15 kHz spacing allows transmission of either single-tone or multi-tone (over up to 12 carriers) signals, while only single-carrier transmissions are possible for the 3.75 kHz grid. On the contrary, in DL, only the 15 kHz resource grid is used.
Random Access Procedure
The random access (RA) procedure of an NB-IoT UE, which is necessary to allow an uplink packet transmission, is composed of multiple steps. To start with, the UE scans the channels for the synchronization signals. When it gets correctly synchronized to the base station, the UE obtains first the Master Information Block (MIB) and then a number of System Information Blocks (SIBs), containing all the relevant information about the network, the cell and its resource allocations. Then, to connect to the cell, the RA procedure starts by sending a preamble (or Msg1) during one of the periodic random access windows (NPRACH). After sending Msg1, the UE waits for Msg2 from the eNB. Note that, at this stage, the network is not aware of possible overlapping among UEs' transmissions, which may happen if more than one UE chooses the same random preamble. With Msg2, the eNB notifies that it has received the specific preamble sequence and allocates NPUSCH resources for Msg3 to the UEs which transmitted that specific sequence. In Msg3, each of the UEs sends its own ID data and, in this case, collisions may actually happen, since UEs which used the same preamble sequence will use the same resources to send their Msg3. If the eNB can correctly receive at least one of the Msg3 packets, it will respond with Msg4 to complete the procedure. If the access procedure is carried out correctly by a UE, the resources for the UL and downlink (DL) transmissions are scheduled by the cell eNB, with data integrity ensured through the Hybrid Automatic Repeat Request (HARQ) processes.
Energy Consumption
Clearly, the exchange of these signalling messages affects the energy consumed by the nodes. Therefore, NB-IoT introduces appropriate techniques for power saving while keeping the synchronization with the cellular network. To this purpose, NB-IoT defines predefined intervals for discontinuous reception (DRX), enhanced DRX (eDRX) and the power saving mode (PSM) to minimize the energy consumption once a node's demand is fulfilled. In our case, considering all the different power saving techniques is not desirable. In fact, the UAB is a mobile BS, and therefore spends a limited time interval during which it grants connectivity to a group of NB-IoT nodes. Therefore, if a node has a packet (or a queue of them) ready to be transmitted, we assume it attempts the RA procedure as soon as it gets under the UAB coverage, and goes into PSM immediately after.
Uplink Channels, Parameters, and Implications
Only two channels are defined in the UL: the narrowband physical random access channel (NPRACH) and the narrowband physical uplink shared channel (NPUSCH). The NPRACH is used to trigger the RA procedure. It is composed of a contiguous set of either 12, 24, 36, or 48 subcarriers with 3.75 kHz spacing, which are repeated with a predefined periodicity that may take several discrete values between 40 ms and 2560 ms. The RA procedure starts with the transmission of a preamble, with a duration of either 5.6 ms or 6.4 ms (Format 0 and 1, respectively) depending on the size of the cell, and it can be repeated up to 128 times to improve coverage. A preamble is composed of four symbol groups, each transmitted on a different subcarrier. The initial subcarrier is chosen randomly by the UE, while the following ones are determined according to a specific sequence depending on the first one. Two UEs selecting the same initial subcarrier will thus collide for the entire sequence, as described in Section 3. A special mechanism can help resolving the collisions, and thus the access probability of a node u can be approximated as in Equation (1):

p_u = (1 − 1/N_RU)^(n_RA − 1), (1)

where n_RA is the number of nodes entering RA and N_RU is the total number of available subcarriers.
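A quick numeric reading of Equation (1) follows (Python); the closed form used is the standard no-collision approximation reconstructed above, under the assumption that all contenders pick their initial subcarrier independently and uniformly.

def access_probability(n_ra, n_sc=48):
    # Probability that none of the other n_ra - 1 contenders picks the same
    # initial subcarrier out of n_sc NPRACH subcarriers.
    return (1.0 - 1.0 / n_sc) ** (n_ra - 1)

# With the 48-subcarrier NPRACH configuration assumed in this paper:
for n in (1, 10, 48, 100, 200):
    print(f"{n:4d} contenders -> access probability {access_probability(n):.3f}")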
In case of standalone deployment, the NPUSCH occupies all the UL resources left available after the allocation of the NPRACH. The NPUSCH is used for UL data and UL control information. The eNB decides how many resources to allocate to the UEs depending on the amount of data to be sent, the modulation and coding scheme (MCS) used and the number of repetitions needed to correctly receive the data. The minimum resource block which can be allocated, referred to as a resource unit (RU), depends on the UE capabilities and the configured numerology. Specifically, in the case of 3.75 kHz subcarrier spacing and single-tone operation, which is the configuration assumed in this paper, the RU is 32 ms long. The number of RUs (ranging from one to ten) to be allocated depends on the transport block size (up to 2536 bits in Rel. 15) and the MCS chosen to meet the required success probability. In addition, the eNB specifies the desired number of repetitions. Thus, the total number of resources allocated to a UE is equal to

N_TOT = N_RU · N_REP, (2)

where N_RU is the number of resource units needed to send a packet of size B, depending on the chosen MCS, and N_REP is the number of transmission repetitions the UE is configured to send. Without loss of generality, in what follows we assume 3.75 kHz subcarrier spacing with 48 carriers allocated for the RACH. Furthermore, we need to set a number of other uplink parameters to define a resource unit and the resource availability on the NPUSCH. For example, we have to define the setting for the MCS used to determine the Transport Block Size (TBS), that is, the number of bits which can be transmitted given a certain number of RUs assigned in the NPUSCH. In our model, it holds I_TBS = I_MCS = 6 and, for clarity, we report in Table 1 only the possible TBSs for the selected scheme. Moreover, we need to set the uplink parameters for the three different coverage classes. For simplicity, we refer to them as coverage extension (CE), so that Normal becomes CE0, Robust CE1 and Extreme CE2. These are characterized by a different number of repetitions, NPRACH subcarrier assignment, periodicity and receiver sensitivity. Please note that the optimal choice of these parameter values is out of the scope of this paper, and most of them are taken from [5]. Table 2 summarizes our settings, which should be easily handled by a UAB.
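The scheduling arithmetic of Equation (2) can be sketched as follows (Python). The TBS entries below are placeholders standing in for Table 1 (not reproduced here), and the 4-repetition setting is an assumed coverage class value; both should be replaced with the actual configured values.

# Hypothetical TBS table for I_TBS = 6, keyed by the number of RUs allocated.
TBS_ITBS6 = {1: 88, 2: 176, 3: 256, 4: 392, 5: 504, 6: 600, 8: 808, 10: 1000}

def resources_for_packet(packet_bits, n_rep):
    # Eq. (2): N_TOT = N_RU(B, MCS) * N_REP, with N_RU the smallest
    # allocation whose transport block fits the packet.
    for n_ru in sorted(TBS_ITBS6):
        if TBS_ITBS6[n_ru] >= packet_bits:
            return n_ru * n_rep
    raise ValueError("packet does not fit in a single transport block")

# Example: a 300-bit report with 4 repetitions -> 4 RUs * 4 = 16 RUs,
# i.e., 16 * 32 ms = 512 ms on the NPUSCH (3.75 kHz single-tone RUs).
n_tot = resources_for_packet(300, n_rep=4)
print(f"{n_tot} RUs -> {n_tot * 32} ms")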
System Model
The scenario taken into consideration is intended to recreate a realistic deployment of IoT nodes, denser in service areas and absent in other locations. For example, IoT devices can be positioned at smart traffic junctions, in city parks, at waste collection points, in parking lots, or inside buildings, to name just a few. Thus, in practice, the IoT nodes form clusters, with nodes in close vicinity when they implement the same application requirements.
The spots scattered with NB-IoT nodes are considered not to be adequately served by the terrestrial infrastructure, and, for this reason, an unmanned aerial BS (UAB) equipped with NB-IoT radio access is sent to supply the service instead.
Network Scenario
To be specific, we model the scenario using a Poisson Cluster Process, namely the Thomas cluster process (TCP) [28], as proposed in [1] and conventionally done in the literature (see, e.g., [29]). The TCP is a stationary and isotropic Poisson cluster process generated by a set of offspring points independently and identically distributed (i.i.d.) around each point of a parent Poisson Point Process (PPP) [28]. In particular, the locations of parent points are modeled as a homogeneous PPP with intensity λ_p, around which offspring points are distributed according to a symmetric normal distribution with variance σ² and a mean number of m points per cluster. As a consequence, the intensity of the offspring points can be written as λ = λ_p · m. In our scenario, offspring points represent the IoT nodes asking for service, while parent points are only reference coordinates for cluster centers.
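As an illustration, a Thomas cluster process with the notation above (λ_p, σ, m) can be sampled in a few lines of numpy; the parameter values below are arbitrary placeholders, not the paper's simulation settings.

```python
# Minimal sketch of sampling a Thomas cluster process on an L x L area with numpy.
# lambda_p, sigma and m follow the notation in the text; values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sample_tcp(L=10_000.0, lambda_p=5e-8, sigma=200.0, m=40):
    n_parents = rng.poisson(lambda_p * L * L)           # homogeneous PPP
    parents = rng.uniform(0.0, L, size=(n_parents, 2))  # cluster centers
    nodes = []
    for p in parents:
        n_off = rng.poisson(m)                          # mean m offspring per cluster
        nodes.append(p + rng.normal(0.0, sigma, size=(n_off, 2)))
    nodes = np.vstack(nodes) if nodes else np.empty((0, 2))
    return parents, nodes

parents, nodes = sample_tcp()
print(len(parents), "clusters,", len(nodes), "IoT nodes")
```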
We simulate a square area of size L × L m², where offspring points are located according to the description above. We picture a sample scenario together with possible UAB trajectories in the following (see Figures 2-4 below). We consider a single UAB to decrease capital and operational expenditures and to simplify the final numerical evaluation. Please note that, in this model, the extension to the case of multiple UABs does not require additional complex settings, and therefore it is not a major focus of this work. However, for the sake of completeness, it will be discussed in the following.
We assume the UAV starts its flight from a fixed position, which can be considered a recharge station, to which it must return at the end of the trajectory. In this way, it can recharge or change its battery for the next flight. In this scenario, we assume that the capacity of the UAV battery is sufficient to enable a full round trip over any trajectory. Provided that the UAB carries no heavy payload other than the RF equipment and the flight time is no longer than half an hour (which is always the case here), this is reasonable [30,31]. The UAB is assumed to fly at a constant altitude above the ground between 200 m and 300 m (in compliance with EU regulations [32]).
Channel Model
Motivated by the small traffic demand, we assume the UAB-terrestrial BS backhaul link undergoes free-space propagation, and the capacity achieved is sufficient for both the UAB control links (for maneuverability and command and control signals) and data forwarding. In this article, the propagation model affects the UAB-ground node link, and is therefore known as the air-to-ground (ATG) channel. We compute the received power, P_rx, as a function of the transmit power, P_tx, as P_rx = P_tx − L_ATG (in dB), where the loss value, L_ATG, is statistically modeled by the ATG propagation reference considered here for drones in urban environments [7,33]. According to this model, connections between the drone and nodes can either be LoS or Non-LoS (NLoS). For NLoS links, the signals travel in LoS before interacting with objects located close to the ground, which results in shadowing effects. We denote as p_LoS the probability of a connection being LoS. The probability p_LoS at a given elevation angle, θ, is computed as p_LoS = 1 / (1 + α · exp(−β(θ − α))), with α and β being environment-dependent constants, i.e., rural, urban, etc., and adopted as given in [7,8]. Equation (3) determines for every link whether it is in LoS or NLoS condition, impacting then the value of ξ_LoS in Equation (4). The path loss model is given by L_ATG = 20 · log10(4π · f_c · d / c) + x_LoS · ξ_LoS + (1 − x_LoS) · ξ_NLoS, where x_LoS equals 1 in the case of realization of LoS links and 0 otherwise. ξ represents the shadowing coefficient, which depends on LoS or NLoS conditions as well and is set as described in [7,8]. Then, c is the speed of light, f_c is the center frequency, and d is the transmitter-receiver distance in meters. An additional penetration loss, η, as for indoor monitoring or basement applications, is considered. If the received power is above the receiver sensitivity, we consider the node to be in the connectivity range of the UAB. Since NB-IoT may have three coverage classes, we have three sensitivity thresholds, one for each class ce, denoted as P_ce,min. Once the device is connected and synchronized to its coverage class signalling, it can attempt to access the channel through the NB-IoT NPRACH (see Section 3.3), so that, if successful, it may be given resources to transmit its data. The number of resources assigned determines the packet size that the node can transmit in the given time window (see Section 3.3 and Table 1 for scheduling details). Note that, since the IoT nodes are the most limited devices in their characteristics, for example considering the maximum transmit power, we assume: • the connectivity range is defined by the uplink, • the downlink control communication is error-less.
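A compact sketch of this ATG link model is given below. The sigmoid p_LoS and the free-space-plus-shadowing loss follow the equations above, while the carrier frequency, the α and β constants, and the shadowing and sensitivity values are illustrative assumptions rather than the values adopted from [7,8].

```python
# Sketch of the ATG link model described above: the LoS-probability sigmoid and a
# free-space path loss plus shadowing term. The constants alpha, beta, the shadowing
# values xi_los/xi_nlos and the sensitivity are illustrative, not taken from [7,8].
import math, random

C = 3e8
F_C = 900e6                       # NB-IoT carrier (assumption)
ALPHA, BETA = 9.6, 0.28           # environment constants (placeholder)
XI_LOS, XI_NLOS = 1.0, 20.0       # shadowing terms in dB (placeholder)

def p_los(theta_deg: float) -> float:
    """Sigmoid LoS probability at elevation angle theta (degrees)."""
    return 1.0 / (1.0 + ALPHA * math.exp(-BETA * (theta_deg - ALPHA)))

def path_loss_db(d_m: float, theta_deg: float, eta_db: float = 0.0) -> float:
    los = random.random() < p_los(theta_deg)            # realize LoS/NLoS
    fspl = 20 * math.log10(4 * math.pi * F_C * d_m / C)
    return fspl + (XI_LOS if los else XI_NLOS) + eta_db

def in_coverage(p_tx_dbm: float, d_m: float, theta_deg: float,
                p_min_dbm: float) -> bool:
    """Node is connected if received power exceeds the class sensitivity."""
    return p_tx_dbm - path_loss_db(d_m, theta_deg) >= p_min_dbm

print(in_coverage(p_tx_dbm=23.0, d_m=500.0, theta_deg=30.0, p_min_dbm=-120.0))
```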
Traffic Model and Metrics
Each node will then request the UAB to transmit one uplink packet of size B. We assume NB-IoT nodes are already synchronized to the cellular network; they start their operations from the RA, exchanging the required NB-IoT signalling messages, and, if this is completed correctly, the required uplink resources are scheduled on the NPUSCH. As one can imagine, the procedure is robust but its completion is not guaranteed. In fact, the main obstacles may be found in the UAB movement, channel fluctuations, and collisions. The last two are application and environment dependent, while the first can be tuned by properly trading off the latency, energy consumption, throughput, and success rate metrics.
To later assess this trade-off, let us formulate here the performance metrics which are node-dependent; network-dependent metrics are formalized further below. In the following, the subscript u will indicate a generic node in the network.
Latency, that is, the interval elapsing between the instant when the node has a packet to transmit and the service completion, is computed as Δτ_u = τ_u,tx − τ_u,start, where τ_u,tx is the time instant at which the node transmits its packet and τ_u,start the instant at which this data first becomes available. For ease of evaluation, we assume τ_u,start equals the time at which the UAB starts its trajectory for all nodes u. Then, the throughput, t_u, achieved by a single node u whose transmission is considered successful depends on Δτ_u as t_u = B / Δτ_u. If a node u_n is not able to transmit its packet demand, then t_u_n = 0 b/s. Finally, the energy consumption has to account for the signalling related to the RA procedure. It holds E_u = V · (I_tx · T_tx,u + I_rx · T_rx,u + I_idle · T_idle,u + I_sleep · T_sleep,u), where V indicates the voltage with which the IoT node is powered, I_tx and I_rx the current needed in transmission and reception mode, respectively, I_idle the current drawn when the node stays idle, and I_sleep the current during PSM. Similarly, we indicate with T_tx,u, T_rx,u, T_idle,u, T_sleep,u the times spent by each node u in the corresponding operation mode. Of course, this depends on the message exchange described in Section 3 (which includes an alternation of transmission and reception modes). Some of these parameters are fixed and shown in Table 3 [34].
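The three per-node metrics translate directly into code. In the sketch below the current draws are placeholders for the Table 3 values from [34], and the mode timings are made-up inputs; only the structure of Equations (5)-(7) is taken from the text.

```python
# Sketch of the per-node metrics in Equations (5)-(7). Current/voltage values and
# the mode timings are illustrative; real figures come from Table 3 / [34].
def latency_s(tau_tx: float, tau_start: float) -> float:
    return tau_tx - tau_start                      # Delta tau_u, Eq. (5)

def throughput_bps(packet_bits: int, delta_tau: float) -> float:
    return packet_bits / delta_tau if delta_tau > 0 else 0.0   # t_u, Eq. (6)

def energy_j(v: float, times: dict) -> float:
    """E_u = V * sum over modes of I_mode * T_mode, Eq. (7)."""
    currents = {"tx": 0.22, "rx": 0.046, "idle": 0.006, "sleep": 3e-6}  # A, placeholder
    return v * sum(currents[mode] * times.get(mode, 0.0) for mode in currents)

dt = latency_s(tau_tx=120.0, tau_start=0.0)
print(dt, throughput_bps(512, dt),
      energy_j(3.3, {"tx": 0.3, "rx": 1.2, "idle": 5.0, "sleep": 113.5}))
```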
Different Trajectories for UABs
As mentioned, we analyze multiple possible trajectories for one UAB flying over clusters of IoT nodes. These trajectories follow a predefined path, since IoT nodes are placed in fixed positions and usually have a traffic demand that is easily predicted or periodic. In this way, we can avoid the static positioning of multiple drones, which would increase capital expenses. Moreover, a static deployment of hovering UABs raises further energy consumption issues on the UAB side, which are harder to control. We consider the following possible trajectory designs: • Circular path; • Paparazzi-like trajectory; • Flight following the solution of a Traveling Salesman Problem (TSP) over clusters' parent points.
Each of these trajectories has its pros and cons. Thanks to the wide coverage which can be achieved by the three coverage classes of NB-IoT, the circular path might be an option for its short path length. On the other side, if IoT nodes are not adequately covered, the paparazzi-like trajectory is able to scan the entire area. However, since the UAB has to serve clusters of fixed nodes, there is a third option. We can consider the locations of the parent points as reference coordinates and model the trajectory as a TSP [35] (a heuristic sketch is given below), as also proposed in [1]. In this way, we can observe the effectiveness of this choice compared to other known alternatives. The TSP determines, for a finite set of points whose pairwise distances are known, the shortest route connecting all points. The circular path has a radius equal to half the length of the circle circumscribing the service area. To better adapt the circular trajectory to the node deployment, we consider as the perimeter of the service area the maximum extension of the nodes' locations in all directions, centering it and the circular path accordingly. A similar implementation is repeated for the paparazzi trajectory, which also considers a sensing radius for the UAV to define the width of the serpentine, fixed to 500 m. Examples of trajectories and cluster positions for a scenario snapshot are represented in Figures 2-4. As one can easily see, the performance of the considered network depends both on the UAB mobility pattern and on the UAB NB-IoT cell configuration. In the following section we consider how the respective parameters (e.g., UAB height and speed, NB-IoT coverage class configuration, and cluster dimension) affect the performance.
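The sketch referenced above: a nearest-neighbour heuristic over the cluster centers, returning to the recharge station. The paper solves an actual TSP [35]; this greedy version only illustrates how a closed tour over parent points can be built.

```python
# Nearest-neighbour sketch of the TSP-style tour over cluster centers. The paper
# solves an actual TSP [35]; this greedy heuristic only illustrates the idea of a
# closed tour starting and ending at the recharge station.
import math

def tour(depot, centers):
    route, left = [depot], list(centers)
    while left:
        nxt = min(left, key=lambda c: math.dist(route[-1], c))  # closest center
        route.append(nxt)
        left.remove(nxt)
    route.append(depot)                      # return to the recharge station
    return route

def length(route):
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

r = tour((0.0, 0.0), [(3000.0, 1000.0), (500.0, 4000.0), (2500.0, 2500.0)])
print(length(r))
```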
Results and Discussion
In this section, we analyze our NB-IoT network performance and discuss the achieved results. Simulations are carried out with the parameters listed in Table 4. As previously mentioned, we jointly study a number of metrics, namely: • the access rate, R_acc; • the percentage of nodes completely served, S_suc; • the mean latency perceived by nodes, Δτ_avg; • the network throughput, T_net; • the mean energy consumption of NB-IoT nodes, E.
These metrics are computed via the following formulas: R_acc = n_suc / n_att (8); S_suc = (n_suc / n_tot) · 100 (9); Δτ_avg = (1/n_suc) · Σ_u (τ_u,tx − τ_u,start) (10); T_net = Σ_u t_u (11); E = (1/n_tot) · Σ_u E_u (12). We denote as access rate, R_acc in Equation (8), the fraction of the number of successful transmissions, n_suc, over the overall number attempted by any of the IoT nodes, n_att. In this way, we analyze how frequently a node u may get access to the channel. When the R_acc value gets close to zero, the number of access attempts has to increase before success. Equation (9) lets us identify how effective the UAB service is. In fact, S_suc counts the number of nodes successfully transmitting their traffic demand over the total number of IoT nodes present, n_tot, as a percentage. Then, Equation (10) computes the average latency, where the delay of node u spans from the time at which the request is started, τ_u,start, until the transmission succeeds, τ_u,tx. If a node is not able to transmit its packet, we do not consider it in the average latency computation. Equation (11) computes the overall network throughput by summing the throughput t_u of each node u. Finally, Equation (12) averages the energy consumed by all nodes present in the service area, n_tot. To start with, Figure 5 shows the percentage of IoT nodes served by the UAB, meaning that their traffic demand is completely fulfilled. Its value is represented while varying the UAB speed, v, and for the different trajectories. This figure has some relevant outcomes. First, the performance is not the same when varying speed; for each trajectory, it drops well below 50% of served nodes. This is clearly related to the time interval during which the UAB is able to maintain a robust radio connection with each node. In fact, if the received power falls below the receiver sensitivity before the signalling is completed and the node is scheduled an uplink resource on the NPUSCH (for example, if the UAB has flown away too fast), it will not be able to successfully transmit its packet. This effect becomes relevant when discussing latency, because the value of v creates a trade-off between low application delays and successful transmissions. At first glance, it is also evident that the TSP trajectory is able to serve a higher number of nodes at every UAB speed, reinforcing what was first proposed in [1]. In fact, this trajectory ensures the UAB gets in close vicinity to each cluster and each node, while minimizing the distance travelled. In contrast, the circular trajectory decreases the percentage of successful transmissions by 50% with respect to the TSP and Paparazzi ones, on average. In this case, the coverage extension and the time spent over each node do not allow sufficient service. The analysis of the other metrics will allow us to finally assess and discuss the overall trajectory-dependent performance. Figure 6 represents the access rate, R_acc, again while varying the UAB speed for different trajectories. Looking at increasing speeds and for the same trajectory, the value of R_acc drops for each trajectory at 25 m/s, as it did for the service in Figure 5. With the UAB flying faster, the occasions to attempt channel access decrease. From Equation (8), we observe this metric jointly evaluates the number of overall attempts tried by nodes and, among these, the ones which are successful. This might explain why the Paparazzi and circular trajectories have higher access rates; these can be achieved if the total number of attempts (the denominator) is small with respect to other trajectories.
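For reference, Equations (8)-(12) reduce to straightforward aggregations over per-node records; the field names and demo values in this sketch are hypothetical.

```python
# Sketch of the network-level metrics in Equations (8)-(12), assuming per-node
# records with the fields used below (hypothetical structure).
def network_metrics(nodes, n_att):
    served = [n for n in nodes if n["success"]]
    r_acc = len(served) / n_att if n_att else 0.0                 # Eq. (8)
    s_suc = 100.0 * len(served) / len(nodes)                      # Eq. (9)
    lat = sum(n["latency"] for n in served) / len(served) if served else float("nan")
    t_net = sum(n["throughput"] for n in served)                  # Eq. (11)
    e_avg = sum(n["energy"] for n in nodes) / len(nodes)          # Eq. (12)
    return r_acc, s_suc, lat, t_net, e_avg

demo = [{"success": True, "latency": 80.0, "throughput": 6.4, "energy": 0.9},
        {"success": False, "latency": None, "throughput": 0.0, "energy": 0.4}]
print(network_metrics(demo, n_att=5))
```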
If fewer nodes try to access the channel at the same time (perhaps because of connectivity issues), the probability of successful transmission increases (see Equation (1)). Another figure of merit is the average latency with which packets are transmitted, Δτ_avg, in Figure 7. As expected, the latency decreases with increasing speed for each trajectory; the NB-IoT nodes which can successfully transmit their packet are served faster because the UAB reaches them with a smaller delay. Furthermore, if we focus on the latency of the different trajectories, we notice the TSP path performs best, followed by the circular one. In fact, the distances covered by these two trajectories are considerably smaller than for the other. On the contrary, owing to its characteristic of scanning the entire service area, the Paparazzi trajectory incurs much larger delays than the other two. Furthermore, we are interested in the network throughput, T_net, that can be achieved. Numerical results are presented in Figure 8. As formulated in Equation (11), these results also depend on those of Figures 5 and 7 because of the dependence on service completion and latency in Equation (6). Interestingly, we can see three different trends with varying UAB speeds, v, for the three different trajectories: (i) the TSP has a maximum, (ii) the circular one does not change significantly, while (iii) the Paparazzi shows an improvement with increasing velocities. The maximum shown by the network throughput in the TSP trajectory corresponds exactly to the service-latency trade-off with UAB speed mentioned before. However, because of the low total service offered, the circular trajectory hardly shows any maximum. Similarly, the maximum cannot be appreciated for the Paparazzi path, but the network throughput appears to increase with increasing speed. In fact, we observed that, at lower UAB speeds such as 10 to 15 m/s, the average latency is so large (on the order of half an hour to an hour) that the maximum cannot occur within the chosen UAB speed range. As expected, the TSP trajectory shows a considerably higher throughput than the other two, up to 16 kb/s. Moreover, because of a much larger average latency, the Paparazzi trajectory achieves an even lower throughput than the circular one (which had a much worse percentage of served nodes). As the last performance metric, we study the impact of trajectories on the average energy consumption, E, in Figure 9. For decreasing values of the UAB speed, v, the energy consumed by NB-IoT nodes tends to decrease down to a few hundred mJ. Even if this appears to be a positive effect, we should consider the reasons behind this behaviour. To avoid cluttering, we now show the impact of two different UAB heights, h, for the same trajectory. The TSP path is chosen for its better results in terms of almost all metrics.
In this case, the percentage of served NB-IoT nodes with varying UAB speed, v, is plotted in Figure 10. The behaviour of the two curves with respect to speed is the same as already discussed for Figure 5. However, we can see a slight difference from before, that is, a less sharp decrease of successful transmissions when the speed is 25 m/s. We observe that for lower speeds the system performs better at lower UAB heights, h, while for higher speeds increasing heights improve the service. One might expect that, since for higher altitudes the transmitter-receiver distance increases on average, the curve with the larger UAB height would always perform worse than the other (as happens for values of v lower than or equal to 20 m/s). However, this does not account for the NB-IoT signalling message procedure and timing. In fact, for increasing values of h, not only does the transmitter-receiver distance get larger, but also the coverage range of the UAB increases (given by the angle of incidence of the UAB with the ground, or elevation angle in [7,8]). Consequently, the time interval during which, on average, an NB-IoT node remains in the coverage range of the UAB increases. As shown in Figure 10, this effect becomes more appreciable with increasing values of speed.
What has been discussed above is confirmed in Figure 11, showing the average energy consumption, E, with varying UAB speeds and heights. Being an average, its value depends on the number of nodes able to reach the UAB connectivity and enter the RA (i.e., signalling) procedure. The curves' decreasing trend with speed is the same for the two h values, and was already discussed for the TSP in Figure 9. A higher altitude allows a larger number of nodes to be in connectivity with the UAB, and to attempt and start the RA procedure. This increases the overall energy consumption, regardless of whether the transmission is successful or not. With lower speeds, more nodes are able to complete the signalling procedure, whereas with higher speeds the energy consumed may only cover the transmission of NB-IoT Msg1 and/or Msg2.
Our analysis of the energy consumption is corroborated by Figure 12. It represents the access rate, R_acc, for different UAB heights and speeds in the TSP trajectory. Here, the curves' trend with respect to speed, v, is the same as in Figure 11, and the curve with the higher height, h, has a lower access rate. This means that we have a larger number of attempts that does not correspond to the same number of successful transmissions. In fact, also due to the larger average transmitter-receiver distances with a higher-flying UAB, it becomes more difficult for nodes to complete the RA procedure.
Regarding the average latency, Δτ_avg, we observe a much less significant impact. In Figure 13, the value of Δτ_avg is affected more by increasing UAB speeds than by increasing UAB heights. To summarize our findings into three main points, which reflect the previously mentioned key contributions, we can state that:
• The simulations performed have accounted for all the described NB-IoT standard parameters, from the random access procedure to the precise scheduling of uplink transmissions with the three NB-IoT coverage classes;
• we jointly investigated a number of performance metrics related to the IoT field, namely (i) the number of completely served nodes, (ii) the achieved network throughput, (iii) the perceived latency, and (iv) the energy consumption of nodes. The numerical values achieved are related to the scenario under consideration and may scale accordingly. Different IoT applications may prefer one metric over the others as their main requirement, but our results show that pursuing one usually improves the other metrics as well;
• as evinced from the analyzed results, the TSP trajectory performs better than the others in terms of successful transmissions, latency, and throughput, making it the most promising among the three for cellular IoT (e.g., NB-IoT) applications. As expected, the prevailing trajectory in terms of performance is scenario-dependent. Moreover, from our results, one can extract the behaviour of UAV networks with respect to speed and height; in particular, the UAV speed should be kept under control, since it might abruptly decrease the performance metrics if too high.
Thanks to the number of outcomes and simulations (and the variety of real scenarios that could apply to our system model), we can discuss the results further and broaden our findings to different cases.
In the numerical evaluation, we first identify the relevant factors an operator must take into account when deploying this kind of system. First, the speed undergoes a trade-off between average perceived latency and throughput on one side and the total number of successfully served nodes on the other. In this sense, the NB-IoT protocol plays a relevant role, since too high a speed does not allow the completion of the RA procedure, and therefore the scheduling of uplink resources on the NPUSCH. Moreover, although a larger altitude would grant increased NB-IoT node connectivity, it also increases the transmitter-receiver distance, which again has a negative impact on the successful completion of the RA procedure.
From our results, we can also infer conclusions on trajectory selection. We can state that the circular trajectory, as would hold for any path that travels the perimeter of a convex figure, is neither an effective trajectory in terms of latency nor a robust choice for serving a massive number of IoT nodes. In fact, it is not able to follow the particular deployment of nodes in a given service area. On the other hand, a Paparazzi trajectory seems a good alternative, since it ensures that the overall service area is scanned. This might be a fair solution if a mobile operator does not know the locations of the nodes a priori. However, because this information is usually easy to retrieve and the Paparazzi trajectory becomes expensive in terms of energy consumption and latency, it is probably not desirable. This confirms the expectations on the robustness of the TSP path.
If we want to achieve 100% of nodes successfully served, the use of multiple UABs might be necessary. However, this can apply only to those trajectories reaching full node connectivity, as in the case of the TSP and Paparazzi. In fact, an n-th UAB, following the path of its predecessors, will address only those nodes not previously served. This would lighten the access procedure, since there will be fewer nodes contending and more radio resources available in a smaller time interval. In fact, the similar behaviour with respect to speed in Figures 5 and 6 suggests that the number of access attempts may be a tighter bottleneck than the time needed for signalling. Multiple UABs may lower the traffic load, allowing a larger number of nodes to get access to the channel.
Conclusions
In summary, we proposed a thorough network performance evaluation of a UAB-aided NB-IoT network through detailed simulations. We considered the different aspects of the NB-IoT protocol, including the signalling granting resources for uplink transmission and the NB-IoT coverage classes. Then, we jointly evaluated the system performance on the service offered, access rate, average latency, network throughput, and energy consumption metrics. UAB speed and height reveal a noticeable impact on the final performance, requiring a trade-off among the different metrics. Finally, we also observed the implications of different trajectory selections. A trajectory given by the TSP solution is the most suitable for clustered environments. The presented approach applies to IoT applications not constrained in time and does not consider UAV battery expiration before the end of the trajectory. Indeed, accounting for this would modify all the trajectories to include the time dimension, meeting the application requirements and an energy threshold before which the UAV needs to recharge its batteries. This study is left for future work, together with the introduction of multiple-UAB service. | 2021-10-21T15:17:37.752Z | 2021-09-09T00:00:00.000 | {
"year": 2021,
"sha1": "89613b87608037e17e5de534f53bafb876f88ac2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-446X/5/3/94/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cf7cfb670700952b27ea60819b0be4549e5e45d0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235645843 | pes2o/s2orc | v3-fos-license | Meta-Analysis of the Accuracy of Abbreviated Magnetic Resonance Imaging for Hepatocellular Carcinoma Surveillance: Non-Contrast versus Hepatobiliary Phase-Abbreviated Magnetic Resonance Imaging
Simple Summary
Ultrasonography is recommended as a standard surveillance modality, but the performance of surveillance ultrasound for detecting early-stage hepatocellular carcinoma (HCC) is limited. Motivated to provide a more sensitive method, abbreviated magnetic resonance imaging (AMRI) protocols have been introduced for HCC surveillance. We aimed to systematically determine the diagnostic performance of surveillance AMRI for detecting HCC. This meta-analysis of 10 studies comprising 1547 patients found that the pooled sensitivity and specificity of surveillance AMRI for detecting HCC were 86% and 96%, respectively. Hepatobiliary phase contrast-enhanced AMRI showed significantly higher sensitivities for detecting HCC than non-contrast AMRI (87% vs. 82%), but significantly lower specificities (93% vs. 98%). Therefore, surveillance AMRI had overall good diagnostic performance for detecting HCC and might be clinically useful for HCC surveillance. In addition, the AMRI protocol should be selected with consideration of the advantages and disadvantages of each protocol.
Abstract
We aimed to determine the performance of surveillance abbreviated magnetic resonance imaging (AMRI) for detecting hepatocellular carcinoma (HCC), and to compare the performance of surveillance AMRI according to different protocols. Original research studies reporting the performance of surveillance AMRI for the detection of HCC were identified in MEDLINE, EMBASE, and Cochrane databases. The pooled sensitivity and specificity of surveillance AMRI were calculated using a hierarchical model. The pooled sensitivity and specificity of contrast-enhanced hepatobiliary phase (HBP)-AMRI and non-contrast (NC)-AMRI were calculated and compared using bivariate meta-regression. Ten studies, including 1547 patients, reported the accuracy of surveillance AMRI. The pooled sensitivity and specificity of surveillance AMRI for detecting any-stage HCC were 86% (95% confidence interval (CI), 80–90%; I2 = 0%) and 96% (95% CI, 93–98%; I2 = 80.5%), respectively. HBP-AMRI showed a significantly higher sensitivity for detecting HCC than NC-AMRI (87% vs. 82%), but significantly lower specificity (93% vs. 98%) (p = 0.03). Study quality and MRI magnet field strength were factors significantly associated with study heterogeneity (p ≤ 0.01). In conclusion, surveillance AMRI showed good overall diagnostic performance for detecting HCC. HBP-AMRI had significantly higher sensitivity for detecting HCC than NC-AMRI, but lower specificity.
Introduction
Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related deaths [1], and the incidence of HCC in North America and Europe has risen rapidly over the last 2 decades [2]. Although the prognosis for patients with HCC is quite poor, with an overall 5-year survival rate below 20%, those detected at an early stage are eligible for curative treatments and may have improved survival [3,4]. Therefore, regular surveillance to detect early-stage HCC is generally recommended for at-risk populations [5,6].
Updated guidelines recommend ultrasonography (US) as a standard tool for HCC surveillance [5][6][7]. However, the sensitivity of US for detecting early-stage HCC is not high (47%) [8]. Given this limitation of US surveillance, the recent guidelines suggest alternative surveillance tools, including magnetic resonance imaging (MRI), in selected patients with a high probability of having an inadequate US examination [5,6].
Recent studies showed that surveillance MRI had a higher sensitivity than US for detecting early-stage HCC [9], and it might be more cost-effective than US in patients with virus-associated compensated cirrhosis with a sufficiently high risk of HCC [10]. However, due to its cost, the long exam time, and complexity, the broad application of complete MRI with full sequences is likely to remain limited in a surveillance setting. In this context, abbreviated MRI (AMRI) protocols using a small number of selected sequences that can reduce scanner time and present a lower cost have been introduced [11][12][13].
AMRI protocols can be divided into two categories according to the image sequences included, the first being contrast-enhanced hepatobiliary phase (HBP)-AMRI and the second being non-contrast (NC)-AMRI. HBP-AMRI is conducted after administration of a hepatobiliary agent, i.e., gadoxetate disodium, and consists of T2-weighted imaging (T2WI) and HBP imaging with or without diffusion-weighed imaging (DWI). NC-AMRI consists of up to three sequences from DWI, T2WI, and T1-weighted dual gradient-echo imaging, without the use of contrast media. Given the increased attention to AMRI in HCC surveillance, it is time to clearly determine the performance of AMRI, especially according to the type of protocol. Although a recent meta-analysis reported comparable performance between the two AMRI protocols [14], this result is limited in application to clinical practice for HCC surveillance as it not only includes studies conducted in surveillance patient cohorts, but also studies conducted in diagnostic cohorts that simulate the surveillance setting. Therefore, our study aimed to determine the performance of surveillance AMRI for detecting HCC, and to compare the performance according to different protocols.
Materials and Methods
This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline for conduct and reporting [15]. The following literature search, study selection, data extraction, and study quality assessment were independently conducted by two reviewers (both with ≥3 years of experience in meta-analysis and ≥9 years of experience in liver MRI), with all discrepancies being resolved by consensus.
Literature Search Strategy
Thorough searches of MEDLINE, EMBASE, and Cochrane databases were conducted to find studies investigating the diagnostic performance of surveillance MRI using an abbreviated protocol for the detection of HCC. The search query was developed to provide a sensitive literature search. In order to narrow down the number of relevant articles, the identified articles were manually evaluated. The search terms included "Hepatocellular carcinoma", "MRI", "abbreviate", "Surveillance", and "Screen" (Table S1). The beginning date for the literature search was 1 January 2000, and the search was updated until 3 December 2020. The search was limited to original studies on human subjects written in English.
Eligibility Criteria
After removing duplicates, the articles were reviewed for eligibility according to the following criteria: (1) population: patients at risk of HCC without a prior history of HCC; (2) index test: liver MRI with abbreviated protocols; (3) reference standard: clinical diagnosis or pathological diagnosis; and (4) outcomes: diagnostic accuracy, including both sensitivity and specificity of AMRI for detecting HCC. Patients at risk for HCC included patients with cirrhosis or chronic liver disease [5,6]. Surveillance was defined as the repeated use of the index test at a regular time interval for the detection of previously undiagnosed lesions [8], and studies performing evaluations for diagnostic purposes instead of surveillance were excluded from our study. The exclusion criteria were as follows: (1) review articles, case reports, protocols, editorials, or conference abstracts; (2) studies that were not within the field of interest; (3) studies not reporting sufficient information to construct a diagnostic 2 × 2 table of the imaging results and reference standard findings; and (4) studies with overlapping patient cohorts and data. Articles were first screened by titles and abstracts, and fully reviewed after the first screening.
Data Extraction
The following data were extracted: (1) study characteristics (authors, published year, study country, and study design (retrospective vs. prospective)); (2) subject characteristics, including sample size, age, sex, underlying liver disease, prevalence of HCC, and lesion size; (3) MRI techniques, including MRI sequences, scanner field strength, and interpretation method of AMRI (simulation vs. clinical practice); (4) details of reference standards; (5) surveillance strategies, including repeated surveillance, surveillance interval, and follow-up time; and (6) outcomes, i.e., the accuracy of AMRI for detecting HCC. To determine diagnostic accuracy, the numbers of true-positive, false-positive, true-negative, and false-negative hepatic lesions were counted. When these were not explicitly reported, data were manually extracted using the text, tables, and figures.
Evaluation of Study Quality
The quality of the included articles was evaluated using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool [16]. The QUADAS-2 tool assesses study quality according to the four different domains (patient selection, index test, reference standard, and flow and timing). Studies with a high risk of bias in any domain were considered to have a high overall risk of bias.
Summary Estimates Synthesis
To determine the performance of AMRI for detecting any-stage or early-stage HCC, the sensitivity and specificity with 95% confidence intervals (CIs) were calculated for each individual study. Early-stage HCC was defined as Barcelona Clinic Liver Cancer (BCLC) stage 0 or A [17], or solitary HCC <5 cm or up to three nodules <3 cm according to the Milan criteria [18]. The pooled sensitivity and specificity were calculated and the summary receiver operating characteristic curve was obtained using hierarchical models. Study heterogeneity was assessed by the Higgins I² statistic (I² > 50%: substantial heterogeneity). The presence of a threshold effect was evaluated by visual assessment of the coupled forest plots. In addition, we evaluated the presence of a threshold effect by the Spearman correlation coefficient between the false-positive rate (i.e., 1−specificity) and the sensitivity. A correlation coefficient >0.6 was considered to represent a considerable threshold effect.
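For illustration, the two heterogeneity checks described here (Higgins I² from Cochran's Q and the Spearman threshold-effect test) can be sketched as follows; the per-study sensitivities, false-positive rates, and variances are invented numbers, not data from the included studies.

```python
# Sketch of the heterogeneity and threshold-effect checks described above: Higgins
# I^2 from Cochran's Q, and the Spearman correlation between the false-positive
# rate and sensitivity. Inputs are illustrative per-study estimates and variances.
from scipy.stats import spearmanr

def i_squared(estimates, variances):
    w = [1.0 / v for v in variances]
    mean = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - mean) ** 2 for wi, e in zip(w, estimates))  # Cochran's Q
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

sens = [0.84, 0.88, 0.80, 0.90]
fpr = [0.05, 0.08, 0.02, 0.10]                      # 1 - specificity
print(i_squared(sens, [0.001, 0.002, 0.0015, 0.001]))
rho, _ = spearmanr(fpr, sens)
print("threshold effect" if rho > 0.6 else "no considerable threshold effect")
```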
To compare the performance of AMRI according to AMRI protocols (HBP-AMRI vs. NC-AMRI), the HBP-AMRI and NC-AMRI results of all studies were separated and analyzed. The pooled sensitivity and specificity of HBP-AMRI and NC-AMRI were calculated and then compared using joint-model bivariate meta-regression.
Deeks' funnel plot and Deeks' asymmetry test were used to evaluate the presence of publication bias. Stata version 16.0 (StataCorp LP, College Station, TX, USA) was used for the statistical analyses.
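The actual pooling used a hierarchical bivariate model in Stata; as a simplified illustration of the pooling step only, the sketch below applies a DerSimonian-Laird random-effects model to logit-transformed per-study sensitivities, with made-up counts.

```python
# Simplified pooling sketch: DerSimonian-Laird random effects on logit-transformed
# sensitivities. The study actually fits a hierarchical bivariate model (Stata);
# this univariate version only illustrates the pooling step. Counts are made up.
import math

def pool_logit(events, totals):
    # continuity-corrected logits and their variances
    y, v = [], []
    for e, n in zip(events, totals):
        e, n = e + 0.5, n + 1.0
        p = e / n
        y.append(math.log(p / (1 - p)))
        v.append(1 / e + 1 / (n - e))
    w = [1 / vi for vi in v]
    mean = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, y))
    tau2 = max(0.0, (q - (len(y) - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (vi + tau2) for vi in v]              # random-effects weights
    m = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1 / (1 + math.exp(-m))                     # back-transform to a proportion

print(pool_logit(events=[42, 51, 38], totals=[50, 60, 45]))  # pooled sensitivity
```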
Quality Assessment
The results of the quality assessment of the 10 included studies are shown in Figure S1. Of the 10 included studies, five had a high risk of bias in at least one of the four domains [12,26,[29][30][31]. In the patient-selection domain, three studies had an unclear risk of bias because it was unclear whether patients were consecutively or randomly enrolled [27,28,31]. In the reference standard domain, seven studies were unclear about whether the results of the reference standard were determined without knowledge of the index test results [11,12,26,28,[30][31][32], and two studies only used multiphase CT or MRI as a reference standard [30,31]. In the flow and timing domain, three studies had a high risk of bias because of an inappropriate time interval between the reference standard and the index test (i.e., approximately 1 year), and a failure to use the same reference standard [12,26,29].
Meta-regression Analysis
The meta-regression analysis results for the diagnostic performance of AMRI are shown in Table 3. Study quality and MRI magnet field strength were significant factors for study heterogeneity (p ≤ 0.01). Studies with a low or unclear risk of bias had lower sensitivity (82% vs. 89%) and higher specificity (98% vs. 92%) than those with a high risk of bias. In addition, studies using 1.5T MRI showed lower sensitivity than those using 3.0T or both 1.5T and 3.0T MRI (84% vs. 87%), but a higher specificity (98% vs. 93%). Studies exclusively enrolling patients with cirrhosis showed similar sensitivity to those also enrolling other patients (85% vs. 86%; p = 0.34).
No significant publication bias was found across the studies (p = 0.56) ( Figure S2).
Discussion
Our meta-analysis showed that surveillance AMRI had a good overall diagnostic performance for detecting HCC, with pooled sensitivities for the detection of any-stage and early-stage HCC of 86% (95% CI, 80-90%) and 81% (95% CI, 69-89%), respectively. Both HBP-AMRI and NC-AMRI protocols demonstrated acceptable diagnostic performance and would therefore be clinically useful for HCC surveillance.
We found that surveillance AMRI showed a high sensitivity for any-stage and early-stage HCC, without statistical heterogeneity across the studies (I² for sensitivity = 0%). The results of our analyses can be usefully applied to HCC surveillance in clinical practice because we restricted the scope of our meta-analysis to studies evaluating the performance of MRI for surveillance purposes. In our results, the pooled sensitivity of AMRI for early-stage HCC detection was 81%, which was remarkably higher than that of US reported in a previous meta-analysis (47%), while maintaining high specificity [8]. In addition, the performance of AMRI in our study was similar to that of MRI in a previous prospective study using a complete MRI with full sequences (sensitivity of 81% vs. 85.7%, respectively, and specificity of 97% vs. 97% for early-stage HCC) [9]. Given the advantages of AMRI examinations over full MRI examinations, such as reduced scanner time (i.e., approximately 10 min or less of scan time), reduced cost, less complexity, and a simplified workflow (i.e., no need for a power injector for contrast media), AMRI can be considered a cost-effective strategy. Likewise, recent studies suggested that AMRI could be the most cost-effective test for HCC surveillance for high- and intermediate-risk patients with cirrhosis [33], or in a conservative surveillance scenario [34]. Therefore, considering our results together with those of recent cost-effectiveness studies, AMRI may be clinically useful for HCC surveillance, but further prospective studies evaluating both the diagnostic performance and cost-effectiveness of AMRI in comparison with US in HCC surveillance cohorts are still necessary.
Our results showed that HBP-AMRI demonstrated significantly higher sensitivity than NC-AMRI, at the expense of significantly lower specificity, although both protocols showed acceptable performance for HCC surveillance. The higher sensitivity of HBP-AMRI is largely attributable to the high contrast-to-noise ratio of HBP, which aids in lesion detection. However, because dysplastic nodules and confluent fibrosis can also show HBP hypointensity, HBP-AMRI may result in false-positive diagnoses [32]. In addition, in patients with advanced cirrhosis, who can have reduced hepatocyte function, the hepatocyte uptake of contrast agents is limited, which may hinder the detection of HCC [35]. By comparison, NC-AMRI offers the benefits associated with avoiding the use of a gadolinium-based contrast agent, such as cost-saving and the elimination of the potential risk of long-term retention in human tissues [36], or nephrogenic systemic fibrosis [37]. However, NC-AMRI has a relatively low lesion-to-liver contrast, and some HCCs may be isointense to the liver on T2WI [38] or obscured by heterogeneous background liver parenchymal signal caused by advanced cirrhosis [35], which explains the relatively low sensitivity of NC-AMRI. In addition, DWI, the key sequence in NC-AMRI acquisitions, is vulnerable to artifacts, has blind spots, including the liver dome [39], and early-stage HCC may not exhibit diffusion restriction [40,41]. Taken together, AMRI protocols should be selected with consideration of the advantages and disadvantages of each protocol, and future studies are needed to determine which protocol is better for HCC surveillance.
Meta-regression analysis revealed that study quality as well as MRI magnet field strength were significant factors affecting study heterogeneity. As between-study differences in the use of blinding or in the way the outcomes are defined and measured may lead to differences in the observed measurements, study heterogeneity could be associated with different degrees of bias [42]. Regarding the MRI magnetic field strength, 1.5T MRI has a lower signal-to-noise ratio and lower lesion-to-liver contrast in comparison with 3.0T MRI, which may explain the relatively lower sensitivity of 1.5T MRI compared with 3.0T MRI [43,44].
There are some limitations to our study. First, we could not evaluate the performance of dynamic contrast-enhanced AMRI, which includes pre-contrast, arterial-phase, portal venous-phase, and delayed-phase imaging, in HCC surveillance because of a lack of eligible studies, i.e., studies assessing the performance of dynamic contrast-enhanced AMRI acquired for surveillance purposes. Second, the specificity was affected by substantial study heterogeneity; hence, caution is needed when determining the exact pooled specificity of AMRI. To overcome this limitation, we performed robust further analyses, such as meta-regression. In contrast, sensitivity was not affected by statistical heterogeneity, and sensitivity is generally considered to be of more importance than specificity in a surveillance setting. Third, although our study evaluated the diagnostic performance of AMRI for detecting HCC, the cost-effectiveness of AMRI should be evaluated before the implementation of AMRI in an HCC surveillance program. Fourth, the comparison between the performance of HBP-AMRI and NC-AMRI might have been statistically underpowered due to the small number of included studies and the indirect comparative design.
Conclusions
In conclusion, surveillance AMRI had a good overall diagnostic performance for detecting both any-stage HCC and early-stage HCC. For detecting HCC, HBP-AMRI had significantly higher sensitivity but lower specificity than NC-AMRI. Therefore, the selection of the AMRI protocol should be determined by considering the advantages of each protocol. | 2021-06-27T05:25:27.391Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "e66ca9881fcd9cba2464673a5ac565ade512ab1b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/13/12/2975/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e66ca9881fcd9cba2464673a5ac565ade512ab1b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225549095 | pes2o/s2orc | v3-fos-license | Design of Wide Interval Frequency Hopping Pattern for Frequency Hopping Mobile Ad-hoc Networks
The frequency hopping mobile ad-hoc network (FH MANET) is a communication network that combines frequency hopping (FH) technology and mobile ad-hoc network (MANET) technology. FH technology improves the anti-interference performance of the network, and MANET technology improves the mobility and survivability of the network. It can be networked autonomously without manual intervention or other network infrastructure. It has strong application value in complex battlefield environments and in settings with harsh conditions and high confidentiality requirements. The frequency hopping pattern (FHP) design is a key technology in the FH MANET, directly related to the network's resistance to interference and multipath fading. In this paper, a wide interval expansion method (WIEBM) based on m-sequences and the non-continuous tap L-G model is proposed. The experimental results demonstrate that the proposed scheme can effectively improve the anti-interference ability of the system. It has the ability to resist more than 30% broadband interference.
Introduction
A frequency hopping mobile ad-hoc network (FH MANET) consists of multiple special nodes. It can complete the self-organization of the network and realize specific network functions through information interaction. Nodes can join or leave the network at any time without infrastructure support. At the same time, the data is transmitted in frequency hopping (FH) mode to improve the system's anti-interference, anti-interception, and confidentiality capabilities. Therefore, it has a wide range of applications in military and confidential settings and under harsh conditions. For a frequency hopping system with a given hopping rate, designing the hopping sequence with a wide hopping interval can more effectively combat narrowband interference, tracking interference, broadband blocking interference, and multipath fading [1]. Therefore, the design of the frequency hopping pattern (FHP) has become one of the important issues in the study of FH MANET systems.
At present, there are mainly the following methods for designing wide-interval FHPs. The first is the de-intermediate frequency band method proposed in [2]. Its disadvantages are a small number of frequency hopping patterns, poor randomness, and poor resistance to deciphering. Fuming Hong and Shiping Zhang proposed the dual band (DB) method in [3]. The DB method is much better than the de-intermediate frequency band method, but its randomness and confidentiality are still not high. Reference [4] proposed three methods for constructing a family of wide-interval hopping sequences based on a family of prime sequences. Reference [5] describes a new class of methods, based on the DB method and the L-G model, to construct a family of wide-interval hopping sequences. Reference [6] proposed the WIDBS method; the performance of the generated sequence is significantly improved compared to the DB method. A random translation alternative method is proposed in [7]. A class of improved random translation alternatives for constructing wide-interval hopping sequences was introduced in [8]. In [9], a new method for constructing chaotic wide-interval hopping sequences is proposed. In [10], chaos-based hopping sequences are given wide spacing by the circular band method and the DB method, and the anti-interference abilities of the two are compared under different application backgrounds.
The scenario in this paper is an FH MANET, which has its own particularities. First, the working frequency band and channel bandwidth requirements are fixed. Second, the system uses TDMA (Time Division Multiple Access). Considering that m-sequences are simple to implement and have good correlation properties, this paper uses the non-continuous tap L-G model based on m-sequences. In this scenario, there are maximum and minimum frequency values, which can also be understood as a fixed number of frequencies. In this paper, we select the DB and WIDBS methods, which are based on m-sequences and are representative, for comparative analysis. On one hand, there are upper limits on the pattern interval designed by the above methods; the maximum interval is q/4, where q is the number of frequencies. On the other hand, since the frequency hopping ad-hoc network adopts TDMA, it is a synchronous system. The advantages of the WIDBS method in auto-correlation and cross-correlation are therefore greatly weakened, because the auto-correlation and cross-correlation factors have little effect on a synchronous system. Therefore, the interval parameter and the balance characteristic become the main considerations. This paper proposes the wide interval expansion method (WIEBM) to increase the frequency hopping pattern spacing, so that the system can resist broadband interference of more than 30%.
The remainder of this paper is organized as follows. Section II presents the WIEBM principle and theoretical analysis. Simulation results of the proposed method compared with the DB and WIDBS methods are introduced in Section III. The conclusions are drawn in Section IV.
WIEBM principle and theoretical analysis
In this paper, the non-continuous tap L-G model based on m-sequences is used to generate the hopping sequence. The m-sequence generator employs an n-stage shift register over the finite field GF(2), as shown in Figure 1. Details can be found in [8]. In Figure 1, U = (u_{n-1}, ..., u_0) indicates the user taps. Using the discontinuous tap L-G model, r (r ≤ n) stages are optionally selected from the n-stage shift register, which can form a family of distinct sequences. WIEBM method: a set of frequencies generated on a given bandwidth B is f = {f(k) | 0 ≤ k ≤ q − 1}, where q is the number of frequencies. Let F be a hopping sequence of one cycle length, where F(i) is the i-th hopping sequence element, i = 1, 2, ..., 2^n − 1. For F, if |F(i + 1) − F(i)| ≥ d + 1, the frequency hopping interval of the sequence F is d, and F is called a wide-interval hopping sequence.
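As a minimal illustration of the building blocks above, the following Python sketch generates an m-sequence with a 4-stage Fibonacci LFSR (taps for the primitive polynomial x^4 + x + 1, an illustrative choice rather than the paper's design), maps sliding 4-bit windows onto q = 16 frequencies, and checks the wide-interval condition |F(i+1) − F(i)| ≥ d + 1.

```python
def m_sequence(taps=(4, 1), n=4, seed=1):
    """One period of an n-stage Fibonacci LFSR over GF(2)."""
    state, out = seed, []
    for _ in range(2 ** n - 1):
        out.append(state & 1)                  # output the last stage
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1       # XOR of the tapped stages
        state = (state >> 1) | (fb << (n - 1))
    return out

def is_wide_interval(seq, d):
    """Check |F(i+1) - F(i)| >= d + 1 for the whole sequence."""
    return all(abs(a - b) >= d + 1 for a, b in zip(seq, seq[1:]))

bits = m_sequence()
ext = bits + bits[:3]                          # wrap around for the last windows
F = [8 * ext[i] + 4 * ext[i + 1] + 2 * ext[i + 2] + ext[i + 3]
     for i in range(len(bits))]                # naive map onto q = 16 frequencies
print(sorted(set(F)) == list(range(1, 16)))    # every nonzero 4-bit word once
print(is_wide_interval(F, d=3))                # the raw m-sequence is not wide yet
```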
Simulation results
In this section, we present simulation results to illustrate the performance of the proposed scheme. The simulation parameters are set as follows. The shift register has n bits. The number of frequencies is q. The sequence length L is 2^n − 1. The balance parameter is defined as σ = sqrt( (1/q) · Σ_{i=0}^{q−1} (g(i) − L/q)² ), where g(i) is the number of occurrences of the i-th frequency in a frequency hopping cycle. Ideally, g(i) = L/q, i.e., σ = 0. The closer σ is to 0, the better the balance and the more uniform the frequency distribution. The average hopping interval is defined as the average interval between two consecutive hopping frequencies. The WIEBM, WIDBS, and DB methods were compared in the simulation. In Figure 2, we show the balance parameters for the three methods at different intervals when the number of frequencies q is 16. As shown in the figure, with the increase of n, WIEBM and WIDBS are basically the same, and their balance parameters increase; both are larger than for the DB method. The closer σ is to 0, the better the balance and the more uniform the frequency distribution, so the DB method is best in this respect. Figure 3 presents the minimum spacing of the three methods. The shift register length n is 8. The number of frequencies q is 16. As shown in Figure 3, as the required frequency interval increases, the minimum interval of WIEBM increases, while those of the WIDBS and DB methods first rise and then fall. This shows that WIEBM achieves a larger frequency interval, because all frequency intervals of the WIEBM method meet the interval requirement, while the other two methods produce frequencies that do not. From this perspective, the anti-interference ability of the WIDBS and DB methods is about 4/16 = 25%, while that of WIEBM can reach 5/16 = 31.25% or even higher. Figure 4 compares the average spacing of the three methods. The shift register length n is 8. The number of frequencies q is 16. As shown in the figure, as the required frequency interval increases, the average interval of WIEBM increases, whereas those of the WIDBS and DB methods first rise and then fall. This shows that WIEBM has a larger average frequency interval, which also confirms that WIEBM has a stronger anti-interference ability compared to the WIDBS and DB methods.
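The evaluation metrics used in the figures can be computed as in the sketch below; the balance parameter follows the definition given above, and the sample sequence F is a hand-made example, not a WIEBM output.

```python
from collections import Counter

def balance_sigma(F, q):
    """Std. deviation of occurrence counts g(i) around the ideal L/q."""
    L, g = len(F), Counter(F)
    return (sum((g.get(i, 0) - L / q) ** 2 for i in range(q)) / q) ** 0.5

def intervals(F):
    """Minimum and average hopping interval of a sequence."""
    gaps = [abs(a - b) for a, b in zip(F, F[1:])]
    return min(gaps), sum(gaps) / len(gaps)

F = [0, 9, 3, 12, 6, 15, 1, 10, 4, 13, 7, 2, 11, 5, 14]
print(balance_sigma(F, q=16), intervals(F))
```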
Conclusion
In this paper, considering the scenario of an FH MANET system, based on the non-continuous tap L-G model and m-sequences, we applied the WIEBM method for generating FHPs. Simulation results show that the proposed method is superior to the WIDBS and DB methods in terms of frequency interval. Although there is an upper limit on the frequency interval, the WIEBM scheme breaks through the upper limits of the two other methods and improves the anti-interference ability of the system. We can also see that these parameters are mutually constrained, so a suitable solution must be adopted according to actual needs. This approach has practical guiding significance. | 2020-07-23T09:09:36.947Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "322a3bf049404fe64e66ef7e08074d6d9dc15914",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1584/1/012037",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3459bea57b0cdc707b71dbefef842aa4c7f4aebe",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
125217229 | pes2o/s2orc | v3-fos-license | Semi-leptonic ZZ / ZW Diboson Final State Search at 8 TeV with ATLAS ∗
Processes involving pairs of bosons in the final state play an important role in a wide range of measurements and searches at the LHC. Presented here is a search for high mass diboson resonances in the semi-leptonic ZZ/ZW channel, interpreted in terms of bulk Randall-Sundrum gravitons decaying to a pair of Z bosons, using 7.2 fb−1 of 8 TeV center of mass energy data produced by the LHC and collected with the ATLAS detector. Upper limits on the cross section times branching ratio are set at the 95% confidence level in a graviton mass range from 300 GeV to 2 TeV and a lower limit on the graviton mass is found to be 850 GeV.
The European Physical Society Conference on High Energy Physics 18-24 July, 2013 Stockholm, Sweden
Introduction
Many extensions of the Standard Model [1,2] predict the existence of heavy resonances that couple to pairs of electroweak gauge bosons (W and Z). The subsequent decays of the W and Z bosons, into either leptons or quarks, offer distinct high transverse momentum (p_T) signatures that allow for searches to be made. Furthermore, the semi-leptonic final state, in which the final state includes a high-p_T W or Z boson decaying hadronically, offers increased statistics for the probing of higher mass scales and an important venue for testing jet substructure techniques that are becoming increasingly important to searches at the LHC. Searches of this variety have previously been performed both at the Tevatron [3] and the LHC [4]. They have been surpassed by this latest search [5] at ATLAS [6] using 7.2 fb−1 of data at a center of mass energy of 8 TeV.
Search Summary
This analysis is focused on searching for diboson resonances in the ℓℓqq final state across the broad mass range of 300 GeV to 2 TeV. The Z → ℓℓ decay is initially identified as a pair of well reconstructed, isolated, and prompt same-flavor electrons or muons (requiring the pair to have opposite charge only in the muon channel) whose combined invariant mass is within 25 GeV of the Z boson mass. This removes the large multijet background, leaving Standard Model Z+jets as the primary background, with small contributions from tt̄ and Standard Model diboson production. To increase sensitivity to the resonant signal, the high-p_T characteristics of the resonance decay are used to impose kinematic selections on both the leptonic and hadronic decay products of the boson pair. The first selection used to identify the signal decay is a lower bound on the transverse momentum of the dilepton system. However, due to the large mass range covered in the search, the hadronic final state topology of the signal is very different below and above a mass of 1 TeV. Below 1 TeV, the W/Z → qq decay is reconstructed using two anti-k_T (R = 0.4) jets [7]. However, above 1 TeV, this decay becomes merged in the calorimeter and it becomes beneficial to reconstruct the W/Z boson as a single massive jet. This divides the search into resolved and merged selection regions. The signal is identified by using the azimuthal separation of the two highest-p_T jets and the invariant mass of the dijet system in the resolved selection, and the p_T and invariant mass of the leading jet in the merged selection. In both selections, no requirement is made on the missing transverse energy of an event. After performing the selection, the 4-body m(ℓℓ, jj) or 3-body m(ℓℓ, j1) invariant mass, corresponding to the mass of the diboson resonance candidate, is formed and used to perform the search for resonant excesses.
Results
After the optimization is performed using a Monte Carlo estimate of the background composition, the qualitative understanding of the background is confirmed by a comparison to data, as in Figure 1(a) and Figure 1(b). However, after the full selection, the final background is parametrized by performing a binned fit of the reconstructed invariant mass of the four-vector built from the leptonic Z → ℓℓ boson decay and the hadronic W/Z → qq̄ decay (a single-jet or two-jet system), using the function f(m; p0, p1, p2, p3) = p0 · (1 − x)^p1 / x^(p2 + p3·ln x), where x is the reconstructed diboson resonance mass m in units of 8 TeV and p0, p1, p2, p3 are four free parameters. This background estimation
is used to initially perform a search with the BUMPHUNTER algorithm [8] in all mass windows for the largest excess in data above the smooth background hypothesis. In both the resolved and merged selection regions, no significant deviation is found from the smooth background hypothesis. Since these results are consistent with a background-only hypothesis, Bayesian limits are set on σ(pp → G*) × BR(G* → ZZ) for the benchmark bulk Randall-Sundrum G* signal. These limits, shown in Figure 1(c), are generated for signal mass points between 300 GeV and 2 TeV using the fitted background estimation, with systematic uncertainties from the background fit and signal modelling integrated into the likelihood function with nuisance parameters.
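To make the empirical background shape and the window scan concrete, the sketch below fits the four-parameter function to a toy mass spectrum and looks for local Poisson excesses in fixed-width windows, in the spirit of (but much simpler than) the BUMPHUNTER algorithm cited above. This is a minimal illustration, not the ATLAS analysis code: the toy data, initial parameter guesses, and the three-bin window scan are all assumptions made for the example.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import poisson

SQRT_S = 8000.0  # collision energy in GeV; x = m / 8 TeV as in the text

def smooth_bkg(m, p0, p1, p2, p3):
    # Empirical background shape: f(m) = p0 * (1 - x)^p1 / x^(p2 + p3*ln x)
    x = m / SQRT_S
    return p0 * (1.0 - x) ** p1 / x ** (p2 + p3 * np.log(x))

# Toy binned spectrum between 300 GeV and 2 TeV (50 GeV bins).
rng = np.random.default_rng(0)
centers = np.arange(325.0, 2000.0, 50.0)
counts = rng.poisson(smooth_bkg(centers, 2.0, 15.0, 3.0, 0.2)).astype(float)

# Binned least-squares fit of the four free parameters (a stand-in for the
# likelihood fit used in the analysis).
popt, _ = curve_fit(smooth_bkg, centers, counts, p0=[2.0, 15.0, 3.0, 0.2])
expected = smooth_bkg(centers, *popt)

# Crude bump hunt: Poisson p-value for each 3-bin sliding window.
width = 3
for i in range(len(centers) - width + 1):
    obs = counts[i:i + width].sum()
    exp = expected[i:i + width].sum()
    pval = poisson.sf(obs - 1, exp)  # P(N >= obs | expected background)
    if pval < 0.01:
        print(f"window at {centers[i]:.0f}-{centers[i + width - 1]:.0f} GeV: p = {pval:.3g}")

The real algorithm additionally corrects these local p-values for the look-elsewhere effect across all window positions and widths, which this sketch omits.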
Conclusion
A search for heavy diboson resonances has been performed using 7.2 fb−1 of pp collision data taken in 2012 by the ATLAS experiment at a center-of-mass energy of 8 TeV. No evidence of resonance-like excesses above the smooth background hypothesis is observed in the reconstructed resonance mass spectrum, and 95% confidence level upper limits are set on the production cross section for signal masses between 300 GeV and 2 TeV. This constraint is used to set a lower limit on the bulk Randall-Sundrum G* mass of 850 GeV.
Figure 1: The comparison of the Monte Carlo estimated backgrounds to data for combined electron and muon channels for the resolved (a) and merged (b) selections with theoretical signal predictions. Note that the search is performed using a background estimation taken directly from a smooth fit to data. Figure (c) shows the expected and observed 95% confidence level upper limits on σ(pp → G*) × BR(G* → ZZ) [5]. | 2019-04-22T13:10:41.944Z | 2014-03-18T00:00:00.000 | {
"year": 2014,
"sha1": "6742fbad4ea520ac93709bfbe05b36e4dfd71638",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/180/112/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6742fbad4ea520ac93709bfbe05b36e4dfd71638",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
199056362 | pes2o/s2orc | v3-fos-license | An Effective Protocol for Proteome Analysis of Medaka (Oryzias latipes) after Acute Exposure to Ionizing Radiation
All terrestrial organisms are subject to evolutionary pressures associated with natural sources of ionizing radiation (IR). The legacy of human-induced IR associated with energy, weapons production, medicine, and research has changed the distribution and magnitude of these evolutionary pressures. To date, no study has systematically examined the effects of environmentally relevant doses of radiation exposure across an organismal proteome. This void in knowledge has been due, in part, to technological deficiencies that have hampered the delivery of quantifiable, environmentally relevant IR doses and the sensitive detection of proteomic responses. Here, we describe a protocol that addresses both needs, combining quantifiable IR delivery with a reliable method to yield proteomic comparisons of control and irradiated Medaka fish. Exposures were conducted at the Savannah River Ecology Laboratory (SREL, in Aiken, SC), where fish were subsequently dissected into three tissue sets (carcasses, organs, and intestines) and frozen until analysis. Tissue proteins were extracted, resolved by Sodium Dodecyl Sulfate-Polyacrylamide Gel Electrophoresis (SDS-PAGE), and each sample lane was divided into ten equal portions. Following in-gel tryptic digestion, peptides released from each gel portion were identified and quantified by Liquid Chromatography-Mass Spectrometry (LC-MS/MS) to obtain the most complete, comparative study to date of proteomic responses to environmentally relevant doses of IR. This method provides a simple approach for use in ongoing epidemiologic studies of chronic exposure to environmentally relevant levels of IR and should also serve well in physiological, developmental, and toxicological studies.
Introduction
Ionizing radiation (IR), from other than natural sources, has become an aspect of daily life over the course of the last century. While sites such as Fukushima and Chernobyl are well-known and well documented sources of exposure to radiation, there remain over 1000 locations within the United States alone that are contaminated with radiation and have yet to be sufficiently studied to fully understand the risk to human health and to the environment. Testing and manufacturing related to nuclear proliferation (for both energy and weapons) and rapid increases in the use of nuclear medicine [1], are becoming increasingly identified as sources of radionuclide contamination. Such contamination can have long lasting effects on public health and the environment, particularly in aquatic systems.
The effects of radionuclides on organisms can vary depending on the dose and exposure time and may result in changes in morphology and functional activity, both at the cellular and system levels.
Experimental Design
The methods described for this study provide a simple approach to detect proteomic responses to irradiation across different tissues (carcasses, organs and intestines) in Medaka. The in-gel digestion protocol described is an economical, easy, and reliable protocol that could be applied to other epidemiological studies with large sets of samples. We used the in-gel digestion method to compare proteins in control samples as well as samples irradiated at a moderate level (500 mGy), since previous research has shown that exposure to this level is high enough to induce detectable changes but low enough not to immediately kill fish [9,28,29]. Our goal was to ascertain the optimal protocol for assessing proteomic changes to environmentally relevant levels of IR by conducting an experiment with both sham control and 0.5 Gy of exposure. This comparative dataset provides a baseline for use in future physiological, developmental, and toxicological studies at levels of resolution that have previously been unattainable. This method uses state-of-the-art techniques to allow us to obtain robust results which we describe in detail as follows: (1) How exposure to moderate levels of IR was accomplished; (2) how to prepare samples for comparative analysis, including in-gel digestion; and (3) how the data were handled to obtain basic biological information. This protocol was developed for use in exploratory analysis after exposure to stressors such as IR. The results of this initial exploratory analysis also demonstrate the need for additional strategies to obtain a more detailed understanding of the organismal response.
1. Divide the fish into 2 groups: 6 adult fish for the control group and 6 for the treatment group.
2. Place each group in small plastic containers with 0.5 L of filtered water.
3. Expose the treatment group to ionizing radiation at the Savannah River Site calibration facility using a 1300 Ci Cs-137 source calibrated to a dose rate of 0.028 Gy/min; a total exposure of 17.9 min yields the target dose of 0.5 Gy (the exposure-time arithmetic is sketched after these steps). An additional sham control group (no exposure) is subjected to the same protocol to account for handling stress.
4. After exposure, fish are returned to the laboratory and kept in tanks for 24 h.
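The exposure time in step 3 follows directly from the target dose and the calibrated dose rate; a minimal check of that arithmetic is sketched below (the function name is ours, not part of the protocol).

def exposure_minutes(target_dose_gy: float, dose_rate_gy_per_min: float) -> float:
    # Time needed at a constant, calibrated dose rate to reach the target dose.
    return target_dose_gy / dose_rate_gy_per_min

# Protocol values: 0.5 Gy at 0.028 Gy/min -> ~17.86 min, quoted as 17.9 min.
print(round(exposure_minutes(0.5, 0.028), 2))  # 17.86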
1. Euthanize the fish at 24 h post exposure according to the requirements of Animal Care and Use at the University of Georgia, AUP #A201305-018-Y1-A0 part C: Experimental procedures: "Euthanasia of animals: Animals sacrificed for proteomic tissue research (or sick diseased fish) will be euthanatized by an overdose via immersion in anesthetic solution. A concentration of 250-500 mg/L (5-10 times the anesthetic dosage) is effective for Medaka according to AVMA 2013 guidelines [30]. Medaka will be left in the anesthetic solution for a minimum of 10 min after cessation of opercular movement. Tissues used for the radiation/proteomics study will be frozen in liquid nitrogen and stored at −80 °C until extracted for proteomics analysis. Euthanasia of animals will occur only at the Savannah River Ecology Laboratory". Note: The full AUP document can be found in the Supplementary Material File S1.
2. Place the fish into a glass petri dish (bottom or cover) and, using a dissecting microscope, open the fish with a scalpel, starting from the anus and continuing to the beginning of the head. Note: All the instruments and glassware must be clean, pre-washed with 50% methanol twice and 100% methanol once in order to avoid contamination of the samples. Use of plastic should be avoided, as it may result in contamination of the tissues with phthalates, complicating the mass spectrometry analysis.
3. Using dressing forceps, open the ventral area of the fish, take out the kidney, heart, liver, and gonads, and put them together in a previously labeled plastic zip bag. This will be the organs group. CRITICAL STEP The tissues have to be kept on ice until they are frozen to avoid degradation and/or expression of proteins associated with death.
4. Separate the intestines and stomach and place them in another zip bag, and finally place the carcass (muscle, brain, eyes, gills, spinal cord, fins, and scales) in a third plastic bag. Figure 1 shows a dissected Medaka highlighting the different tissue groups.
5. Using liquid nitrogen, freeze all the tissues for 30-60 s. PAUSE STEP The samples are stored at −80 °C until the next step.
Preparing Protein-Rich Powder (7 h)
The tissues need to be delipidated and prepared for total protein analyses as described previously [31] with some modifications. Note: Starting at this point all glassware must be new, and pre-washed twice with 50% methanol and once with 100% methanol to avoid contaminants that will interfere during the mass spectrometry analysis.
1. In a mortar and pestle, add the sample and 3 mL of the homogenizing solution using glass Pasteur pipets and homogenize the tissue.
2. Transfer the homogenized sample into a 15 mL glass tube. Rinse the mortar and pestle with 3 mL of the homogenizing solution and add the rinse to the homogenized sample.
3. Allow the sample to incubate at room temperature on a vertical rocker for 3 h.
4. Centrifuge the sample for 15 min at 4 °C at 3500 rpm. CRITICAL STEP The centrifugation generates heat, and thus refrigeration is necessary to avoid degradation of proteins.
5. Decant the supernatant (glycosphingolipids) and then dry down the protein pellet using vacuum centrifugation for approximately 15-20 min. CRITICAL STEP Do not over dry. Over drying will result in incomplete/difficult homogenization and can cause degradation of the samples. Note: If there is any interest in analyzing the glycosphingolipids, the supernatants from steps 5 and 8 should be preserved in a pre-cleaned glass tube, dried under nitrogen, and kept at −20 °C for further analyses.
6. Using Pasteur pipets, cover the sample in the bottom of the tube with homogenizing solution and incubate on the rocker for an additional 2 h at room temperature.
7. Add 1 mL of cold (4 °C) Milli-Q water and mix using the vortex.
9. Add 4 mL of cold (4 °C) acetone, mix using the vortex, and incubate on ice for 15 min.
10. Centrifuge the sample for 15 min at 4 °C at 3500 rpm. Decant the supernatant into waste and dry down the protein pellet.
11. Repeat steps 9-10.
12. Freeze the protein powder (−80 °C) and lyophilize overnight.
13. Once dry, store the protein powder at −20 °C. PAUSE STEP The protein-rich powder can be kept at −20 °C for several months (in our case we have stored samples for up to 3 years with no significant change in the analyses). Note: The glass tubes have to be well capped to avoid humidity getting into the samples.
SDS-Electrophoresis (1.5 h)
1. Weigh 3-5 mg of protein-rich powder and resuspend with Tris-HCl buffer. Insoluble material is removed by centrifugation. Note: If necessary, use a microcentrifuge tube pestle before centrifugation to get a better homogenization of the sample. Prior to use, clean the pestle with 70% ethanol.
2. Determine the protein concentration using the Pierce BCA protein assay kit with bovine serum albumin as the standard.
3. Prepare aliquots of 100 µg of protein and dry under vacuum centrifugation.
4. Add 15 µL of Milli-Q water to dissolve the dry sample and add the same volume of the 2× Laemmli Sample Buffer. Mix with the vortex and centrifuge. Note: The final volume cannot be more than 35 µL; this is due to the capacity of the loading wells being 40 µL. CRITICAL STEP Observe the color of the mix; if yellowish, add 2 µL of 100 mM NaOH at a time and mix until it turns blue. Mix using the vortex and centrifuge again.
5. Boil the samples for 5 min and then put the samples into a refrigerator set at 7 °C for 5 min. Note: Be sure to cap the tubes well, or the sample will evaporate.
6. Add 10 µL of protein ladder in the first well. Add protein samples, leaving an empty well between samples; this will simplify cutting out the individual gel sections for the in-gel digestion step.
7. Place the gel in a clean, clear plastic container and add enough deionized water to cover it; swish back and forth 5 times. Pour out the water. Repeat the wash at least 3 times. Note: The plastic container must be dust and detergent free. It should be cleaned prior to use with 70% ethanol and allowed to dry.
9. Pour off the last water wash and add enough Instant Blue Coomassie stain to cover the gel; leave for 30 min to 1 h with gentle shaking. Note: Be sure that the gel can move freely in the staining solution to facilitate diffusion. Usually 100 µg of protein will stain well after 30 min.
10. Discard the stain solution and wash 2-3 times with deionized water.
11. PAUSE STEP Keep the gel in water inside a Ziploc bag until the next step to avoid the gel drying out.
3.5. In-Gel Digestion (48 h after Full Destaining of Gel Pieces)
1. Place the gel on sanitized glass. Use a razor blade to remove the top and bottom of the gel. Note: Prior to use, clean the glass with 50% methanol and 100% methanol, and then once with isopropanol; then let it dry.
2. Carefully cut each sample lane into 10 equally sized sections and then cut each section into smaller pieces (about 1 × 1 mm). Place all the gel pieces for each section into an Eppendorf tube. Note: Label the tube with sample and fraction information, i.e., control, fraction 5 can be CF5.
3. Add 500 µL of destaining solution to the gels and put on a rocker. Replace the solution 2-3 times during the day or let it rock overnight at room temperature. Repeat this until the gels are completely destained. NOTE: The time to completely destain the pieces of gel will depend on the frequency of changing the destaining solution, but 24 h is the fastest that the gel pieces can be destained.
4. Once the gel pieces are completely destained, remove the destaining solution from each tube, using a different tip for each tube, then add 150 µL of HPLC grade water, and wait 5-10 min. Pull off the water. Note: Starting at this point the tips and tubes used should be new and not autoclaved, due to concerns about contamination that is detectable in the mass spectrometer.
PAUSE STEP Properly capped tubes containing dried gel pieces can be stored at room temperature until the next step.
9. Add 150 µL of 100 mM DTT and incubate at 65 °C for 1 h. Remove the samples from the bath, let them cool to room temperature, and pull off the solution.
10. Add 150 µL of 55 mM iodoacetamide for 1 h at room temperature in the dark. Pull off the solution.
11. Wash the gel pieces with 150 µL of Ambic solution for 5-10 min. Add 150 µL of ACN, wait 5-10 min, and pull off the solution.
12. Dry the gels under vacuum centrifugation for 45-60 min.
1. Add trypsin solution at a 50:1 (w/w) protein/trypsin ratio and then add enough Ambic to a final volume of 125 µL to ensure that the dry gel pieces are completely submerged. Note: For 100 µg of protein use 2 µg of trypsin (100 µL of trypsin solution). To ensure a better distribution and absorption of trypsin into the gels, we mix 100 µL of trypsin solution with 2.4 mL of Ambic to obtain a total of 2.5 mL (125 µL per sample × 20 samples = 2.5 mL). Vortex and add 125 µL of the mix (this master-mix arithmetic is sketched after these steps).
3. After incubation, spin the tubes, collect the supernatants, and transfer each into a new prewashed tube (tube A). Change pipette tips between each tube.
4. Add 150 µL of extraction solution to the tubes containing the gel pieces and wait 5 min. Transfer the liquid to the corresponding tube A. Repeat these extractions two more times.
5. Transfer all the liquid from the set of tube A's to a set of Nanosep centrifugal filter units. Centrifuge at 12,000 rpm for 15-30 min.
6. The filtrates, containing tryptic peptides, are then dried on a speed vacuum, usually overnight. The samples can be stored at −20 °C until MS analyses.
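As a check on the numbers in step 1, the sketch below reproduces the 50:1 (w/w) trypsin calculation and the master-mix volumes for 20 gel-fraction samples at 125 µL each; the helper names are ours, and only the figures quoted in the note above are used.

def trypsin_mass_ug(protein_ug: float, ratio: float = 50.0) -> float:
    # Trypsin needed for a 50:1 (w/w) protein-to-trypsin ratio.
    return protein_ug / ratio

def master_mix_ul(n_samples: int, per_sample_ul: float, trypsin_solution_ul: float):
    # Total mix volume and the Ambic needed to dilute the trypsin solution.
    total = n_samples * per_sample_ul
    return total, total - trypsin_solution_ul

print(trypsin_mass_ug(100.0))           # 2.0 ug trypsin per 100 ug protein
print(master_mix_ul(20, 125.0, 100.0))  # (2500.0, 2400.0) -> 2.5 mL total, 2.4 mL Ambic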
1. Suspend the dried peptides in 19 µL of buffer A and 1 µL of buffer B and then transfer the diluted peptides into glass crimp-top vials pre-cleaned with 50% and 100% methanol.
2. Load the sample vials into the autosampler of an Ultimate 3000 LC System (Thermo Scientific Dionex).
3. Mass spectrometry parameters: Peptides are separated on a 15 cm C18 analytical PepMap column (Thermo Fisher Scientific) and eluted into an Orbitrap Fusion Tribrid mass spectrometer (Thermo Fisher Scientific) utilizing a nanoelectrospray ionization source via a 90 min gradient of increasing buffer B at a flow rate of approximately 200 nL/min. The gradient goes from 1% to 99% buffer B between 3 and 60 min and holds at 99% for 5 min; it then ramps back down to 1% over 5 min and holds at 1% for the last 20 min for equilibration (this profile is encoded in the sketch after this list). Full MS scans are acquired at 60K resolution and MS2 scans following collision-induced dissociation are collected in the ion trap for the most intense ions in top-speed mode within a three-second cycle using Fusion instrumentation software (version 4.1, Thermo Fisher Scientific). Dynamic exclusion is utilized to exclude precursor ions from the selection process for 60 s following a second selection within a 10 s window. We perform "blank runs" (only buffer B) between sample injections to ensure no carryover from sample to sample.
4. Results of the mass spectral analysis are in Raw format and are ready for the bioinformatics analysis that the user chooses. Below are summarized the bioinformatics and search options that we performed. Note: As an example, the raw data corresponding to the carcass samples can be found in the public jPOST repository [32] under the Announced ID JPST000608.
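The 90-min gradient in step 3 can be written compactly as a list of (time, %B) breakpoints; the sketch below encodes the profile described in the text (the linear interpolation between breakpoints and the helper name are our assumptions).

import numpy as np

# (time in minutes, % buffer B) breakpoints from the gradient described above.
GRADIENT = [(0, 1), (3, 1), (60, 99), (65, 99), (70, 1), (90, 1)]

def percent_b(t_min: float) -> float:
    # Buffer B fraction at time t, linearly interpolated between breakpoints.
    times, percents = zip(*GRADIENT)
    return float(np.interp(t_min, times, percents))

print(percent_b(30.0))  # mid-ramp, ~47.4% B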
Database Searching and Protein Identification (6 h)
1. Raw files obtained from the mass spectral analysis following each preparation/separation protocol were converted to mzXML files and then to pkl (peak list format) files using the Trans-Proteomic Pipeline software (Seattle Proteome Center, Seattle, WA, USA). Each pkl file was searched for protein identification against a concatenated database (normal and reverse database) containing proteins from the following species: Oryzias latipes and Danio rerio, from the Broad Institute and the National Center for Biotechnology Information (NCBI), using Mascot (Matrix Science, London, UK). The reverse database is created by reversing all protein sequences from the target database using an in-house utility. Note: The concatenated fasta file can be found in the Supplementary Material File S2.
2. Mascot settings were as follows: tryptic enzymatic cleavages allowing for up to 2 missed cleavages, peptide tolerance of 20 parts-per-million, fragment ion tolerance of 0.5 Da, fixed modification due to carboxyamidomethylation of cysteine (+57 Da), and variable modifications of oxidation of methionine (+16 Da) and deamidation of asparagine or glutamine (+0.98 Da). Note: The pkl and Mascot files corresponding to all the tissue groups can be found in the public jPOST repository under the Announced ID JPST000608.
3. Proteins were organized and filtered using a 1% protein false discovery rate, a minimum of 2 peptides, and a protein score of 40 in ProteoIQ software (Provalt_3.1.12_03-21-18, NuSep, Bogart, GA, USA) to obtain a nonredundant list of homologous protein groups [33], by loading Mascot target and decoy search files into the software program (the target-decoy idea behind this filtering is sketched below). (See Table 1 in the results section for an example of the list of some identified proteins in carcasses.) Score: refers to either the Mascot ion score, SEQUEST Xcorr, or tandem hyper score. Total protein score: sums the peptide scores for all peptides matching to a protein.
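Step 3's filtering can be illustrated with a minimal target-decoy calculation: proteins passing the minimum-peptide and score thresholds are counted, and the decoy hits among them estimate the false discovery rate. This is a schematic of the general target-decoy idea, not ProteoIQ's internal algorithm; the data structure, the toy numbers, and the simple decoy/target FDR estimator are assumptions for the example.

from dataclasses import dataclass

@dataclass
class ProteinHit:
    name: str
    n_peptides: int
    score: float
    is_decoy: bool  # True if matched to the reversed (decoy) database

def filter_hits(hits, min_peptides=2, min_score=40.0):
    # Apply the minimum-peptide and score thresholds, then estimate protein FDR.
    kept = [h for h in hits if h.n_peptides >= min_peptides and h.score >= min_score]
    targets = [h for h in kept if not h.is_decoy]
    decoys = sum(h.is_decoy for h in kept)
    fdr = decoys / max(len(targets), 1)  # simple decoy/target estimate
    return targets, fdr

hits = [ProteinHit("A", 3, 75.0, False), ProteinHit("B", 2, 41.0, False),
        ProteinHit("C", 1, 90.0, False), ProteinHit("rev_D", 2, 44.0, True)]
targets, fdr = filter_hits(hits)
print(len(targets), round(fdr, 2))  # 2 targets kept; FDR estimate 0.5 on these toy numbers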
Protein Functional Annotation
Use the fasta sequence of each identified protein to obtain relevant biological information using the following websites:
1. Gene Ontology terms are extracted from the InterPro and ProtFun websites (https://www.ebi.ac.uk/interpro/ and http://www.cbs.dtu.dk/services/ProtFun/).
2. Signal peptides in the deduced amino acid sequences are examined using the SignalP website (http://www.cbs.dtu.dk/services/SignalP/) and the SecretomeP 2.0 website (http://www.cbs.dtu.dk/services/SecretomeP/).
3. The family classification and functional category were obtained by using the Pfam database (https://pfam.xfam.org/). (See Table 2 in the results section for an example of biological information for some proteins identified in carcasses.)
Results and Discussion
Three tissue sets were harvested (carcasses, organs, and intestines) from control and treatment fish. Proteomic search parameters were set to require a minimum of two peptides for each protein identification, in order to minimize false positives [34]. A total of 993 proteins in the control samples and 1004 in the treated samples were identified in the present study. Figure 2 presents the distribution of the number of proteins detected, showing the common proteins in the different tissues tested, as well as those which were unique to the irradiated or control samples. In total there were 409, 545, and 98 proteins in intestines, organs, and carcasses, respectively, that fulfilled the search parameters. From these, there were 106, 91, and nine proteins in intestines, organs, and carcasses, respectively, which were identified as unique to the treatment group and might represent a response to radiation. Across all proteins, 33 were uncharacterized, which implies that they have been experimentally documented but are not characterized in biochemical terms [35]. Future investigation of these proteins (unique and uncharacterized) may open a door to a better understanding of the effects of IR and possibly of the bystander effects that occur after exposure; this is an area of study which is largely unexploited. An example of the results from ProteoIQ is presented in Table 1, and the ProteoIQ information for all the proteins identified is available in Supplementary Tables S1-S3 for carcasses, intestines, and organs, respectively. Our results suggest that the protocol presented in this study was able to identify changes at the protein level, and the data obtained represent a valuable starting point for further research. From here, the data analyses will depend on the purpose of the study. For example, the spectral counts obtained after filtering the data with ProteoIQ can be used to evaluate the levels of protein expression and compare the control versus the treated samples. Relative spectral counts can be used to identify upregulation or repression in comparison to control using, for example, the relative spectral abundance factor (RSAF) [36,37]; a minimal fold-change sketch follows this paragraph. In the current dataset, we observed that organs and intestines are more likely to be affected by IR exposure than carcasses, since only 39 proteins present in carcasses show an increase or decrease in relative spectral counts of two-fold or greater, as compared to 200 in intestines and 264 in organs. The functional annotation described in Section 3.8 of the procedures is usually applied to upregulated, downregulated, or unique proteins to provide insight into those processes that may be impacted by the stressor (biological information on the proteins with a two-fold change or greater is presented in Supplementary Tables S4-S6). An example of the results from the bioinformatics search of some upregulated proteins is presented in Table 2.
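The two-fold-change screen described above can be sketched as a simple spectral-count comparison. The pseudocount used to avoid division by zero and the function name are our assumptions, and the exact RSAF normalization in [36,37] involves additional factors (e.g., protein length) not reproduced here.

import math

def fold_changes(control: dict, treated: dict, pseudocount: float = 0.5):
    # log2 fold change of treated vs. control spectral counts per protein.
    proteins = set(control) | set(treated)
    return {p: math.log2((treated.get(p, 0) + pseudocount) /
                         (control.get(p, 0) + pseudocount)) for p in proteins}

control = {"hsp70": 12, "ef_hand_7": 8, "ldh_a": 10}
treated = {"hsp70": 13, "ef_hand_7": 2, "ldh_a": 24}
for p, fc in fold_changes(control, treated).items():
    if abs(fc) >= 1.0:  # >= two-fold up- or down-regulated
        print(p, round(fc, 2))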
On the basis of the functional analysis (Supplementary Tables S4-S6), a few classes of proteins merit extra discussion due to their expression (up-/down-regulation) and/or high frequency of appearance in our results. Figure 3 shows two of the four differentially expressed families discussed below. Sixteen proteins related to the EF-hand family exhibit a tissue- and family-dependent response to radiation. The EF-hand seven group showed repression in carcasses (two proteins only detected in control) but overexpression in organs (two proteins), while the EF-hand six group was repressed in intestines. Some proteins belonging to the EF-hand family can contribute to multiple processes like growth, cell motility, transcription, transduction, cell survival, and apoptosis [38], and are related to Alzheimer's disease, Down syndrome, and ALS [39]. Proteins belonging to the ribosomal family were detected in organs (27) and intestines (26) with varying expression; in intestines these proteins are 50% repressed and 50% overexpressed, while in organs most of the ribosomal family proteins (21) are overexpressed. Ribosomal proteins can respond in different ways to IR exposure. Changes in the expression levels of proteins from this family as a result of exposure to IR have been reported [40,41], sometimes resulting in IR-sensitivity [42]. In addition, we detected proteins belonging to families that participate in dehydrogenase activity, such as the Ldh, Aldedh, and ADH families. Proteins belonging to families with dehydrogenase function were repressed in treated organs but overexpressed in treated intestines. Previous studies have demonstrated that exposure to low and moderate levels of IR (0.02-1.0 Gy) reduces production of pyruvate dehydrogenase [14]. Reductions in enzymes like glucose 6-phosphate dehydrogenase increase the sensitivity to oxidative stress [43], which could increase sensitivity to IR. Our results suggest the same tendency toward repression of these families of proteins in treated organs. An increase in reactive oxygen species (ROS) in cells is a well-known consequence of exposure to IR [44]. Lastly, 11 different proteins belonging to the Zona pellucida (ZP) family were overexpressed in our treated Medaka, mainly in the organs. The ZP domain is found in a variety of receptor-like eukaryotic glycoproteins that play fundamental roles in development, hearing, immunity, and cancer [45]. Our results demonstrate that our protocol identifies environmentally relevant IR-induced changes in the Medaka proteome. We detected members of several families of proteins that have been previously shown to respond to IR exposure, especially at high levels, providing us with additional confidence in our protocol's effectiveness and indicating that environmentally relevant exposures may mimic high-level exposure in some regards. The detection of differentially expressed proteins after exposure to environmentally relevant levels of IR provides us with a clearer understanding of organismal responses and adaptations to radiation. Our findings indicate certain protein families that may be critical to our understanding of the biological response of Medaka to environmentally relevant doses of IR, and they are likely candidates for future research in radiation biomarkers.
Finally, the protocol presented here will enable studies of whole-body response to IR and uncover trending expression changes during the course of chronic exposure to IR, ultimately leading to a more comprehensive understanding of the molecular and systemic impacts of IR. Supplementary Materials: Table S1: Proteins identified in carcasses in control and treated samples. Table S2: Proteins identified in intestines in control and treated samples. Table S3: Proteins identified in organs in control and treated samples. Table S4: Proteins with a ≥2-fold change identified in carcasses. Table S5: Proteins with a ≥2-fold change identified in intestines. Table S6: Proteins with a ≥2-fold change identified in organs. File S1: Animal Care and Use at the University of Georgia, AUP #A201305-018-Y1-A0. File S2: Concatenated fasta file. | 2019-08-02T13:25:32.346Z | 2019-07-30T00:00:00.000 | {
"year": 2019,
"sha1": "1c4ea9febca7428d20c3e2abccae9bfcf8e9bfbe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2409-9279/2/3/66/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c11774a950b1ab30ffb239f36805fab7e2110f7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
167727897 | pes2o/s2orc | v3-fos-license | Lots quality coverage survey technique for assessment of immunization performance and quality in an urban slum of Mumbai
In the past decade and a half, all the districts in the country have been covered under the Universal Immunization Programme. However, providing immunization, by itself, does not guarantee a reduction in disease morbidity and mortality. The full course of vaccines must be given at the right age. WHO estimated that 1.5 million deaths among children under 5 years in 2008 were due to diseases that could have been prevented by routine vaccination. 1 Despite routine immunization services, vaccine-preventable diseases remain an important cause of childhood mortality. Uptake of immunization services is dependent not only on the provision of these services but also on other factors including the knowledge and attitudes of mothers, the health status of the child, the density of health workers, accessibility to vaccination clinics, and the availability of vaccines, safe needles, and syringes. 2-4
INTRODUCTION
In the past decade and a half, all the districts in the country have been covered under the Universal Immunization Programme. However, providing immunization, by itself, does not guarantee a reduction in disease morbidity and mortality. The full course of vaccines must be given at the right age. WHO estimated that 1.5 million deaths among children under 5 years in 2008 were due to diseases that could have been prevented by routine vaccination. 1 Despite routine immunization services, vaccine-preventable diseases remain an important cause of childhood mortality. Uptake of immunization services is dependent not only on the provision of these services but also on other factors including the knowledge and attitudes of mothers, the health status of the child, the density of health workers, accessibility to vaccination clinics, and the availability of vaccines, safe needles, and syringes. [2][3][4] Measuring immunization coverage provides evidence of whether substantial progress towards achieving immunization targets is being made. Such positive evidence is required for continuing support from donor-supported initiatives like the Global Alliance for Vaccines and Immunizations (GAVI). 6 This paper reports on a survey assessing immunization coverage for infants and factors impacting coverage in an urban slum of Mumbai. The Expanded Program of Immunization was launched in India in January 1978, and the Indian version, the Universal Immunization Programme (UIP), was launched in 1985, aimed at achieving universal immunization coverage of the eligible population. 7 For infants, the vaccines provided under the UIP are Bacille Calmette-Guerin (BCG), diphtheria, pertussis, and tetanus (DPT), oral polio (OPV), hepatitis B (HBV), and measles. 8 In India, only 44 percent of children aged 12-23 months were fully vaccinated, and 5 percent had not received any vaccinations, in 2005-06. 9 Primary immunization coverage in a Mumbai suburb was 72%. 10 The difference between the percentages of children receiving the first and third doses is 21% for DPT and 15% for polio, and 59% of children aged 12-23 months have been vaccinated against measles. The relatively low percentages of children vaccinated with the third dose of DPT and with measles are mainly responsible for the low proportion of children fully vaccinated. 9 Despite all the efforts of governmental as well as non-governmental institutions toward 100% immunization coverage, there are still pockets of low coverage. Urban slums constitute one of the high-risk areas for vaccine-preventable diseases. 11,12 Especially in urban areas there is increased reporting of vaccine-preventable diseases, possibly due to migration leading to congestion and extra pressure on the already overburdened health infrastructure of the cities. In order to find the unprotected pockets among the urban slum population, the present study was undertaken to assess the immunization coverage of children aged 12-23 months in an urban slum, and efforts were also made to ascertain the reasons for delayed and non-immunization. Since the lot quality sampling method requires only a small sample size and is easier for staff to use, it is feasible for routine monitoring of vaccination coverage. 13 The purpose of utilizing the lot quality technique is to identify quickly and scientifically the areas with poor performance and provide information for developing strategies to improve service quality.
METHODS
A cross-sectional, community-based, descriptive epidemiological study was carried out in the field practice area (Shivajinagar urban health centre, Govandi, Mumbai) of the Topiwala National Medical College, Mumbai, during the period of January 2013 to December 2013. The inclusion criteria for study subjects were all children between 12 months and 23 months of age with availability of either an immunization card or a responsible person for key information regarding immunization, and who were permanent residents (residing for more than 6 months) of the study area. Mothers and children not available at the time of the actual visit to the respective home, and children who did not satisfy the above conditions, were excluded from the survey. The area was divided into 21 lots based on the geographical service areas of the 21 community health volunteers (CHVs) functioning in the health post. The study population comprised all children aged 12-23 months. This age group was chosen for analysis because both international and Government of India guidelines specify that children should be fully immunized by the time they complete their first year of life. Children who received BCG, measles, and three doses each of DPT and polio (excluding polio 0) are considered to be fully immunized. A partially immunized child is one who has missed any one or more of the above doses, irrespective of having received polio vaccination on Pulse Polio days, and a child who has not received even a single dose of any of the vaccines under the UIP schedule, other than polio vaccination on Pulse Polio days, is considered unimmunized. All the vaccines must be administered by the time the child is one year of age. The sample size for the study was calculated to be 336, based on a 5% level of accuracy and a 95% level of significance. 14 The estimated sample size for each lot was 16. A decision value (the highest number of individuals in a lot not receiving a quality service for which the lot is still acceptable) of 2 was selected based on a lot sample size of 16 and low and high thresholds set at 65% and 95%, respectively; the acceptance risks implied by this rule are illustrated in the sketch at the end of this section. Trained investigators collected the information from 16 children in each lot. Only one child was selected from each household. Households were selected by the simple random sampling method using random number tables. Information regarding birth date, immunization card, dates of vaccines received, presence of a BCG scar, and reasons for incomplete or no vaccination was collected through a pretested questionnaire and interview schedule. Dates of vaccines received were verified from office records in cases where the vaccination card was not available. The response rate was 100%. Children meeting the 'quality' vaccination criteria were those who had received all vaccinations recommended in the national immunization schedule at the appropriate age and interval, with an immunization card present and a BCG scar in those who received the BCG vaccine. The information collected was analyzed to check the number of children fulfilling the quality criteria of vaccination, lot by lot. Lot performance was judged unacceptable if more than two children in the lot did not meet the quality criteria. To get an overall single estimate of the individual qualities of vaccination, data were aggregated from all 21 lots. Reasons for below-quality immunization were analyzed in aggregate. The ethics committee of the institute approved the study. The socio-economic status of the study population was determined as per the Modified Prasad's classification, April 2013. 15 Results were analyzed by using the Statistical Package for the Social Sciences (SPSS) version 16.0. Statistical significance was set at P ≤ 0.05.
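To make the decision rule concrete, the sketch below computes the operating characteristics implied by a lot sample of 16, a decision value of 2, and thresholds of 65% and 95%: the chance of wrongly rejecting a lot whose true coverage is at the upper threshold and of wrongly accepting one at the lower threshold. This is an illustrative binomial calculation, not the survey's software; the function name is ours.

from scipy.stats import binom

N, D = 16, 2             # lot sample size and decision value
P_HI, P_LO = 0.95, 0.65  # upper and lower coverage thresholds

def p_accept(coverage: float, n: int = N, d: int = D) -> float:
    # A lot is accepted when at most d of the n children fail the quality criteria.
    return binom.cdf(d, n, 1.0 - coverage)

alpha = 1.0 - p_accept(P_HI)  # risk of rejecting a lot with 95% true coverage (~4.3%)
beta = p_accept(P_LO)         # risk of accepting a lot with 65% true coverage (~4.5%)
print(round(alpha, 3), round(beta, 3))

Both misclassification risks come out below 5%, which is consistent with the conventional choice of thresholds and decision value in LQAS designs.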
RESULTS
Three hundred and thirty-six children were surveyed in this study. Immunization coverage: 75% of children were fully immunized, 22.3% were partially immunized, and 2.7% were unimmunized. An immunization card was available with 84.9% of caregivers/mothers. About 87% of children had a BCG scar.
As evident from Table 1, the number of children in the lot sample not satisfying the quality criteria (i.e., children who were partially immunized or unimmunized) was 2 in lots no. 3 and 11. As the number of such children in lots 3 and 11 was less than or equal to the decision value of 2, the performance of these lots was acceptable and the lots were protected according to the Lots Quality Survey Technique methodology. All remaining lots were unprotected and their performance was not acceptable, since more than 2 children in the lot sample did not satisfy the quality criteria. The percentage of fully immunized children in the different lots ranged from 87.5% to 62.5%. The maximum number of unimmunized children was present in lots no. 4 and 6 (Table 1).
As observed from Table 2, the overall coverage of the different vaccines ranged from 97.87% for OPV1 to 88.7% for measles. The dropout rate from BCG to measles in the study group was found to be 7.40%. There was a difference in the coverage levels of the vaccines which are given as a set (DPT, HBV, OPV) as first, second, and third doses at the 6th, 10th, and 14th weeks of age, due to non-availability of one or another vaccine (Table 2). The reasons reported for incomplete or no vaccination included the child not being brought to hospital (17.53%), followed by the child being away at the native place (15.98%), being unaware of the need for immunization (9.79%), the mother being too busy (7.22%), immunization being postponed until another time (6.19%), and fear of side effects (4.64%), etc. (Table 3).
DISCUSSION
In this study, immunization coverage was as follows: 75% of children were fully immunized, 22.3% were partially immunized, and 2.7% were unimmunized, which is less than the desired goal of achieving 85% coverage. 9 The present study shows higher immunization coverage (80.95%) as compared to the NFHS-III (2005-06) data (43.5%). 9 This was due to the efforts of the health services in the urban slum. Yadav et al revealed that the percentage of fully immunized children was 73.3%, of partially immunized children 23.8%, and of unimmunized children 2.8%. 16 Somewhat similar findings were seen in the study by Tapare et al at Miraj. 17 Another study, by Punith et al, also found that the overall vaccination coverage of completely immunized children was 92.10%, while the percentage of partially immunized children was 6.58% and unimmunized children accounted for 1.31%. 18 [19][20][21] Although overall coverage is good, the quality of services is not acceptable in some subgroups of the population in the present study. As this study points out, the performance of immunization was not acceptable in 19 lots out of 21.
So corrective actions and interventions should be carried out in the particular lots to improve the reach, acceptability, and quality of immunization services. In the present study, 92.8% of children received immunization at an inappropriate interval; only 7.2% of children were immunized at the right time and right interval. A similar finding was reported in the study by Kulkarni et al (2013). 22 The vaccination coverage of measles is higher than that of Vitamin A due to a shortage in the supply of Vitamin A at the health centre and anganwadis. Poor knowledge of the immunization schedule, unawareness of the minimum interval between two subsequent doses of vaccines, and improper history-taking of the immunization status of the child are the reasons behind immunization being given at less than the specified interval.
These variations in the reasons for non-immunization in different areas and different studies are probably due to variations in literacy, socio-demographic differences across geographical locations, availability of health facilities, efficiency of immunization services, and lack of supervision and health monitoring systems across the country.
Since immunization is a multi-sectoral activity, it definitely needs active intersectoral cooperation. Parents are to be educated about the importance of timely immunization, of maintaining immunization records, and of their role in the health of the child. Vigilant and frequent supervision and monitoring of immunization services is required. Timely reporting of new migrants by anganwadi workers will help to improve coverage at the local level and reduce cases of non-immunization. Regular health education sessions, motivation through an encouraging and persuasive interpersonal approach, regular reminders, removal of misconceptions prevailing among people, and improving the quality of the services at the health facility will solve the problems of delayed, partial, and non-immunization. Pulse Polio days should be utilized as a good opportunity for the advocacy of routine immunization to caregivers.
CONCLUSION
Though the overall coverage of immunization was good in the urban slum, it still has pockets of partial or non-immunization. In areas with high immunization coverage, the Lots Quality technique should be used to detect pockets of poor coverage and quality and to take appropriate action. | 2019-05-29T13:15:00.258Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "63e42a2dea0a5ce0458f828b6eeddb49672d2828",
"oa_license": null,
"oa_url": "https://ijcmph.com/index.php/ijcmph/article/download/668/568",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6aa584cfa9baa67f8785c1c303438343b8086893",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Geography"
]
} |
219120943 | pes2o/s2orc | v3-fos-license | ‘The Jews Are Just Like Any Other Human Being.’ An Attempt to Measure the Impact of Informal Education on Teenagers’ Views of Intergroup Tolerance
Our paper presents the results of a study which was conducted between 2016 and 2019 in a high school in Budapest. The research attempted to measure the impact of the Haver Foundation’s activities on high-school students. The Foundation implements activities about Jewish identity, thus we intended to see whether the different activities of the Foundation changed the attitudes of high-school students, and whether they affected the formers’ level of knowledge and the associations they make with Jews. In line with the sensitivity and complexity of the research topic, and in order to create the broadest analytical framework, we followed several classes in a longitudinal setting by triangulating our methods. Results confirm the importance of these activities, especially with regard to the increase in the level of knowledge about Jews and Judaism. They also indicate that there is a need for such informal settings in high-school education. However, more extensive research needs to be carried out to obtain more accurate results about the reduction of prejudices.
Introduction
Prejudice reduction has become a widely researched topic in social psychology in past decades (see an extensive review of hundreds of published and unpublished studies by Paluck and Green, 2009). Our goal was twofold: First, we wanted to assess how the Haver Foundation's activities affect students' mindsets. This could have been manifested in a tolerant (open-minded) attitude (towards Jews and other minorities), some development of a culture of debate, and actual knowledge about Jews and Judaism. Second, the research was meant to be a pilot study for examining whether the former methods were applicable in such settings: Could we measure the impact of such activities on the teenage target group with our specific research design, or was the issue more complex and nuanced, thus needing to be tackled somewhat differently? To the best of our knowledge, only sporadic experimental research has been undertaken in Hungary in this field; most importantly, by Váradi (2013) and Kende and her colleagues (2016) who both focused their attention on the impact of peer influence on anti-Roma prejudice; and also by Orosz and his colleagues (2016) who recently pointed out the effectiveness of the 'living-library' intervention for reducing prejudice towards Roma and LGBTQ people.
This issue seems to be relevant in Hungary, taking into account the alarming findings of recent research into intergroup prejudice. Hungarians can be characterized as typically having strong prejudices towards different ethnic and religious minorities. International comparative research found widespread prejudice against various minority groups, such as immigrants, Jews, and LGBTQ people, indicating that levels of negative attitudes towards the out-group in Hungary are among the highest of all countries in Europe (Messing and Ságvári, 2018;Örkény and Váradi, 2010;Zick et al., 2011).
When it comes to anti-Semitic incidents, according to Kovács and Barna (2018) the number has decreased over the past two decades (since 1999), and the number of actual incidents is much lower than perceived by the Jewish respondents of a survey implemented in 2017. It is important to note, however, that even though the Jewish community constitutes the most significant religious community, making up approximately one per cent of the total Hungarian population (Kovács, 2011;Kovács and Barna, 2018), it is mostly concentrated in Budapest. It is worth mentioning that most Jews are not members of a synagogue (Kovács and Barna, 2018); however, the trauma of the Holocaust is one of the main cores of their Jewish identity.
As far as the political and social context is concerned, Hungary is a rather homogenous country in ethnic and national terms, but xenophobia has been strong since the change of the socialist regime. A relatively low level of anti-Semitism was initially paired with strong anti-Roma sentiment, which even led to incidents of homicide against Roma (Vágvölgyi, 2014). Since 2015, however, anti-immigrant and anti-refugee campaigns 1 have been instigated which have affected Jews, as George Soros (an American Jew of Hungarian origin) was made one of the scapegoats for the so-called 'refugee crisis.' The present research took part after this highly intense period of enmification (involving a series of billboard campaigns).
The role of informal methods in the Hungarian education system, with a special focus on the Haver Foundation
In most Hungarian schools, traditional frontal teaching is the usual method, and the application of project-based or informal elements in the official curriculum is only sporadic (Lannert and Nagy, 2006). Furthermore, topics related to the subject of the present research, such as inter-group tolerance and the protection of and respect for minority groups, are scarcely included in the official curriculum (Csákó, 2009). Regarding content, the National Curriculum includes latent anti-Semitic features (such as the inclusion of Albert Wass, a poet with anti-Semitic views; Szily, 2019), and does not give space to education about minorities (for example, TEV [2015] research focused on the lack of a Jewish presence in both Hungarian literature and history), and such tendencies are strengthening (Szombat, 2020). 2 Therefore, most students have no or very little knowledge about Jews, apart from concerning one historical event (the Holocaust).
To compensate for the shortcomings of the Hungarian education system, certain non-governmental organizations provide informal education classes to high schools. These NGOs typically aim to fight hatred against immigrants and refugees, the Roma, LGBTQ people, and Jews. (A list of organizations with their sensitizing activities can be found in Appendix 1.) The Haver Foundation is one of these organizations: their young, voluntary team (who self-identify as Jews) regularly go to high schools (upon invitation) to hold ninety-minute sessions about Jewish identity, using various means. This inter-religious/intercultural form of interaction is aimed at familiarizing non-Jewish students with all things Jewish.
In an increasingly centralized and formal educational system, the role of informal (non-formal) activities is becoming more and more important in schools and classes, in which teachers are also interested or about which they are concerned (based on oral correspondence; it also happens that teachers or directors do not permit such activities, or only some of them are welcome 3 ). Most such activities focus on one of the minorities, or a specific deprived group. The main goal is increasing students' social responsibility and tolerance towards their classmates. Most of these activities are led by volunteers: (i) who are 'experienced experts' (e.g. members of a minority group); (ii) who are close to the students in age; and (iii) whose activities are based on informal educational methods as opposed to frontal ones. Regarding age, peer-group relations can promote specific effects or meanings in these settings, in contrast to the teaching of much older individuals. This claim is supported by the research of Rogers (1967).
Concerning methods, Coombs and his colleagues (1973) have defined nonformal education as that which takes place within educational institutions, but with different or unconventional methods, while informal education includes every type of organized educational activity which takes place outside of these educational institutions. According to Coombs, students learn the most in nonformal and informal settings. Csoma's (1985) differentiation slightly deviates from this definition (however, his research involved adult learning). According to Csoma, formal education is equivalent to school; non-formal education to courses; and informal means 'unbound' learning.
The Haver Foundation's mission is to foster dialogue and spread tolerance through informal education. The activities they provide for classes cover topics that include identity, heritage, the Holocaust, the Jewish quarter (an area of Budapest), and community challenges, and are implemented using informal educational methodologies and tools. In their view, dialogue between Jews (trained volunteers between the ages of 20 and 35) and non-Jews leads to tolerance and common understanding, a claim which is also in line with Rogers' experiential learning methods. Furthermore, the age of the high-school students is also a crucial point with regard to attitude formation. It is almost common knowledge that ethnic, racial, and other stereotypes and prejudices develop during (early) adolescence (Piaget, 1970) -the period when individuals develop their own identity (see Erikson, 1950).
However, the Foundation faces numerous difficulties when its volunteers arrive at high schools because students have mostly been exposed to a frontal, knowledge-based educational system in which critical thinking, creativity, and other skills are not taught. Therefore, they need to adapt to a new style of teaching. Furthermore, because of the high concentration of Jews in Budapest, as opposed to the scarce Jewish communities in the countryside, most students do not encounter Jews in person outside of these classes.
The theoretical and conceptual framework
The theoretical framework of our research is primarily based on Intergroup Contact Theory, originally developed by Gordon W. Allport (1954). The basic idea of Allport's theory -also known as the Contact Hypothesis -is that under appropriate conditions interpersonal contact is one of the most effective ways of reducing prejudice between majority and minority group members. While Intergroup Contact Theory originally held that Allport's optimal conditions 4 are essential for reducing intergroup prejudice effectively, a comprehensive review of more than 500 empirical studies that examined the Contact Hypothesis (Pettigrew and Tropp, 2006) helped refine the original theory, drawing attention to other important elements of the working mechanisms of intergroup contact. In their meta-analysis, the latter authors highlighted that intergroup contact reduces negative stereotyping, and scholars have drawn attention to the effect that the quantity of intergroup contact has on reducing prejudice, as frequency of contact helps with the decategorization of out-group members and diminishes stereotypical ways of thinking (see also Velasco-Gonzalez et al., 2008). Pettigrew and Tropp concluded that 94 percent of the more than 500 studies they reviewed -including surveys and different types of experiments -found that intergroup contact significantly reduces prejudice. Paluck (2011) carried out field experimental research among high-school students in the United States aimed at testing peer influence on intergroup prejudice; more specifically, at assessing students' perceived social distance from stigmatized social groups and overheard cases of harassment against gay and overweight students. Paluck's study is somewhat similar to ours in terms of its target group (teenagers) and method (field experiment); nevertheless, the scale of the former was much larger (ten high schools were included in the US field experiment vs. only one high school in the present Hungarian study) and the experimental treatment lasted much longer (the five-month presence of selected Peer Trainers in the experimental schools vs. only three ninety-minute occasions in our study). Paluck pointed out that by the end of the experimental period a significant and widespread pattern of effects had occurred that could be attributed to the intervention. The researcher concluded that peer influence on intergroup prejudice can spread throughout social networks; moreover, the effects of peer influence across time and in a context outside of school were also measured.
Furthermore, a meta-analysis (N=985 published and unpublished reports) was carried out by Paluck and Green (2009), who evaluated observational, laboratory, and field experimental studies aimed at reducing prejudice. The authors concluded that most of the studies that focused on prejudice-reduction interventions (e.g. workplace diversity training and media campaigns) were unable to identify causal effects. Although some intergroup contact and cooperation interventions were evaluated as promising, the authors suggested a much more rigorous and broad-ranging empirical assessment of this work. Beyond the methodological concerns that emerged from the meta-analysis, a summary of prejudice-reduction approaches, theories, and future directions for research was also compiled, which may serve as a useful guideline for further research in this field. From a methodological point of view, the most important point the authors made is that non-experimental research cannot establish whether interventions reduce prejudice; instead, prejudice reduction should be measured in real-world settings by applying experimental designs. Field experiments are evaluated as the most useful and promising, but also the most underutilized, approach.
Focusing more on the target group of our study (Hungarian high-school students), extensive research (Váradi, 2014) has aimed at increasing understanding of the formation of Hungarian teenagers' attitudes towards the Roma. As adolescence is a crucial period in identity development, Váradi's objective was to determine to what extent classical and more recent theories about the formation of prejudice can be applied in a context in which there is no public consensus about the need for respect towards minorities. Váradi concluded that students' attitudes towards the Roma do not considerably differ from those of their parents, as 'every third participant [was] fully prejudiced, rejecting all social contact with the Roma, agreeing with derogatory remarks against the Roma, and willing to take action against the members of this group' (Váradi, 2014: 206). Furthermore, it is important to mention that Intergroup Contact Theory was reinforced by Váradi's research, as students who had Roma friends or acquaintances were generally much less prejudiced towards the Roma. 5 As we have already stated, only sporadic experimental research has been done into the ethnic prejudices of Hungarian youth, and the studies we describe below both focus on anti-Roma attitudes. Váradi (2013) tested how majority students reacted to the programs of the UCCU Roma Informal Educational Foundation. 6 We treated this study as a pilot project in relation to our research, as it was similar to our study both in terms of its focus (analysis of the same methods of informal education among teenagers 7) and its methods (qualitative and quantitative pre- and post-tests with students, complemented with group interviews with teachers). Váradi (2013) attempted to measure changes in attitudes based on the answers of 228 students from ten classes in Hungary in 2013. We also used this questionnaire as a starting point, but adapted some questions for our research. The most important lesson from this research is that these kinds of short-term informal educational programs cannot significantly reduce prejudices towards the Roma for the vast majority of students. To be more specific, the proportion of students who reported 'feeling awkward when other people criticize the Roma' slightly increased after the program. Moreover, increasing empathy and conveying how Roma people live their lives in Hungary were more successfully accomplished by Roma volunteers. Summing up, Váradi (2013) concluded that the UCCU program successfully initiated the process of questioning prejudices towards outgroups.
Most recently, Kende, Tropp and Lantos (2016) tested the effects of intergroup friendship between Roma and non-Roma Hungarians on attitudes, relying on a quasi-experimental research design with a small sample (N=61) of university students. Comparing pre- and post-test measures for the experimental and control groups, the researchers observed significant positive changes in attitudes and contact intentions exclusively among participants subject to the contact condition in the experiment. Kende and her colleagues also concluded that positive changes were moderated by perceived institutional norms, a finding that might corroborate the potential of contact-based interventions. In contrast to Kende and her colleagues' intervention -which was implemented in a university setting -our experiment was carried out in a high-school environment in Budapest, led by an open-minded headmaster who is supported by mostly liberal and open-minded teachers. Based on the former two experimental pieces of research, we conclude as well as assume for our own study that contact-based intervention may work (i.e. affect intergroup relations positively), but also that the measured effect is not comprehensive.
5 For further research on the identity, intergroup attitudes, and prejudices of Hungarian adolescents, also see Váradi (2014: 63-70). Furthermore, Váradi is presently leading an ongoing research effort in Hungary entitled 'Class climate, attitude climate' that includes 60 school classes of approximately 1500 Hungarian teenagers who started secondary school in the academic year 2016/2017. The aim of the longitudinal study is basically to understand the interplay between group norms and prejudice. See more about the project at https://nationalism.ceu.edu/dynade
6 See the Foundation's homepage at: http://www.uccualapitvany.hu/english/
7 The UCCU foundation applied the same methods of informal education as Haver does. UCCU adapted the curriculum of the Haver program and adjusted it to have a Roma focus.
As the experimental groups in our study were exposed to interactions only three times -which cannot be considered 'frequent contact' -we did not expect major changes in the target groups' attitudes, but rather a 'first step' in the long process of questioning intergroup prejudice.
Research methods
We used a series of field experimental interventions to test whether different approaches to informal education can foster intergroup tolerance in the form of intergroup attitudes. Designing an appropriate measurement for impact assessment was indeed a challenging task. To make our approach as comprehensive as possible, we decided to use mixed methods (Denzin, 1978).
The advantage of triangulating methods is that quantitative methods can be combined with qualitative ones. In our case, with quantitative data we benefitted from a high level of outreach and comparability, while with the qualitative approach we could obtain answers to more in-depth questions. Furthermore, there were phases during which quantitative research would not have been possible due to the sensitivity of the research topic.
Experimental context
Our research was partly based on a quasi-experimental design that included control and experimental groups, as well as pre- and post-tests carried out before and after each intervention (i.e. in the Haver-affected classrooms and during related outdoor activities). Both the pre- and post-tests included quantitative measures (repeated question blocks for measuring shifts in attitudes) and qualitative ones (namely, focus groups with students and an extensive content analysis of study participants' associations with the word 'Jewish').
In more technical terms, we used a 2×2 (two conditions × two time points) mixed factorial design with one experimental condition (interventions with the volunteers from the Haver Foundation) and one control condition (no intervention apart from the 'dilemma café' 8), and measured changes in intergroup tolerance and levels of information about Jewishness over time through comparisons of pre-test and post-test scores and associations. This was complemented with the above-mentioned qualitative and observational research tools.
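To make the logic of this design concrete, the sketch below shows the kind of analysis a 2×2 mixed factorial design implies: within-group pre-post comparisons plus a between-group comparison of the pre-post changes (a difference-in-differences estimate of the condition × time interaction). This is a minimal illustration under stated assumptions; the file and column names are hypothetical, and the study's actual data are not reproduced here.

```python
# Minimal sketch of the analysis implied by a 2x2 mixed factorial design:
# condition (experimental vs. control) is between-subjects, time (pre vs.
# post) is within-subjects. Column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("tolerance_scores.csv")  # hypothetical file: one row per student
# expected columns: student_id, condition ("experimental"/"control"),
#                   pre_score, post_score

df["change"] = df["post_score"] - df["pre_score"]

# Within-group change: paired t-test of post vs. pre for each condition
for cond, grp in df.groupby("condition"):
    t, p = stats.ttest_rel(grp["post_score"], grp["pre_score"])
    print(f"{cond}: mean change={grp['change'].mean():.2f}, t={t:.2f}, p={p:.3f}")

# Comparing the pre-post changes between conditions approximates the
# condition x time interaction (a difference-in-differences estimate)
exp = df.loc[df["condition"] == "experimental", "change"]
ctl = df.loc[df["condition"] == "control", "change"]
t, p = stats.ttest_ind(exp, ctl, equal_var=False)
print(f"interaction (diff-in-diff): t={t:.2f}, p={p:.3f}")
```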
The selection process for the experimental and control subjects, however, was far from 'ideal,' 'textbook-type' randomization or matching. Originally, we wanted to select the members of the control group randomly, across the four classes, but for logistical reasons and for the convenience of the school 9 this was not possible. Instead, one entire class (namely, 'Class B') served as the 'control' group, which approach obviously caused some selection bias. Moreover, as the study school was committed to taking part in the experiment but only one class could serve as a control group, this resulted in an uneven distribution of experimental and control subjects in the design.
The experimental subjects took part in an 'Identity' activity, as well as a walk through the Jewish quarter in Budapest and a 'dilemma café,' whereas control subjects only took part in the dilemma café. The Identity activity involved a ninety-minute session during which students sat in class with Haver volunteers and discussed topics related to Jews and Judaism. The main goal was to create a safe space where students were not shy about asking questions of the two Jewish volunteers. Such sessions are typically split into several exercises (see Appendix 2), using different tools to reach as many students as possible. The ninety-minute interactive walk took place in the historical Jewish quarter of Budapest. The volunteers showed students places and buildings which are related to Hungarian Jewish history, and talked about the past and present of the community, touching upon traditions and religion. The dilemma café is a ninety-minute activity that involves students -in small groups -discussing four dilemmas. The topics are introduced in Section 5.4.2 and are elaborated on in Appendix 3.
Materials and methods
While the quantitative research was carried out both with the experimental and the control groups, the qualitative research was only partly implemented in an experimental setting. Questionnaires were developed jointly by our research group and representatives of the Haver Foundation. 10 Some of the questions were based on the materials from the Identity activity, while some were borrowed from earlier research; most importantly, from Váradi (2013), who carried out similar research involving an impact assessment of UCCU's informal training. We carried out interviews with all four form teachers but, due to a lack of further research capacity, we only organized focus groups among the students included in the study. Further details about the research design and schedule of the experimental research are shown in Table 1.
As Table 1 shows, we gathered data in four waves -before and after each intervention between 2016 and 2019. The almost one-and-a-half-year gap between the second and the third intervention was not optimal, but due to organizational and other management issues we could not complete the fieldwork earlier. On the other hand, in this way we could follow the experimental groups for more than three years. One of the pillars of the qualitative research was the two focus groups conducted in three classes, while the other involved the interviews with the form teachers. The participants of the focus groups were chosen by one of the teachers (not the form teacher) through random selection. It is important to mention that the students were the same in the pre-activity and post-activity discussions. Four or five students -selected randomly -participated from each class. We also took into account pre- and post-activity associations.
The dilemma café was the third intervention. This time -as opposed to the earlier waves -the activity constituted the research itself: the dilemma café was organized with the help of observers who paid attention to both the students' and the volunteers' activity. As in the previous waves, all classes took part in the research. Approximately five groups were created in each class. The number of participants in one group varied from three to eight. The groups participated in a dilemma café: they were presented with five dilemmas from which they had to choose four, based on their titles. They discussed each dilemma in depth for fifteen minutes with a moderator (a Haver volunteer). Every group had an observer who took notes according to the researchers' detailed instructions, which were later analyzed by the researchers. The goal was to assess the impact of the previous activities: first, whether participants had managed to acquire a certain culture of debate; second, to what extent they had received and processed the main message of Haver (tolerance towards minorities, and open-mindedness); and third, whether they manifested any sign of an increase in knowledge about Jews and Judaism. In other words, the analysis was undertaken in line with the Foundation's goals.
The quantitative research was based on a self-administered questionnaire consisting of a core question block, with repeated questions about attitudes and knowledge about Jewishness, complemented with an additional block of questions about various topics. The core blocks of the questionnaires, as well as the topics of the qualitative research, are compiled in Appendix 3 (the entire research materials are available upon request). Also, because of the lack of space, we present only the most important results from the quantitative survey in Section 4.
Methodological concerns
While we tried to eliminate many possible flaws with our complex design, we encountered some difficulties. Generally, the high school we selected can be characterized as a very liberal and open-minded community. This school does not represent the average Hungarian or even Budapest-based high school well, but its openness allowed us to carry out our extensive research. Another important issue is that it transpired during the implementation period that the form teacher of the control group (Class B) had already paid significant attention to discussing social issues such as tolerance with their class. In this sense, selecting this class as a control group was not the best choice.
Regarding the dilemma café, there were two further concerns. First, the volunteers moderated the groups in various ways -as they were instructed to be flexible -so intergroup comparison was rather challenging. Second, lacking recorded data (due to ethical concerns), we had to rely entirely on our observers, who interpreted their tasks differently. However, even with the above-mentioned problems, we managed to carry out a limited content analysis.
We are well aware that the study school we based our analysis on was chosen due to 'convenience' sampling, and that the selection process of experimental and control subjects was far from ideal for the above-mentioned reasons. While interpreting the results of our study, these facts should be kept in mind.
Results
We present our findings in a more or less chronological order, mixing qualitative with quantitative results. In order to place the study in its context, the first section shows what the form teachers and the students in the focus group discussions said about the school's values, focusing on the dynamics of intergroup relations. The second section presents results related to the first activity, using a combination of qualitative and quantitative methods. The third section shows the results of the quantitative survey regarding the students' perceptions of Jewish identity, which is followed by results connected to the second activity. The last section describes the third intervention.
Exploring the research context: Values of the school
Regarding the mentality and values of the examined high school, the form teachers had different opinions. While three of them said there are no values to which the school is committed, one believed that freedom is an important concern. Concerning the approach towards students, they agreed that the school is more humane: while keeping high standards, they try to accommodate students' needs. The philosophy of the form teachers -compared to that of teachers in an average state school -is more student-oriented. All of them seemed to be open towards minorities.
Students who took part in the focus groups seemed to like going to the school: 'all students here like the school… everyone participates in the programs,' said one of the students from Class C. None of the groups were specifically interested in public affairs: they read and listen to things which are important to their parents.
As far as attitudes towards minorities are concerned, opinions differed among the students, but were not class specific. Some students believed in a multicultural country, but 'if people are similar, there are no conflicts… so it might be better' (class A). In both discussions students tried to approach this issue by referring to a real topic which concerns Hungary; namely, the situation of immigrants and refugees. Their opinions clearly reflected the arguments which divide society and the pros and cons of the former issue which can be heard in the media.
In every class there was someone who belonged to a minority group, but the participants did not perceive them as 'others' because they were born in Hungary and were familiar with Hungarian culture, etc. According to the students, this cannot be compared to the situation of refugees, who come from a less familiar culture: 'the Arabs lie down at noon and do their praying or whatever' (Class C), while their (half-Polish) classmate would never do such a thing. This quote indicates rather limited and stereotypical thinking. The distance between the other culture and that of Hungary seems to be the main determinant of students' judgments. Some participants found the casual use of such labels understandable, and felt it should not be judged. According to these individuals, this is not discrimination; calling someone a Jew or gay is already embedded in their vocabulary. 'You cannot do anything about this. […] You don't stop, but accept this' (Class D). The other group -when they were asked -emphasized the positive impact of their Chinese classmate, who had given them a presentation about Chinese history when they were studying this topic. They did not study about Jews (or the Holocaust) because they had not reached this topic in their history class, but many have Jewish acquaintances or even relatives. One of them said that he likes him/her, 11 'but (s)he has his/her own typical Jewish characteristics as well' (Class D). This again refers to stereotypical thinking. They agreed that those who have acquaintances from a given group are less discriminatory, which supports Allport's (1954) contact hypothesis. Participants agreed that the students' attitudes are influenced mostly by their micro milieu (family and friends).
First intervention: the 'Identity' activity
Below, results concerning the first intervention are discussed. Associations were surveyed just before and right after the identity activity, while data were collected about students' opinions in a focus group afterwards. Quantitative comparative results are also discussed in relation to the perception of Jewish identity.
Associations
This section examines the short-term impact of the 'Identity' activity based on the associations. Every activity started off and ended with a short game during which the volunteers asked the students to write down their associations related to the words 'Jew, Jewish.' 12 This activity -in contrast to that of the focus groups -was appropriate for measuring short-term impacts because the students' experience was still fresh. On average, there were 33 participants during each activity and they wrote down approximately three or four words, 13 resulting in approx. 200 and 240 words before and after the activities, respectively.
11 In the Hungarian language there is no linguistic gender differentiation. 12 The word 'zsidó' in Hungarian means both. 13 Associations could be sentences as well -depending on the volunteers' instructions -but single words were most frequently used.
As is clear from Figures 1 and 2, the pre-activity associations focused more on Judaism, stereotypes, and the Holocaust (or Jewish history), while the post-activity associations reflected more of the Foundation's messages. In the second round there were fewer words related to religion -albeit this topic still dominated -and the words 'culture,' 'tradition' and 'people' occurred more often. (These concepts arose during activities when students defined the five pillars of Judaism: religion, culture and tradition, people, shared fate, and personal choice.) Apart from these words, 'identity,' 'community,' and 'personal choice' also appeared as elements of identity formation. Associations related to WWII disappeared and human values such as solidarity and equality appeared. The words 'humans' and 'like everyone' refer to the idea -heard also during the activity -that Jews are just like any other human being.
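For illustration, the sketch below shows how such pre/post association lists can be compared as simple word-frequency counts. The word lists are invented placeholders for the purpose of the example, not the study's actual data.

```python
# Minimal sketch of a pre/post association comparison: count how often each
# association occurs before and after the activity. The lists below are
# illustrative placeholders only.
from collections import Counter

pre_words = ["religion", "Holocaust", "religion", "stereotype", "tradition"]
post_words = ["culture", "tradition", "people", "identity", "community"]

pre_counts, post_counts = Counter(pre_words), Counter(post_words)
for word in sorted(set(pre_counts) | set(post_counts)):
    print(f"{word}: pre={pre_counts[word]}, post={post_counts[word]}")
```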
It is worth expanding a little more on the importance of religion-associated and Holocaust-related words. For centuries, Jews defined their Jewishness through religion (Webber, 2003), which explains the strength of this concept. A ninety-minute activity can hardly erase this association. Regarding the Holocaust, students hear about Jews when they study twentieth-century history. Furthermore, the Holocaust is also widely discussed in the media and public discourse, as well as frequently represented in the cinema. The Haver Foundation was initiated partially based on the experience that students relate the word 'Jew' to the Holocaust, which was perceived as unfortunate.
Even if we cannot draw far-reaching conclusions from the associations of students in the two classes, they first serve as a good basis for comparison with the quantitative results, and second they illustrate well that activities are conducted in various ways. Therefore, their impact may be different as well. This is also true of students: not everyone reacts the same way.
Evaluation of the activity: A 'friendly discussion' (Class A)
Regarding the activities of Haver, everyone was satisfied. Participants mostly emphasized their interactive and personal nature. The former claim was supported in the survey as well, according to which relatively few students experienced boredom (5 percent said they were bored, and 17 percent said they were 'neither bored nor not bored'). It seems that students are in need of an informal style of education during which they can discuss and ask about topics which do not come up every day. Answers to the question 'what is this activity good for?' during the focus groups went hand in hand with the answers given in the survey: participants understood that the main goal was to expand their knowledge. Despite this, they remembered relatively little of what they had heard during the activity (even though they mentioned that informal education is more effective).
Most of the focus group participants mentioned the 'Identity' activity at home, but did not expand on the details. As someone from Class A summarized, 'I talked about the activity at home but not what it was about.' None of the classes had the chance to discuss it at school, which some of them missed. This might have been useful for helping them process the information and better remember what they had 'learnt.' Hearing someone called a 'Jew' or 'gay' triggered similar reactions: it bothered them, 'but what to do?' One student said that this is embedded in their vocabulary. These results indicate that the activity made them think, but did not inspire them to be proactive. In other words, it involved a passive rather than an active reaction.
The perception of Jewish identity
We also assessed the potential impact of the first intervention using quantitative tools, comparing control and experimental groups' views about Jewish identity based on pre-prepared answer categories. 14 Figure 3 presents how students defined Jewishness after the first intervention along the five identity elements, as discussed in relation to Haver's Identity activity. Among the various identity elements, we only found a significant difference between the experimental and the control group in terms of the religious component: Those who took part in the Identity activity found the role of practicing Judaism to be a less important component of being Jewish than the members of the control group. This result is in line with what we found based on the analysis of associations presented above.
One of the most important goals of our quantitative research was to measure the level of knowledge about Jewishness. In Table 2 we have summarized the items which were included in the core questionnaire (therefore, these questions were asked a total of three times). In our analysis, 'do not know' and incorrect answers were coded together, as we were primarily interested in the proportion of those who correctly answered these questions. Based on the seven elements above, we created an index to measure the level of knowledge about Jewishness. With this index we aimed to compress information and measure potential changes in students' level of knowledge about Jewishness.
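A minimal sketch of how such a knowledge index could be computed is given below, assuming each of the seven items is coded 1 for a correct answer and 0 otherwise ('do not know', incorrect, or missing), so the index ranges from 0 to 7. The item names, answer key, and file name are hypothetical.

```python
# Sketch of the seven-item knowledge index: "do not know" and incorrect
# answers are coded together as 0, correct answers as 1, and the index is
# the number of correct answers (0-7). Names below are hypothetical.
import pandas as pd

ITEMS = [f"know_q{i}" for i in range(1, 8)]                       # seven items
CORRECT = {f"know_q{i}": "correct_option" for i in range(1, 8)}   # hypothetical key

def knowledge_index(df: pd.DataFrame) -> pd.Series:
    # Any answer that is not the correct option -- including "do not know"
    # or a missing response -- scores 0, matching the coding described above.
    flags = pd.DataFrame(
        {item: (df[item] == CORRECT[item]).astype(int) for item in ITEMS}
    )
    return flags.sum(axis=1)

answers = pd.read_csv("questionnaire_wave1.csv")   # hypothetical file
answers["knowledge_index"] = knowledge_index(answers)
print(answers.groupby("condition")["knowledge_index"].describe())
```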
Comparing the rows in Table 3 (based on t-tests), it is obvious that the 'knowledge-index' only changed significantly within the experimental group, meaning that the students who took part in Haver's activities provided more correct answers after the first intervention than beforehand.
Second intervention -A walk in the ghetto
While the first section analyses the open-ended answers to the questionnaire (we may therefore call these semi-qualitative results), the second section focuses on the closed-ended answers.
Evaluation of the walk
Regarding the walk, most students enjoyed this activity -based on answers to the open-ended questions 15 incorporated into post-test II. It seems that they rarely participate in such alternative programs, and the method itself was an innovation for them (even if they had participated in the previous Identity activity). The results of the first question can be classified into two broader themes. One is related to learning, and the other to the activity. Students 'learnt a lot of new things.' Some mentioned Jewish references (i.e. that they had learnt about Jews or Judaism), while others were happy 'to get to know this part of the city.' Regarding the second theme, some comments were connected to the methodology, such as 'we were walking in the city and were not sitting in a classroom' and 'because we walked in the quarter, I could imagine better what they were talking about.' Other comments described the volunteers who led the activity: 'they were very informal,' and 'they answered our questions.' Two complained that they 'had to deal with the topic of Jews again,' and someone wrote that there was nothing (s)he liked. Negative comments were mostly related to the weather and the fact that students had to carry their bags. Answers to the third and fourth questions can be classified into two categories. One concerns the general information students acquired, captured in statements such as 'I got to know this part of Budapest better.' The other answers referred to specific, Jewish-related knowledge. For example, someone said they had learnt how Jews 'celebrate their weddings and how they eat' and 'I learnt about some Jewish traditions.' There were very few comments referring to the activity's attitude-framing nature: '[I brought home with me that] Jews are people just like us.' Someone else wrote '[I brought home with me that] I should be open to the world.' Only a very few students wrote negative answers.
In summary, students learnt a lot of new information and most of them enjoyed the activity. Once again, it seems that students were receptive to these innovative methods.
Shifting knowledge on Jewishness
Based on the core questionnaire, the same process of measurement (as presented in Table 3) was repeated to assess the level of knowledge about Jews and Jewish culture. The change in the level of knowledge about the latter was tested using paired-sample t-tests that compared the change among those students who had filled out all three questionnaires (Table 4). Comparison of the rows in Table 4 (based on t-tests) showed that the 'knowledge-index' had only changed significantly within the experimental group -taking into account the whole study period -meaning that students who had participated in Haver's activities provided more correct answers after the two interventions than before the experiment.
However, according to the linear regression models that were designed to measure the treatment effects more accurately by estimating the difference between the post- and pre-measures, 16 only the third model, which tested the effect of the second intervention (the Jewish walk), has significant explanatory power (see F-test statistics in Appendix 4). In line with this, we only found significant t-values (at the 0.05 level) in the model that tested the separate effect of the second intervention. In interpreting the results of the regression models, it is important to bear in mind that our data were not perfectly appropriate for regression due to the low number of observations (especially in the control group), as well as the 'quasi continuous' measurement level of the dependent variable (the level of knowledge about Jewishness was measured using a seven-item scale).
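The gain-score regressions described above can be sketched as follows. The formula and column names are hypothetical stand-ins for the study's variables, and the caveats about sample size and the quasi-continuous outcome apply equally to this illustration.

```python
# Minimal sketch of a gain-score regression: the difference between post-
# and pre-measures of the knowledge index is regressed on a treatment
# indicator. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("knowledge_waves.csv")  # hypothetical file
# expected columns: student_id, treated (0/1), index_pre, index_post

df["gain"] = df["index_post"] - df["index_pre"]
model = smf.ols("gain ~ treated", data=df).fit()
print(model.summary())  # reports the F-statistic and coefficient t-values
# With few observations and a 0-7 "quasi continuous" outcome, these
# estimates should be read as indicative rather than definitive.
```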
Associations
The associations were repeated before the dilemma café (in all classes). The results showed little improvement: the experimental groups wrote similar words to those they had written prior to the Identity activity, which means that the effect was indeed short term. As mentioned earlier, it is difficult to erase a centuries-old mindset. Comparison of the experimental groups with the control group shows that the word 'human' appeared more often in the former, which means that some students remembered the message that Jews are like other human beings.
Dilemmas
There were five dilemmas associated with different topics: in one of them, the issue of traditions and their importance came up (Topic 1); another focused on stereotypes (Topic 2); the third involved a dilemma between hate speech and free speech (Topic 3); and the fourth and fifth were both about inclusion versus discrimination, based on dietary restrictions (being kosher; Topic 4) and origin (being Jewish; Topic 5). For more about the dilemmas, see Appendix 3.
Part of the method aims at developing a culture of debate, which is otherwise not supported by the national school system. The results of the dilemma café show that this debate culture is still in its infancy: most of the time the volunteer (or moderator) had to initiate conversation by asking supporting questions. Sometimes this was due to the topic of the dilemma, which was not found interesting enough or was perceived as too obvious. However, we believe that most of the time it was because students are not used to this kind of setting. This claim is supported by the fact that several students said they were happy to participate in such activities, in which 'we could talk without having to do anything else but talk' (Class A).
The second issue is the participants' open-minded way of thinking (or the lack of it). A student from Class D said that 'calling someone a Jew does not mean the person is anti-Semitic.' This statement goes hand in hand with the results of the focus group: many students regard any kind of negative speech as 'normal' because of its embeddedness. In some groups, students explained stereotypes against Jews by saying 'Jews are indeed financial [sic]' 17 (Class D). In comparing these responses to the survey results and the general impression about these classes (which was positive), a contradiction arises: it appears that students try to comply with the expected behavior while thinking otherwise, or that these types of statements do not carry much weight for them. Calling someone a Jew, or holding stereotypical views, may still be regarded as something normal. Inclusion was viewed differently: many students believed it was fair if someone could not join a community easily, because 'when someone wants citizenship, there is also a procedure' (Class D), and 'we do not accept someone into a swimming team if the person cannot swim' (Class A). This represents a rather exclusive way of thinking. However, in this case the students were defending a hypothetical Jewish community that was not willing to accept a non-Jewish person (in line with the dilemma).
The third question concerned whether participants had acquired new information regarding Jews and Judaism. In most cases, the observers did not notice any special knowledge that could be attributed to the Foundation (which may also be a result of the shortcomings of the methodology that was applied). However, in four out of the fifteen experimental groups students explicitly mentioned an experience from one of the Haver activities. Furthermore, some additional sentences could be the result of their experience, such as 'everyone can be what (s)he considers themselves to be' (Class A), referring to personal choice as a form of self-identification. The answers in the control groups were very similar to the rest of the responses; therefore no conclusions can be drawn from this point of view.
Conclusion
Our research, which uses an innovative method, involved implementing a pilot experiment to see whether it is possible to measure the impact of the Haver Foundation's activities. As the Foundation has multiple goals, the results are presented in line with these. Regarding the 'knowledge factor,' we can say that the goal was partially reached: while participants knew more about Jews and their lives, many students still related Jewishness to religion, as opposed to having a wider understanding. From their responses, it is clear that even after several activities they had difficulty saying words such as 'Jew' or 'tolerance' out loud; they rather said 'this topic.' The second focus is enhancing open-mindedness. Many students spoke in a problematic way, saying things such as 'they [the Jews] are totally normal despite what they believe in.' In order to change their way of speaking and thinking, a few ninety-minute activities are not enough: teachers should also deal with these issues by giving feedback at the end of each activity. Last, the Foundation also intends to foster a culture of debate among high-school students. Based on the last activity, it seems that the interventions were not enough. However, it is also the task of teachers to (want to) develop this skill.
The potential impact that can be achieved by such a short series of informal education programs is limited, as is this study. The most important result of our research is that in the experimental group students' views changed to some extent, both in terms of the perception of Jewishness and their level of knowledge about the topic. In line with Váradi (2013), we think that with a limited number of interventions the Foundation will struggle to change students' views, but the former represents a good starting point for raising awareness about tolerance and minority issues. Moreover, the main methodological limitations of our study are the following: (i) the selection of the study school was based on convenience, therefore the external validity of our study is low; (ii) the uneven and not perfectly randomized distribution of the control and experimental subjects does not let us draw far-reaching conclusions, even using the statistically significant results of the tests we employed; (iii) finally, even with the process of repeating the measurement, the long-term impacts of the interventions remain unknown. Therefore, our future goal is to carry out similar, but better designed experimental research based on this pilot study in other high school(s) which are more 'typical' in terms of the attitudes of teachers and the socio-economic background of students.
Outline of the interviews with the form teachers:
1. Introduction to the research
2. Personal information about the teacher (family, education, career)
3. Professional questions (mostly about teaching, educational philosophy)
4. Attitudes (general attitudes towards minorities, acquaintances, about the class)
5. About his/her class (minorities, attitudes, in-class methods for handling conflict, sensitivity in the class about these issues, details about parents, etc.)
6. Other: is there anything he/she wants to talk about?
Topics for the dilemma café
Topic 1: The importance of traditions A girl is getting married to a boy and she just found out that her boyfriend does not want a Jewish (religious) wedding, unlike her. She loves him very much, but her parents also insist on a religious wedding. What should she do?
Topic 2: Stereotypes Two friends are chatting. One of them tells the other that she has fallen in love with a Jewish boy, and she is confronted with stereotypes by her friend, such as 'Jews are rich and tricky' and 'they own the media.' Are the statements anti-Semitic?
Topic 3: Free speech versus hate speech A Scottish YouTuber puts up a video in which he is teaching his girlfriend's dog the Hitler salute. The court finds him guilty. His argument is that he only wanted to prove a point by showing his girlfriend that her dog is not cute. Some people defend him in the name of free speech. Is this hate speech or free speech?
Topic 4: Inclusion and discrimination A new family arrives in Hungary from the US, and the parents want their child to eat according to kosher rules. The school refuses this request, saying that it cannot cater to the parents' whims. The parents argue that other kids are lactose intolerant and their requests are accommodated. Who is right?
Topic 5: Inclusion and discrimination Bruno recently moved to Pest and became friends with someone who goes to a synagogue. The community in the synagogue is organizing a trip to Israel, but Bruno cannot go because he is not Jewish. He could go only if he converted. He is instead considering going to another community where he is not discriminated against. Is he right?
[Appendix 4 table fragment: F-statistic of the regression model for the second intervention (t1 = identity, t2 = Jewish walk): 5.054 (p = 0.028)**; ***p < 0.01; **p < 0.05; *p < 0.1.] | 2020-04-30T09:02:13.396Z | 2020-03-20T00:00:00.000 | {
"year": 2020,
"sha1": "1277488fe96b0d68c2717104d1c2f1862f61d55f",
"oa_license": "CCBY",
"oa_url": "https://intersections.tk.mta.hu/index.php/intersections/article/download/575/273",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e44d4a456ac57e3c9b79173d6787f84cba653b98",
"s2fieldsofstudy": [
"Sociology",
"Education"
],
"extfieldsofstudy": []
} |
221142108 | pes2o/s2orc | v3-fos-license | Comprehensive analysis of the mechanism and treatment significance of Mucins in lung cancer
Aberrant expression of mucin proteins plays a complex and essential role in cancer development and metastasis. Members of the mucin family have been intimately implicated in lung cancer progression, metastasis, survival and chemo-resistance. Mucin proteins are involved in all stages of lung cancer progression, interacting with many receptor tyrosine kinase signaling pathways and mediating cell signals for tumor cell growth and survival. Mucins have thus been considered indicators of negative prognosis and desirable therapeutic targets in lung cancer. In this review, we comprehensively analyze the role of each member of the mucin family in lung cancer by combining open-access database analysis with cutting-edge information about these molecules.
Background
Lung cancer ranks as the most common cause of cancer death worldwide. Every year, about 1.8 million people are diagnosed with lung cancer, and 1.6 million people die from the disease [1]. Approximately 85% of patients have a group of histological subtypes collectively known as non-small cell lung cancer (NSCLC), of which lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the most common [2]. Although significant advances have recently been made in driver gene research, biomarker screening, and personalized therapy (precision medicine) for lung cancer, the 5-year relative survival rate for lung cancer is 19% overall (16% for men and 23% for women): 24% for non-small cell and 6% for small cell tumors [3]. However, several challenges remain: new driver gene alterations must be identified to expand the population that benefits from targeted therapies; the mechanisms responsible for resistance to targeted therapy must be understood so that resistance can be prevented or overcome; and better predictors of responses to immunotherapy, new drugs, and rationally designed drug combination therapies need to be screened [4].
Mucins are classified into two major categories depending on their structure: membrane mucins and secreted mucins. The membrane mucins consist of eleven members -MUC1, MUC3A, MUC3B, MUC4, MUC12, MUC13, MUC15, MUC16, MUC17, MUC20 and MUC21 -while the secreted mucins comprise seven members, which can be further subdivided into gel-forming mucins (MUC2, MUC5AC, MUC5B, MUC6, MUC19) and non-gel-forming mucins (MUC7, MUC8). All mucin members have at least one mucin-like domain containing a high proportion of tandem repetitive structures of prolines, threonines and serines (which form the PTS domain). The PTS domain of the mucins is extensively glycosylated at the threonine and serine residues through GalNAc O-linkages. The two kinds of mucins have different functions in the human body. The secreted mucins line the ductal surfaces of epithelial organs and serve as a physical barrier, while the transmembrane mucins are primarily located on the apical membrane of epithelial cells, where they can play a role in cell signaling. Both protect the integrity of epithelial cells from different environmental stresses: for example, they protect against degradative enzymes by forming a physical, chemical and immunological barrier, and they interact with many receptor tyrosine kinases to mediate cell signals [5]. Any alteration of mucin expression or glycosylation pattern will significantly affect tumor cell growth, differentiation and survival, which has led mucins to be regarded as potent cancer-inducing molecules [5][6][7].
In this review, we provide an overview of the mucin family and discuss the role of each mucin member in tumorigenesis and metastasis, along with recent advances in tumor research. We concentrate on the importance of mucin proteins in cellular signaling pathways and their role in targeted and immune therapy of lung cancer.
Expression and mutation landscape of Mucins in NSCLC
The TCGA-GTEx mixed data cohort, which contains 1410 lung cancer and normal lung tissue samples, was downloaded from UCSC Xena [8][9][10] to analyze mucin expression (Fig. 1a). According to our analysis, MUC1, MUC2, MUC3AC, MUC4, MUC5AC, MUC5B, MUC6, MUC13, MUC15, MUC16, MUC20, MUC21 and MUC22 were elevated relative to normal lung tissue, while MUC7 was decreased in LUAD. In LUSC, however, MUC1, MUC3AC, MUC5AC, MUC6, MUC7, MUC15, MUC17 and MUC21 showed lower expression compared with normal lung tissue, while MUC4, MUC13, MUC16 and MUC20 were increased.
Fig. 1 Mucin expression and mutation in non-small cell lung cancer. a. mRNA expression of mucins in lung adenocarcinoma and lung squamous cell carcinoma. b. mutation rate of mucins in lung adenocarcinoma and lung squamous cell carcinoma. * P < 0.05, ** P < 0.01, *** P < 0.0001
These results somewhat contradict existing research showing that both MUC1 and MUC5AC have high protein expression in lung carcinoma. Lappi-Blanco et al. summarized MUC1 expression in lung cancer and found that high expression of MUC1 predicts poor survival in the majority of studies [11]. In particular, Guddo et al. and Woenckhaus et al. demonstrated that MUC1 expression was associated with poor prognosis in squamous cell cancer patients [12,13]. Regarding MUC5AC, Yu et al. identified MUC5AC as overexpressed in stage I/II NSCLC patients [14,15]. In addition, both MUC1 and MUC5AC show higher expression in adenocarcinomas compared with squamous cell carcinomas [14,16]. However, our analysis based on public databases indicated that both MUC1 and MUC5AC mRNA are overexpressed in LUAD but decreased in LUSC. Hence, it is necessary to study the function of mucins in LUAD and LUSC separately.
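As an illustration of this kind of tumor-versus-normal comparison, the sketch below tests mucin expression differences in a UCSC Xena-style expression matrix using a nonparametric test. The file names, column labels, and gene list are assumptions for the example; the authors' exact pipeline is not specified in the text.

```python
# Sketch of a tumor-vs-normal expression comparison, assuming a
# log2(TPM+1) matrix (genes x samples) and a phenotype table whose index
# matches the expression columns and whose "group" column marks samples
# as "LUAD", "LUSC", or "NormalLung". All names are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

expr = pd.read_csv("tcga_gtex_lung_expression.tsv", sep="\t", index_col=0)
pheno = pd.read_csv("tcga_gtex_lung_phenotype.tsv", sep="\t", index_col=0)

MUCINS = ["MUC1", "MUC4", "MUC5AC", "MUC5B", "MUC16", "MUC21"]
normal = pheno.index[pheno["group"] == "NormalLung"]

for subtype in ["LUAD", "LUSC"]:
    tumor = pheno.index[pheno["group"] == subtype]
    for gene in MUCINS:
        if gene not in expr.index:
            continue  # skip genes absent from this matrix
        u, p = mannwhitneyu(expr.loc[gene, tumor], expr.loc[gene, normal])
        direction = ("up" if expr.loc[gene, tumor].mean()
                     > expr.loc[gene, normal].mean() else "down")
        print(f"{subtype} {gene}: {direction} in tumor, p={p:.2e}")
```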
Effects of Mucins on cellular signaling pathways
The overexpression of MUC1 causes many downstream effects closely related to poor clinical outcomes (Fig. 2). Giatromanolaki et al. examined the correlation between VEGF and MUC1 expression in 199 NSCLCs and demonstrated that MUC1 expression is linked to high VEGF expression [20]. Overexpression of MUC1 facilitates angiogenesis in NSCLC by activating the Akt and ERK signaling pathways and thereby up-regulating vascular endothelial growth factor (VEGF) [21]. Gao et al. demonstrated that knockdown of MUC1 could activate apoptosis, inhibit cell proliferation and metastasis, and sensitize cells to cisplatin treatment by modulating the STAT3/Akt, SRC/FAK and Bcl-XL/Bcl-2 signaling pathways in NSCLC [22]. Besides, MUC1 can interact with ERα and ERβ within the nucleus to inhibit the proliferation of LUAD cells [23]. MUC1 is also involved in NF-κB signaling by forming a complex with NF-κB/p65; the complex is recruited directly to the promoter of CD274, driving PD-L1 transcription [24]. Another complex, MUC1/β-catenin/TCF4, binds directly to the MYC promoter and promotes the recruitment of p300 histone acetylase (EP300), which can induce histone H3 acetylation and MYC gene transcription, in turn driving expression of MYC target genes [25]. MUC1-C induces NF-κB/p65 chromatin occupancy of the LIN28B first intron and activates LIN28B transcription, consequently activating the LIN28B → let-7 → HMGA2 ceRNA axis in NSCLC and thereby promoting EMT and a stemness phenotype [26]. The N-glycosylated MUC1-C restrains miR-322 expression and thereby upregulates galectin-3; subsequently, galectin-3 forms a bridge between MUC1 and EGFR that physically integrates MUC1 with EGFR signaling [27]. Moreover, MUC1 plays a great role in acquired chemoresistance. Xu et al. demonstrated that knockdown of MUC1 could significantly increase the apoptotic toxicity of cisplatin, doxorubicin and TRAIL in apoptosis-resistant lung cancer cells, and that a miR-551b/catalase/ROS axis gives rise to MUC1 overexpression following EGFR-mediated activation of the cell survival cascade involving Akt/c-FLIP/COX-2 [28]. In PTX-resistant lung cancer cells, overexpression of MUC1 promotes proliferation and stemness by regulating PI3K/Akt signaling and cancer stemness biomarkers [29]. Similarly, MUC4 suppresses lung cancer cell proliferation through down-regulation of cell cycle-related proteins and GSK3β/p-Akt, and regulates invasion and metastasis via FAK activity and EMT markers [30]. MUC5AC interacts with integrin β4 to recruit and phosphorylate FAK (Y397), activating downstream signaling pathways and leading to lung cancer cell migration [31]. Wei Han et al. demonstrated that knockdown of MUC5AC could significantly downregulate PCNA, a well-known proliferation biomarker, as well as the metastasis biomarkers MMP-2 and MMP-9 [15]. MUC16 can promote lung cancer progression, metastasis, and chemoresistance to cisplatin and gemcitabine via the regulation of TSPYL5 activity through the JAK2/STAT3/GR axis [32]. MUC16 mutations are associated with MUC16 mRNA and protein up-regulation, and furthermore promote proliferation, enhance migration and invasion, and increase cisplatin resistance in lung cancer [33,34].
Regulation of the expression of Mucin family genes
Various transcription factors and signaling molecules regulate MUC1 gene expression in airway epithelial cells and lung cancer cells (Fig. 3). Sp-1 has been demonstrated to modulate MUC1 expression by specifically binding the MUC1 promoter between −99 and −90 in lung cancer cells [35]. Hypoxia activates HIF-1α, which interacts with the MUC1 promoter and enhances MUC1 expression [36]. Downregulation of 14-3-3ζ can completely abolish the carcinogenic potential of MUC1 through a MUC1/NF-κB feedback loop [37]. Fuzhengkangai decoction regulates MUC1 expression through Akt-mediated inhibition of p65 [38]. Besides, STAT3 and DPP9 are two upstream regulators of MUC1 that can regulate MUC1 expression at both the mRNA and protein levels [22,39]. EGF and TGF-α induce MUC2 and MUC5AC expression through the EGFR/Ras/Raf/ERK signaling cascade. In addition, Sp-1 and Sp-3 regulate MUC2 and MUC5AC expression by binding their promoters [40]. PRDM16-ΔPRD regulates transcription of MUC4 by regulating the histone modifications of its promoter [41]. SPDEF regulates the expression of MUC5AC and MUC5B by binding the upstream enhancer regions of MUC5AC and MUC5B [42]. Besides, two long non-coding RNAs have been reported to be involved in regulating mucins. The SNHG16-miR-146a axis stimulates MUC5AC expression in NSCLC [15]. MUC5B-AS1, a novel long non-coding antisense transcript, promotes cell migration and invasion by forming an RNA-RNA duplex with MUC5B, thereby increasing MUC5B expression levels in lung adenocarcinoma [43].
The importance of Mucin for the tumor immune microenvironment
Recently, the cancer immune microenvironment has proved to be of great significance for immunotherapy. Several studies have reported that elevated MUC1 in tumor cells is related to the evasion of immune recognition and destruction in NSCLC (Fig. 2). MUC1 plays a key role in the TAM-induced generation and progression of lung cancer stem cells (LCSCs) by regulating NF-κB, CD133, and Sox2 [44]. Targeting MUC1-C drives the downregulation of PD-L1, induces IFN-γ, and leads to enhanced effector function of CD8+ tumor-infiltrating lymphocytes (TILs) in the tumor microenvironment [45].
Mucins and therapeutic perspectives
Various studies have demonstrated that MUC1 plays an important role in drug resistance and targeted therapy of lung cancer, which makes it an attractive target for lung cancer therapy. The MUC1 inhibitors GO-201, GO-202 and GO-203 can bind directly to the cytoplasmic domain of MUC1 and thereby weaken MUC1-mediated cell proliferation [48]. GO-203 blocks homodimerization of MUC1-C and reverses the carcinogenic effects of MUC1 in NSCLC [49]. Several studies have reported how GO-203 works in NSCLC. GO-203 inhibits NSCLC cell growth and survival by preventing the interaction between MUC1-C and PI3K-p85, and suppresses constitutive phosphorylation of Akt and its downstream effector, mTOR [48]. Furthermore, GO-203 also plays an important role in overcoming resistance to EGFR-TKI treatment.
Silencing MUC1-C in H1975/EGFR(L858R/T790M) cells suppresses the AKT signaling pathway and inhibits lung cancer cell proliferation [50]. Combining GO-203 with afatinib works synergistically to inhibit the growth of NSCLC cells with EGFR(T790M) or EGFR(delE746-A750) mutants [50]. Combining GO-203 with JQ1, which inhibits MYC expression, shows synergistic effects in inhibiting the growth of NSCLC tumor xenografts [25]. Silencing MUC1-C in KRAS(G12S) and KRAS(Q61H) mutated NSCLC cells results in downregulation of AKT and MEK signaling and represses the ZEB1/miR-200c loop, thereby reversing the EMT phenotype, decreasing self-renewal and attenuating the proliferation of KRAS-mutant NSCLC cells [51]. Most of all, treatment with GO-203 disrupts MUC1-C → PD-L1 signaling and relieves the suppression of CD8+ T cell activation [45]. Combining GO-203 with immune checkpoint inhibitors may therefore be a potential approach for NSCLC therapy. MUC1 also serves as a tumor-associated antigen (TAA), playing an important role in tumor immunotherapy. Two vaccines targeting MUC1 in NSCLC are in clinical trials. TG4010 is an immunotherapeutic vaccine based on Modified Vaccinia virus Ankara (MVA) encoding the human tumor-associated antigen MUC1 and human IL-2. In a Phase II study of TG4010, 65 MUC1-positive patients were treated with TG4010 in combination with cisplatin and vinorelbine as first-line chemotherapy. The 65 patients were divided into two groups: Group 1 received a TG4010-chemotherapy combination, while Group 2 followed a sequential protocol in which TG4010 was first administered as monotherapy until a partial response was obtained and then combined with chemotherapy. The median overall survival (OS) was 12.7 months and 14.9 months, respectively [52]. In the study of Quoix et al. (NCT00415818), 148 patients with advanced (stage IIIB or IV) MUC1-positive NSCLC were enrolled in parallel groups: patients in the experimental group received TG4010 plus cisplatin and gemcitabine, while the control group received the same chemotherapy alone. The 6-month progression-free survival (PFS) was 43.2% in the TG4010 plus chemotherapy group and 35.1% in the chemotherapy-alone group [53]. In another study by Quoix et al. (NCT01383148), 222 patients were recruited and randomly allocated equally (111 per group) to TG4010 plus chemotherapy or placebo plus chemotherapy. The results indicated that median PFS was 5.9 months in the TG4010 group and 5.1 months in the placebo group [54]. Both of these studies demonstrated that TG4010 plus chemotherapy improves PFS and OS outcomes in MUC1-positive patients. Recently, a study of 78 patients from the TIME study, all carrying the HLA-A02*01 haplotype, indicated that TG4010 treatment broadens CD8+ T cell responses against MUC1 as well as other non-targeted TAAs [55]. Therefore, TG4010 can be used in combination with other targeted immunomodulators to maximize response rates and clinical benefits. Sequential treatment with anti-PD-1/PD-L1 after treatment with TG4010 (NCT02823990) showed better overall survival in a mouse model [56]. Moreover, two clinical trials (NCT02823990 and NCT03353675) are studying the combination of TG4010 and nivolumab in NSCLC patients (Table 2). Tecemotide, also known as L-BLP25 or Stimuvax, is designed to elicit an antigen-specific cellular immune response against MUC1, one of the first TAAs identified as being recognized by human tumor-specific T cells. Palmer M et al.
performed a phase 1 study of L-BLP25 in patients with stage IIIB or IV NSCLC, which showed that L-BLP25 was well tolerated [57]. Later, Charles Butts et al. conducted a Phase IIB trial in stage IIIB or IV NSCLC patients, in which patients were treated with either L-BLP25 plus best supportive care (BSC) or BSC alone. The 3-year follow-up results demonstrated that median survival time was longer in patients treated with L-BLP25 plus BSC than with BSC alone, and patients with stage IIIB locoregional (LR) disease showed the greatest difference [58,59].
Conclusions
Although mucins in lung cancer are not well studied because of their high molecular weight, they still appear to play a significant role in lung carcinogenesis. Mucins, especially MUC1 and MUC16, serve as important diagnostic markers widely used in clinical practice owing to their unique expression patterns and functions, and their therapeutic potential in lung cancer deserves further study. Moreover, the associations among different mucins add a further layer of complexity to our understanding of their functions in lung carcinogenesis.
MUC1-targeted vaccines and small-molecule drugs are now in clinical studies for treating lung cancer. However, the efficacy of these vaccines has rarely met expectations, which makes it necessary to develop new drugs against MUC1 or other mucins. Moreover, MUC16, MUC21, and MUC5B, which show high mutation rates, high mRNA expression, and close relationships with tumor immune infiltration, may be promising targets for lung cancer targeted therapy and immunotherapy.
In addition, further research on the role of mucins in lung cancer with different mutational backgrounds, such as KRAS, EGFR, and BRAF, is necessary to guide combination therapy and overcome drug resistance in lung cancer. | 2020-08-18T13:37:43.035Z | 2020-08-17T00:00:00.000 | {
"year": 2020,
"sha1": "79ea063ca19388d1860a57fb7305e2f3363dee22",
"oa_license": "CCBY",
"oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-020-01662-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e926c38399c9fb8f6df90251fcf06241eae72ce",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239651432 | pes2o/s2orc | v3-fos-license | Southernmost records of Pachyramphus marginatus (Passeriformes: Tityridae) and first observation for Santa Catarina State, southern Brazil
The Black-capped Becard (Pachyramphus marginatus) has two geographically isolated subspecies, including the Atlantic Forest subspecies that is distributed from Pernambuco State to Paraná State. Here we report the first observation of the species in Santa Catarina State, southern Brazil. On 12 November 2019 an adult male of P. marginatus was observed in the municipality of São Francisco do Sul, on the northeastern coast of Santa Catarina, during an inventory for the creation of a protected area. The species was also observed on three other occasions in the same locality. These are the first known records for Santa Catarina and extend the known distribution range of this species 40 km southward. We also discuss some explanations for the records.
The Black-capped Becard, Pachyramphus marginatus (Lichtenstein, 1823) (Passeriformes: Tityridae), is a neotropical bird species with two geographically isolated subspecies, one in the Amazon region (P. m. nanus) and another in the Atlantic Forest (P. m. marginatus) (SICK, 1997; BIRDLIFE INTERNATIONAL, 2020; MOBLEY, 2020). Both subspecies are currently evaluated as Least Concern (BIRDLIFE INTERNATIONAL, 2020). According to BirdLife International (2020), the Atlantic Forest subspecies is distributed from Pernambuco State to São Paulo State (Figure 1). However, the species has been recorded further south in the last decades. Ricardo Parrini (apud SICK, 1997, p. 632) reported the species south of this range, and it was subsequently recorded in Paraná State (HORI, 2011); recordings of these observations are available on the WikiAves website (wikiaves.com.br; vouchers WA479543, WA488573). After these initial observations, several records were made by ornithologists and birdwatchers in Paraná State, including in the municipalities of Guaraqueçaba, Antonina, Morretes and Guaratuba (Figure 1). All these records are available in online databases, such as WikiAves and eBird (ebird.org), but none have been formally published. To date, the observation by Carlos Gussoni in Guaratuba Bay, in the municipality of Guaratuba (GUSSONI, 2013), is recognized as the southernmost occurrence of the species population in the Atlantic Forest. Here, we present new observations of P. marginatus that extend its distribution southward and, for the first time, document its presence in Santa Catarina State, southern Brazil.
Records of Pachyramphus marginatus for Santa Catarina
On 12 November 2019, an individual of P. marginatus was heard by FBF on a dirt road crossing a second-growth forest area in the Saí district (Vila da Glória) (26°12'01.9"S, 48°41'51.8"W; approximately 120 m a.s.l., Figure 1), in the continental portion of the municipality of São Francisco do Sul, on the northeastern coast of Santa Catarina State. The individual, an adult male, was attracted with playback, voice-recorded, and photographed (Figure 2). The species was found in the same locality for three consecutive days, and only a single male was observed on all occasions. Another observation of the species was made by FBF and GW in the same place between 15 and 18 January 2020; at this time, a female was observed together with a male. The female was not photographed at the time, but other birdwatchers documented it after the discovery (WikiAves.com.br: WA4217064, WA4096098 and WA4064267). The pair was again observed on 11 February 2020 during a bird trip, and the male was recorded alone on 30 July 2020. The field survey was part of a bird inventory carried out to help create a protected area in the Saí district. Although we searched for the species in other places in the district (e.g., by using playback), to date it has only been found in the same locality.
These are the first records for Santa Catarina State and the southernmost records for the species. They are approximately 160 km south of the species' distribution limit cited by BirdLife International (2020) and 40 km from the closest record in Paraná State (i.e., Guaratuba). Moreover, since the species does not migrate like other species of Pachyramphus (SOMENZARI et al., 2018), the present observations, made over nine months spanning both the breeding (spring-summer) and non-breeding (autumn-winter) seasons, strongly suggest that the pair is settled in the locality.
Several hypotheses can be suggested to explain these records. First, the recent increase in the number of ornithologists and birdwatchers in Santa Catarina State could have increased the probability of detecting the species. However, a high number of ornithological studies (n = 50) have already been conducted on the northeastern coast of the state, in which 474 bird species were recorded (GROSE et al., 2019). Moreover, several ornithologists had already explored this region looking for rare species, such as Kaempfer's Tody-Tyrant (Hemitriccus kaempferi) and others (e.g., BARNETT et al., 2000;NAKA et al., 2000). The region is also home to the Volta Velha Private Reserve, a place that has been highly visited by birdwatchers for at least the last two decades. Therefore, the hypothesis of a recent increase in the number of observers is weakly supported considering the high number of researchers that have conducted studies in the region.
Another possible explanation is that the distribution of the species recently expanded southward. Factors that drive shifts in species distributions can be related to, for example, changes in land use or abiotic conditions (e.g., climate) that allow the establishment of a species in previously unsuitable areas (PECL et al., 2017; GUO et al., 2018). It is known that species distributions are changing worldwide due to climate change and that each species responds at a different rate (PECL et al., 2017). Parmesan and Yohe (2003), for example, indicated an average poleward range shift of 6.1 km per decade in a global meta-analysis including 99 species of birds, butterflies, and plants. The rate would be higher for P. marginatus: taking the first observation in the municipality of Guaratuba in 1998 and the present records 40-50 km further south, the shift amounts to roughly 19-24 km per decade. However, a shift in the distribution range of P. marginatus is, for now, merely speculative and needs to be more thoroughly investigated to increase what is known about the species in Santa Catarina State (including abundance data). Since the coastal region of Paraná and northern Santa Catarina seems to be the edge of the species' range, it is possible that the species was already present in the area but in very low abundance. Species tend to be less abundant at their distribution limits due to limiting environmental variables in these locations, as seen in some plant species (CUMMING, 2002; ARUNDEL, 2005; ANGERT, 2009). Thus, a range expansion would possibly increase population numbers at the former edges of a species' distribution.
Although several factors suggest that the species is indeed rare or new in the region, it is important to note some similarities the Black-capped Becard has with other sympatric species that can cause birdwatchers and/or ornithologists to make mistakes in the field. Morphologically, it is very similar to the White-winged Becard (Pachyramphus polychopterus), the latter having darker underparts and lacking light-gray spots on the lores (MOBLEY, 2020), and vocally it is very similar to the Greenish Schiffornis (Schiffornis virescens). Further, these two other species are common in the region. Therefore, we recommend that researchers and birdwatchers pay special attention to these species and their peculiarities when confirming new records of the Black-capped Becard.
Acknowledgements
We thank the municipality of São Francisco do Sul for the funding, and the Núcleo de Educação Ambiental (NEAmb/UFSC), Universidade Federal de Santa Catarina (UFSC), and Univille for their great help with the logistics of the fieldwork. | 2021-09-25T15:44:08.009Z | 2021-08-26T00:00:00.000 | {
"year": 2021,
"sha1": "ee32316e1172234593d5da4aa5ebbbc64122ab2e",
"oa_license": "CCBY",
"oa_url": "https://periodicos.ufsc.br/index.php/biotemas/article/download/80059/47197",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dd2bdb1f906513900695733eba5c2f99294a7e71",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
244830625 | pes2o/s2orc | v3-fos-license | Cellulose Nanoparticles Prepared by Ionic Liquid-Assisted Method Improve the Properties of Bionanocomposite Films
Bionanocomposites have garnered wide interest from the packaging industry as a biocompatible alternative to non-biodegradable petroleum-based synthetic materials. This study presents a simple and eco-friendly alternative to produce cellulose nanoparticles using a protic ionic liquid, and the effects of their incorporation in cassava starch and chitosan films are evaluated. Bionanocomposite films are prepared using the solvent casting method and are characterized using X-ray diffraction, Fourier transform infrared spectroscopy, zeta potential, thermogravimetric analysis, and transmission electron microscopy. The achieved yield of cellulose nanoparticles is 27.82%, and the crystalline index is 67.66%. The nanoparticles’ incorporation (concentration from 0.1 to 0.3% wt) results in a progressive reduction of water vapor permeability up to 49.50% and 26.97% for starch and chitosan bionanocomposite films, respectively. The starch films with 0.1% cellulose nanoparticles exhibit significantly increased flexibility compared to those without any addition. The nanoparticles’ incorporation in chitosan films increases the thermal stability without affecting the mechanical properties. The cellulose nanoparticles obtained using protic ionic liquid, as an alternative pathway avoiding the classic acid hydrolysis can be a simple, sustainable, and viable method to produce bionanocomposites with tailored properties, useful for applications in the packaging industry.
Introduction
Synthetic polymers have garnered significant attention from the research community and general society due to their unique properties, such as light weight, low cost, high surface area, and excellent mechanical properties [1].
However, despite being very efficient, most petroleum-based polymers for packaging applications are non-biodegradable and harmful to the environment [2,3], which has prompted their replacement with benign equivalents, particularly biomaterials derived from renewable sources [2].
Cellulose, starch, chitosan, carrageenan, and agar are good examples of polysaccharides (with thermoplastic properties) obtained from renewable sources with potential application as food packaging materials, including edible coatings [3][4][5]. Among the various natural polysaccharides, starch is the most abundant and economical reserve polysaccharide derived from plants, whose properties depend directly on the morphology, composition, pH, and other factors from the plant source. Starch granules are mainly composed of two types of glucose polymers: amylose and amylopectin [5,6].
Chitosan is the second most abundant biopolymer in nature, obtained from the deacetylation of chitin (from the exoskeletons of arthropods and the cell walls of yeasts and fungi). Several factors, such as source, temperature, alkali concentration, and incubation time, can affect the properties of chitin [3]. The predominant difference between chitosan and other polysaccharides lies in its antimicrobial properties against bacteria, yeasts, and fungi [3,7,8]. Despite their excellent properties, chitosan and starch films exhibit high water vapor permeability, which directly influences the film properties and consequently limits their applications [9,10].
Cellulose nanoparticles (CNPs) are promising functional materials with excellent mechanical characteristics that can be used to improve the properties of these materials because of the strong interactions of crystalline cellulose with the polymers, which increase the hydrophobicity of the films, thus improving the barrier properties [9,10]. However, CNPs are conventionally obtained via acid hydrolysis using mineral acids such as hydrochloric and sulfuric acids, which are non-biocompatible, potentially hazardous to handle, and non-ecofriendly [11]. Therefore, it is crucial to develop more biocompatible and environment-friendly processes using "green" solvents to produce CNPs, which can guarantee the sustainability and biocompatibility of the nanocomposites [11]. Thus, the search for more biocompatible, low-cost, and environment-friendly solvents, such as ionic liquids (ILs), bio-solvents, and deep eutectic solvents (DES), is important [12]. Among these, ILs, defined as salts that are liquid at temperatures below 100 °C, have emerged as one of the most fascinating classes of biocompatible solvents for cellulose dissolution and CNP production [13][14][15], with several advantages, such as low volatility, high solvation capacity, and excellent thermal and chemical stabilities [16].
Numerous studies [13,17-19] have been conducted on the production of CNPs using ILs, demonstrating that the obtained nanoparticles show no structural changes and that the ILs can be recycled and reused. Likewise, bionanocomposites incorporating CNPs have been extensively investigated [2,3,10,20,21]. However, few studies have reported the use of CNPs produced with IL solvents to prepare chitosan and starch bionanocomposite films with tailored properties, thereby overcoming the drawback of CNP production by acid hydrolysis.
Therefore, this study aimed to produce CNPs using a protic IL and evaluate the effect of their incorporation in starch and chitosan films. The morphology, mechanical, and thermal properties of the bionanocomposites are also discussed.
Preparation of CNP-DM Using PIL
CNPs were prepared using the PIL, following the modified method described by Gonçalves et al. [19]. Briefly, MCC was mixed with [DMAPA][Hex] at a solid/liquid ratio of 1:9 (w/w) and then homogenized at 80 °C and 2800 rpm for 3 h using a magnetic stirrer hot plate (C-MAG HS7, IKA, Germany). After homogenization, the tubes containing the solution were collected and centrifuged (Eppendorf Centrifuge 5702R, Germany) three times at 2800×g and 25 °C for 30 min to recover the PIL. The remaining solids in the tubes were washed with deionized water and centrifuged (2800×g, 25 °C, 30 min) until pH neutralization. The thoroughly washed solid (CNP-DM) was then recovered and lyophilized (Lyophilizer L101, Liotop, Brazil) for 24 h.
Characterization of CNP-DM
To evaluate the CNP-DM properties, different characterizations were performed according to the following protocols. To evaluate the yield [22], CNP-DM was dried in an oven (NI 1510, Nova Instrument, Brazil) at 105 °C until constant weight, and the yield (wt%) was calculated as the ratio between the weight of the dried CNP-DM (w_f) and the initial weight of CNP-DM (w_i) in a 10 mL aliquot of the suspension, as described by Eq. 1:

Yield (wt%) = (w_f / w_i) × 100 (1)
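As an illustration, Eq. 1 can be evaluated with a few lines of Python; the function name and the aliquot masses below are ours, chosen so that the result reproduces the mean yield reported later (27.82%), not measured values from the study.

```python
def cnp_yield_wt_percent(w_f: float, w_i: float) -> float:
    """Yield (wt%) per Eq. 1: dried CNP-DM mass over initial mass."""
    return (w_f / w_i) * 100.0

# Hypothetical aliquot masses in grams (illustrative only).
print(cnp_yield_wt_percent(w_f=0.2782, w_i=1.0))  # -> 27.82
```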
Zeta potential was measured to evaluate the dispersion of CNP-DM. A cuvette containing 4 mL of CNP-DM suspension (0.3%, w/w) was analyzed at 25 °C using a Zetasizer (Nano ZS, Malvern, UK).
The freeze-dried samples were analyzed using an X-ray diffractometer (D8 Advance, Bruker, USA) coupled with a high-speed SSD 160 data detector (USA) with CuKα1 radiation (λ = 1.54 Å/8.047 keV at 40 kV (target voltage) and 25 mA). The scanning range and rate were 5°-50° and 1° min−1, respectively. The relative crystallinity index (RCI) and average crystal size were calculated according to previously described procedures [19,23], following Eq. 2:

RCI (%) = ((I_002 − I_am) / I_002) × 100 (2)

where I_002 is the maximum intensity of the (002) diffraction peak at 2θ = 22° and I_am is the baseline intensity at 2θ = 18°.
The average sample crystallite size (w) perpendicular to the (002) plane was calculated by the Scherrer equation (Eq. 3):

w = Kλ / (β cos θ) (3)

where θ is the diffraction angle, K = 0.94 is the correction factor, λ = 0.154 nm, and β is the corrected angular width in radians at half-maximum intensity of the (002) peak.
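The two XRD formulas above translate directly into code. The following sketch is our own (the peak intensities and FWHM are hypothetical values chosen to land near the RCI and crystal size reported in the Results); it assumes the Segal form of Eq. 2 and the Scherrer form of Eq. 3.

```python
import math

def segal_rci(i_002: float, i_am: float) -> float:
    """Eq. 2 (Segal method): RCI (%) = (I_002 - I_am) / I_002 * 100."""
    return (i_002 - i_am) / i_002 * 100.0

def scherrer_size_nm(beta_rad: float, two_theta_deg: float,
                     k: float = 0.94, lam_nm: float = 0.154) -> float:
    """Eq. 3 (Scherrer): w = K * lambda / (beta * cos(theta)),
    where beta is the FWHM of the (002) peak in radians and theta = 2theta / 2."""
    theta = math.radians(two_theta_deg / 2.0)
    return k * lam_nm / (beta_rad * math.cos(theta))

# Hypothetical diffractogram readings (not digitized from Fig. 1a).
print(round(segal_rci(i_002=1000.0, i_am=323.4), 2))                    # ~67.66 %
print(round(scherrer_size_nm(beta_rad=0.0297, two_theta_deg=22.6), 1))  # ~5.0 nm
```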
Transmission electron microscopy (TEM) was used to evaluate the morphology of CNP-DM. The images were obtained using Tecnai G2-Spirit (FEI, USA) with a 120 kV acceleration voltage. Diluted suspensions of CNP-DM (0.01% w/v) were deposited on Formvar/Carbon supported copper grids (300-mesh) and the samples were subsequently stained with 2% uranyl acetate solution [4].
Preparation of Bionanocomposite Films
Cassava starch and chitosan bionanocomposite films were cast, as previously described by Souza et al. [24] and Kurek et al. [25], respectively. Briefly, the starch bionanocomposite films were prepared using cassava starch (4%, w/v) and glycerol (25% of starch, w/w) in water, with CNP-DM added to obtain different dispersion concentrations (0.1, 0.2, and 0.3% w/v). The film without CNP-DM was termed the sample control (SC). Each mixture batch was heated at 70 °C for 1 h under continuous mixing.
Chitosan bionanocomposite films were prepared by dissolving chitosan (2% w/v) in aqueous acetic acid solution (1% v/v), with CNP-DM added to obtain different dispersion concentrations (0.1, 0.2, and 0.3% w/v). The film without CNP-DM was denoted the sample control (CC). The solutions were then homogenized at 25 °C under magnetic stirring (2800 rpm) for 24 h. After homogenization, a chitosan-glycerol dispersion (30% w/v) was added and mixed for 10 min to form the chitosan-plasticizer-CNP-DM dispersion. Finally, 167 g of the dispersion (starch or chitosan, with or without CNP-DM) were cast into Petri dishes (600 cm²) and dried at 30 °C for 24 h. The resulting bionanocomposite films were stored in a controlled environment to ensure equilibration of the water content. All bionanocomposite films were prepared in triplicate.
Transmittance and Opacity
The transmission spectra of the bionanocomposite films were recorded at 600 nm using a UV-vis spectrophotometer (UV-M51, Bel Engineering, Italy). The opacity values of the films were calculated using Eq. 4 [21]:

Opacity = Abs_600 / x (4)

where Abs_600 is the absorbance at 600 nm and x is the film thickness (mm). All measurements were taken in triplicate.
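A minimal sketch of Eq. 4 (our own; the absorbance reading is hypothetical, chosen to land near the highest starch-film opacity reported in the Results):

```python
def opacity_per_mm(abs_600: float, thickness_mm: float) -> float:
    """Eq. 4: opacity (A600 per mm) = Abs_600 / x."""
    return abs_600 / thickness_mm

# Hypothetical reading for a starch film ~0.101 mm thick.
print(round(opacity_per_mm(abs_600=0.142, thickness_mm=0.101), 2))  # ~1.41
```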
Thickness and Moisture
The thickness of the bionanocomposite films was determined using a Mitutoyo digital micrometer (Tokyo, Japan) with a resolution of 0.001 mm. For each film, 10 measurements were randomly taken at different positions and the mean value was calculated and used for further analyses. To evaluate the moisture, film samples (500 mg) were oven-dried at 105 °C for 24 h until a constant weight and the weight loss (%) was calculated following the procedure described by Zhang et al. [26].
Water Vapor Permeability (WVP)
The WVP of the bionanocomposite films was determined using the gravimetric method described by the ASTM E96/E96M-12 standard [27]. Circular film samples (diameter = 4.3 cm) were sealed over plastic capsules containing 10 g of calcium chloride as a desiccant. The plastic capsules were weighed and placed in a desiccator at 25 °C containing sodium chloride to maintain a relative humidity (RH) of 75%. Hourly measurements were taken for a total period of 8 h, and the water vapor transmission rate (WVTR, Eq. 5) and the permeability (P, Eq. 6) were calculated:

WVTR = m / (A · t) (5)

P = (WVTR · x) / (P_s (RH_1 − RH_2)) (6)

where m is the mass gain, A is the permeation area, t is the time, x is the film thickness, P_s is the water vapor saturation pressure at 25 °C, and RH_1 and RH_2 are the relative humidities (expressed as fractions) within the desiccator and the capsule, respectively. All tests were performed in triplicate.
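Eqs. 5 and 6 amount to fitting the hourly mass-gain curve and scaling the slope. The sketch below is our own reading of the procedure (the variable names, the linear fit, and the sample data are assumptions, not the authors' code); it assumes water's saturation pressure at 25 °C (~3.169 kPa) and fractional humidities.

```python
import numpy as np

def wvp_gravimetric(t_h, m_g, area_m2, thickness_m,
                    p_s_pa=3169.0, rh1=0.75, rh2=0.0):
    """Fit mass gain vs. time; slope/area gives the transmission rate (Eq. 5),
    then scale by thickness over the vapor-pressure difference (Eq. 6)."""
    slope_g_per_s = np.polyfit(np.asarray(t_h) * 3600.0, np.asarray(m_g), 1)[0]
    wvtr = slope_g_per_s / area_m2                      # Eq. 5: g m^-2 s^-1
    return wvtr * thickness_m / (p_s_pa * (rh1 - rh2))  # Eq. 6: g m^-1 s^-1 Pa^-1

# Hypothetical 8-h run: 4.3-cm-diameter film, 0.101 mm thick, ~12 mg/h gain.
t = np.arange(9)                    # hours
m = 0.012 * t                       # grams
area = np.pi * (0.043 / 2) ** 2     # m^2
print(wvp_gravimetric(t, m, area, 101e-6))  # ~1e-10, same order as Table 2
```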
Mechanical Analysis
The mechanical characterization of the films was performed using a universal testing machine (DL-2000, EMIC, USA) according to the ASTM D882-10 standard [28]. The extension velocity was 5 mm/min, and a 100 N load cell was used. Films with dimensions of 10 cm × 2.5 cm were preconditioned (50% RH at 23 °C) for 48 h and then mounted on the grips with a 50-mm separation. The average thickness was calculated using a digital micrometer at 10 random points for each sample.
Statistical Analysis
For statistical analysis, the data were analyzed by ANOVA using the statistical program StatSoft version 8 (StatSoft, USA). Dunnett's and Tukey's tests were used to evaluate differences between means (95% confidence interval).
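For readers without access to StatSoft, the same one-way ANOVA and Tukey comparison can be reproduced with open-source tools; the triplicate values below are invented placeholders, not the study's data (recent versions of SciPy also offer a Dunnett comparison via scipy.stats.dunnett).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate WVP readings (x1e-10 g m^-1 s^-1 Pa^-1) per CNP-DM level.
groups = {"0.0%": [2.02, 1.98, 2.05], "0.1%": [1.60, 1.55, 1.63],
          "0.2%": [1.02, 1.06, 0.99], "0.3%": [1.10, 1.12, 1.08]}

f, p = stats.f_oneway(*groups.values())   # one-way ANOVA across the four levels
print(f"ANOVA: F={f:.1f}, p={p:.4f}")

values = np.concatenate([np.asarray(v, float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), 3)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise means, 95% CI
```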
Characterization of CNP-DM
In the current study, [DMAPA][Hex] was directly added to the MCC suspension to prepare the CNPs (CNP-DM). The average yield of CNP-DM was 27.82% (Table 1), which is lower than the yields reported in the literature, including by Mao et al. [11,18]. Factors such as the strong hydrophobic character of the IL, the reaction time, or the temperature affect the cellulose dissolution, viscosity, and miscibility in water, thereby influencing the CNP yield [29]. Interestingly, CNP yields are generally affected by the anionic part of the IL; i.e., an increase in the length of the anionic alkyl chain promotes lower CNP yields [30]. The results obtained in this work agree with those previously reported by Mäki-Arvela et al. [30], highlighting that small anions are a better choice for cellulose dissolution than larger ones. Another factor that may have contributed to the low CNP-DM yield is the dissolution temperature (60 °C), which affects the viscosity and conductivity of the IL [29]. The conductivity of an IL increases with increasing temperature, and more cellulose is dissolved. Therefore, a protic ionic liquid (PIL) such as [DMAPA][Hex], which exhibits relatively high viscosity, hinders cellulose dissolution at lower temperatures, resulting in less chemical degradation of the cellulose [30]. Table 1 also shows the zeta potential values of the CNP-DM suspensions. The average zeta potential (ζ) of −9.9 ± 1.68 mV implies an unstable suspension according to previous reports [22], because suspensions with ζ values greater than +30.0 mV or less than −30.0 mV are considered electrically stable, as the repulsion forces exceed the attraction forces; a ζ value within ±30 mV indicates the onset of flocculation. The ζ obtained in the present study was mainly attributed to the short reaction time and low reaction temperature. Similar temperature effects were reported by Mao et al. [18] and Gonçalves et al. [19], and by Samsudin et al. [13], who observed that the hydrodynamic size and polydispersity index of all CNPs increased with the synthesis temperature (ζ = −19.7 mV at 70 °C/1.5 h and ζ = −25.2 mV at 110 °C/1.5 h). This means that uniformity in size distribution is achieved when the synthesis temperature is increased, due to the weakened interaction between the cation and anion of the PIL.
The X-ray diffraction pattern of CNP-DM is shown in Fig. 1a, exhibiting the characteristic peaks of cellulose I at 14.84°, 16.09°, 22.60°, and 34.11°, indicating that the integrity of the cellulose crystals was preserved [31] and that the amorphous regions were more susceptible to dissolution than the crystalline ones [32]. The diffraction patterns agree with those previously reported by Mao et al. [18]. The diffractogram presents two prominent peaks (Fig. 1a): the peaks at 16.0° and 22.6° 2θ correspond to the (110) and (200) planes, attributed to the cellulose crystalline domains [34]. During hydrolysis, hydronium ions penetrate the amorphous regions, which are more accessible than the crystalline ones, resulting in the hydrolytic cleavage of glucose units and the release of individual crystallites. Monocrystal growth and realignment occur simultaneously, increasing the cellulose crystallinity [35], which is associated with the peak narrowing observed in the CNP-DM diffractogram. The high crystalline index of the cellulose nanoparticles can improve the thermal and mechanical properties of this material.
The relative crystallinity index (RCI) of the CNP-DM increased up to 67.66%, higher than the values obtained by Gonçalves et al. [19] and in other reports [36].
The CNP-DM also exhibited a larger average crystal size (5.01 nm) than those obtained by Samsudin et al. [13] (2.7 nm) and Tan et al. [32] (4.6 nm), and smaller than those reported by Gonçalves et al. [19] (17.2 nm) and Man et al. [17] (19.9 nm). The differences in these values arise from the different cellulose sources and synthesis conditions, such as solvent concentration, time, and temperature, which strongly affect the crystal characteristics [37]. Figure 1b shows the transmission electron microscopy (TEM) images of CNP-DM, revealing the size and state of the agglomerates. As expected, the CNP-DM consisted of aggregates and needle-shaped structures due to strong interparticle hydrogen bonding [38], supporting the result of the zeta potential analysis (ζ = −9.9 mV). The CNP-DM had a length (L) of 320 ± 24 nm and a diameter (D) of 35 ± 15 nm, yielding a mean L/D ratio of 9.14, which is consistent with standard cellulose nanoparticle morphology. The mean L/D value confirmed the potential of CNP-DM as a reinforcing agent in composites, as demonstrated in other studies [17,19]. Man et al. [17] obtained cellulose nanocrystals with L ranging from 300 to 500 nm and D ranging from 14 to 22 nm, giving mean L/D values of 7.5-17. Gonçalves et al. [19] obtained cellulose nanowhiskers with a mean L of 156.89 nm and D of 4.59 nm, yielding a mean L/D of 50.23. Therefore, considering that the L/D value is mainly influenced by the reaction conditions, cellulose source, and crystal size [19], the low mean L/D value (9.14) achieved in this study is a result of the larger diameter [39].
Thermogravimetric analysis (TGA) measures the sample's mass change due to chemical reactions (dehydration, oxidation, and degradation) and physical sorption as a function of temperature or time [31]. Inflections due to mass loss of CNP-DM were observed in the TG/dTG curves, as depicted in Fig. 2, which shows the TG/dTG profile of CNP-DM in the range 25-900 °C, comprising three events: the first event, in the range of 31-137 °C with a mass loss of 6.72%, was due to the loss of moisture [31]; the second event, at 376 °C (mass loss of 66.87%), was attributed to cellulose degradation (depolymerization, degradation, and decomposition of glycosidic units) [33]; and the third, in the range of 468-627 °C (mass loss of 22.54%), was attributed to the oxidation and decomposition of carbonized residues (3.87% of residues) [40-42].
Similar behavior was reported by Gonçalves et al. [19], who evaluated the thermal stability of cellulose nanowhiskers. In this case, three thermal events were observed at 88 °C, 234 °C, and 323 °C without the formation of carbonized residues, and the authors attributed these events to the presence of hydrogen sulfate and amine groups of the IL, which decreased the thermal stability of the nanowhiskers. Mao et al. [18] also evaluated the thermal behavior of cellulose nanowhiskers, observing two principal events at 285 °C and 346 °C, and achieved more thermally stable nanowhiskers than those extracted with concentrated sulfuric acid, suggesting that the PIL allows efficient catalytic reactions and better accessibility to the cellulose amorphous regions, thus preserving the crystalline counterpart and creating smaller and homogeneous nanowhiskers.
The chemical structures of MCC, PIL ([DMAPA][Hex]), and CNP-DM were characterized using FTIR spectroscopy. Figure 3 shows the FTIR spectra of the three samples.
The FTIR spectra of the MCC samples and the CNP-DM nanoparticles are similar, indicating no changes in the functional groups after the hydrolysis process. The characteristic bands at 3513 cm−1 and 3243 cm−1 were assigned to the O-H stretching vibrations of cellulose I [34,43,44]. Hydroxyl groups favor interactions of cellulose nanoparticles with polymeric matrices such as chitosan and starch; that is, the -OH groups in the equatorial positions of the cellulose chain project laterally and are readily available to interact with the hydrogens present in the chains of these polymers. The small band at 2901 cm−1 was associated with the stretching vibration of C-H in CH2 and CH3 groups [21,33,44,45]. The band at 1643 cm−1 was associated with water adsorbed on the polymer [17,33], whereas those at 1424 cm−1 and 1368 cm−1 were attributed to the angular and symmetrical deformations of the cellulose methylene groups and the C-H bond, respectively. The band at 1121 cm−1 arose from the stretching vibration of C-OH, whereas those at 1056 cm−1 and 891 cm−1 were assigned to the ring skeletal vibration of cellulose. Lastly, the band at 613 cm−1 was assigned to the linkage between the glucose units of cellulose. The band at 1424 cm−1, ascribed to intermolecular hydrogen attraction at the C6 group [46], appears stronger in the CNP-DM spectrum than in the MCC spectrum, suggesting the IL's high efficiency in removing the amorphous cellulose regions.
The results from the FTIR spectra also confirmed that the PIL ([DMAPA][Hex]) is an efficient solvent for cellulose dissolution, and its removal from the nanoparticles was complete as no residual PIL was detected. These results are consistent with those reported in the literature regarding cellulose dissolution using PILs [13,17,19].
Optical Properties
Light absorption in films is one of the most important features that affect the applicability of bionanocomposites in food packaging. Therefore, it is imperative to protect the films from lipid oxidation caused by UV light, a common oxidation initiator in food systems [47]. Table 2 lists the transmittance values and opacities of starch and chitosan bionanocomposite films incorporated with CNP-DM.
The results depicted in Table 2 demonstrate that the light transmittance values of the starch and chitosan films varied from 69.60 to 80.60% and 87.46 to 88.66%, respectively. Increasing the CNP-DM concentration (%) in starch films significantly decreased (p < 0.05) the transmittance values and introducing 0.3% of CNP-DM to the starch matrix, a significant reduction (≈ 5.2%) compared to sample control (SC) was achieved. For the chitosan films, no significant difference (p > 0.05) (from 0.1 to 0.3% of CNP-DM) was observed. However, the addition of CNP-DM (0.2%) led to a reduction of ≈ 1.0% in transmittance as compared to CC (p < 0.05). Bagde et al. [20] reported a reduction in transmittance from 74 to 64% when 1% nanocellulose was incorporated into starch films. Salari et al. [21] observed significant reductions in the transmittance of chitosan films at CNC concentrations above 1%. Presently, the transmittance differences among the chitosan films were not significant, probably due to the low concentrations of CNP-DM.
The opacity of the starch and chitosan films varied from 0.93 to 1.41 A600 nm mm−1 and 0.61 to 0.75 A600 nm mm−1, respectively, indicating no significant difference (p > 0.05) with respect to the control samples. The opacity characteristics of the films obtained in this study are consistent with those obtained by Santana et al. [10] for starch films with cellulose nanofibers (0%, 1%, 3%, and 5%), where no significant changes were found compared to the control sample. It can be surmised from these results that the addition of CNP-DM produced no changes in transmittance or opacity large enough to compromise the use of these bionanocomposites for packaging.
Moisture
The moisture values of the starch and chitosan films incorporated with CNP-DM ranged from 10.52 to 11.53% and 25.64 to 26.30%, respectively (Table 2). There were no statistically significant differences (p > 0.05) between the values of the CNP-DM/starch films and the SC. In the case of the CNP-DM/chitosan films, no significant differences were observed among the formulations; however, compared to the CC, a significant increase (≈ 6%) in moisture was detected. It is noteworthy that incorporating CNP-DM into chitosan led to agglomerate formation through hydrogen bonding between hydroxyl groups, leaving the polymer chains free to interact with water; the resulting weak dispersion promoted water absorption [2,6], a behavior that is considered desirable in food packaging materials [48].
Thickness and Water Vapor Permeability (WVP) Rate of Bionanocomposite Films
Thickness is an important parameter that must be monitored in films to maintain uniformity and reproducibility [49]. The thickness and WVP of the starch and chitosan films with CNP-DM (0%, 0.1%, 0.2%, 0.3%) are shown in Table 2. All films were processed under the same conditions. Mean thicknesses of 0.101 mm for the starch-CNP-DM films and 0.097 mm for the chitosan-CNP-DM films were observed, without significant differences (p > 0.05). Thus, the addition of CNP-DM to the polymer matrix did not influence the film thickness. Bagde et al. [20] and Salari et al. [21] likewise reported no significant changes in thickness when CNPs were added to starch (0.183-0.199 mm) and chitosan (0.090-0.110 mm) bionanocomposites, respectively.
WVP is one of the most significant parameters of films because of its impact on preventing or reducing humidity transfer from the environment to the packaged product [22]. In this study, the WVP values (Table 2) were in the range of 1.02-2.02 × 10−10 (g m−1 s−1 Pa−1) and 1.11-1.63 × 10−10 (g m−1 s−1 Pa−1) for the starch and chitosan films, respectively. In both film types, higher CNP-DM concentrations led to significant reductions (p < 0.05) compared to the control films. The highest WVP reductions of 49.50% (starch) and 26.97% (chitosan) were achieved using 0.2% and 0.3% CNP-DM, respectively.
Thus, it can be observed that the addition of CNP-DM to the polymeric matrices reduced the WVP due to the high crystallinity of CNP-DM. Some researchers [10,50,51] have suggested that the reduction in the WVP of starch mixed with CNPs can be associated with the fact that CNPs hinder the permeation of water molecules by forming crystalline domains, leading to a more compact material. Hence, it was observed in this study that the addition of CNP-DM to starch and chitosan films was sufficient to provide a physical barrier through the interaction of the CNPs with the polymer matrix, thereby reducing the permeation of water and allowing good applicability.
Mechanical Analysis
Mechanical parameters such as tensile strength (MPa), Young's modulus (MPa), and elongation at break (%) were evaluated to explore the effect of CNP-DM incorporation in starch and chitosan films, and the results are listed in Table 3. For the starch films, tensile strengths of 2-3.5 MPa, elongations at break of 128-180%, and Young's moduli of 16-91 MPa were achieved. The incorporation of CNP-DM afforded a more flexible (less stiff) material; i.e., the addition of 0.1% CNP-DM decreased the tensile strength by 64.28% compared to the SC. However, increasing the concentration of CNP-DM noticeably increased (p < 0.05) the tensile strength of the bionanocomposites. A significant increase (98.4%) in elongation at break was observed with the incorporation of 0.1% CNP-DM compared to the SC, indicating the film's excellent mechanical flexibility.
For the chitosan films, no significant changes were observed in the tensile strength and elongation at break with the addition of CNP-DM (0.1, 0.2, 0.3%). However, the addition of 0.3% CNP-DM reduced the tensile strength by 19.49% and increased the elongation at break by 45.47% compared to the CC. All films exhibited significant reductions in Young's modulus compared to the control samples.
Taheri et al. [44] reported no significant differences in tensile strength (99.61 MPa) and Young's modulus (31.20 MPa) for chitosan films with 3% nanocellulose. In another study, Silva et al. [4] prepared starch films reinforced with CNPs (0-5%) and found increases of 90% and 92% in tensile strength with the addition of 0.1% and 0.2% CNPs, respectively, in addition to a 400% increase in Young's modulus. However, this behavior was not observed in the present study, where the incorporation of CNP-DM lowered the tensile strength of the films, suggesting that the low concentrations of CNP-DM were not sufficient to improve the mechanical properties of either bionanocomposite. Silva et al. [4] reported a mean L/D value of 24 for cellulose nanocrystals, which is superior to that achieved in this study (9.14). The L/D value is an important parameter for mechanical reinforcement; i.e., the higher the L/D value, the higher the reinforcement capacity [52,53].
Another factor that affected the mechanical parameters is the instability of the CNP-DM suspensions (ζ = − 9.9 mV), which hindered their effective dispersion in both matrices and thus reduced the capacity of mechanical reinforcement [54].
Thermogravimetry
The TG/dTG curves for the different films are shown in Fig. 4. For starch films (Fig. 4a), two main events were observed. The first event involved moisture loss in the range of 27.9-192.5 °C, and the second involved a mass loss of 76.95% in the range of 221.8-504.9 °C, corresponding to the degradation of starch and glycerol [6]. For the chitosan films (Fig. 4b), the first event was observed in the range of 28.6-153.7 °C, attributed to the loss of acetic acid and moisture [21,45]. The second mass loss of 15.11% in the range of 147.5-278.5 °C represents glycerol degradation [55], whereas the third mass loss of 37.99% in the range of 282.4-492.4 °C was attributed to the degradation of chitosan.
Interestingly, glycerol thermal degradation (147.5-278.5 °C) was only observed in the chitosan films, probably due to its low interaction with the polysaccharides [55].
Incorporating 0.1% and 0.3% of the CNP-DM suspension into starch films promoted a reduction of approximately 7 °C in the onset temperature (T_onset) of the second thermal event. Santana et al. [10] evaluated the thermal stability of starch films incorporated with CNPs (1-5%) and observed that the main degradation event occurred between 257.4 and 352.8 °C with 80% mass loss. The authors suggested that the decrease in thermal stability is due to the reduced flexibility of the amylopectin chains. For chitosan films reinforced with CNPs (5% and 10%), Khan et al. [51] reported the major mass loss in the range of 280-460 °C and found no changes in thermal behavior with the addition of CNPs.
In summary, the incorporation of CNP-DM into the starch and chitosan polymeric matrices affected thermal stability in different ways: the presence of 0.1% CNP-DM reduced the thermal stability of the starch films by 9 °C, in contrast to the chitosan films, whose thermal stability increased.
Conclusions
Cellulose nanoparticles (CNP-DM) were successfully prepared using the protic IL [DMAPA][Hex] while maintaining crystal integrity. The IL exhibited high selectivity for the amorphous region during the dissolution process. Incorporating CNP-DM slightly increased the thermal stability by up to 9 °C, markedly increased the flexibility by up to 98%, and decreased the WVP by up to 48% in the starch and chitosan bionanocomposite films compared to their control samples. For both polymeric matrices, the incorporation of 0.2% CNP-DM was sufficient to significantly reduce the WVP, enabling the application of these bionanocomposites in food packaging.
The results obtained in this study are associated with low-cost production, biocompatibility, low toxicity, and recyclability, making [DMAPA][Hex] a simple, efficient, and sustainable solvent to produce cellulose nanoparticles.
Further research will be conducted to improve the bionanocomposites' mechanical properties, which are essential for food packaging.
Author Contributions
The material preparation, data collection, and analysis were performed by SRV, COdS and JBAdS. The first draft of the manuscript was written by SRV, COdS and JBAdS. JID, COdS and VCS-E contributed on the funding management. CUM, JFBP, EdSF, PVFL and PRC contributed on the analysis, conceptualization, and revision of the whole manuscript. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. | 2021-12-03T16:14:31.589Z | 2021-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "c7478f980da74a0c49b666a1535a0b9a07fe4d40",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1093986/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "43266ce45b9ea56dea94d002f85158217d50751c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
26050924 | pes2o/s2orc | v3-fos-license | Wingspan Stents for the Treatment of Symptomatic Atherosclerotic Stenosis in Small Intracranial Vessels: Safety and Efficacy Evaluation
BACKGROUND AND PURPOSE: Until now, endovascular treatment of symptomatic atherosclerotic stenosis in small intracranial arteries (≤2.5 mm) was limited. We evaluated the safety and efficacy of the treatment by using Wingspan stents in arteries of this caliber. MATERIALS AND METHODS: From March 2007 to July 2010, 53 symptomatic intracranial stenoses with narrowing of at least 50% in 53 patients were treated by using Wingspan stents. Clinical manifestations and imaging features were recorded. RESULTS: The technical success rate was 98.1%. There were no serious complications, with the exception of 1 patient who experienced a small cerebral hemorrhage caused by perforation of microwire. Thirty-nine patients (74%) were available for follow-up imaging with DSA. ISR was documented in 13 of these patients, including 2 patients with symptomatic ISR. The median length of the vascular lesions was 5.39 mm, and patients whose vascular lesions were longer than 5.39 mm had a much higher incidence of ISR than patients whose vascular lesions were shorter than 5.39 mm (53% versus 15%, respectively). The median ratio of the reference artery diameter to the stent diameter was 0.78, and patients whose ratio was smaller than 0.78 had a much higher incidence of ISR than patients whose ratio was larger than 0.78 (53% versus 15%, respectively). CONCLUSIONS: In our series, percutaneous transluminal angioplasty and stent placement of small intracranial arteries by using Wingspan stents was safe. The ISR rate was relatively high; most patients having ISR were asymptomatic. Further follow-up is needed to assess the long-term efficacy of this procedure.
PTAS is an important method for treating atherosclerotic stenosis in intracranial arteries, and preliminary results suggest that this procedure is safe and effective. PTAS is difficult to implement in small-diameter artery stenosis because of the high surgical skill required and the high rate of restenosis. 1,2 From experience we know that the incidence of ISR is 25% with angioplasty and stent placement in small coronary arteries. 3 Vessel diameter is negatively correlated with ISR and other adverse outcomes. 4 By using the self-expanding Wingspan stent system (Boston Scientific, Fremont, California) to treat intracranial artery stenosis, the surgical success rate and safety of the procedure are further improved because of increased compliance. Bose et al 5 studied patients with intracranial stenosis in arteries 2.5 to 4.5 mm in diameter who were treated with a Wingspan stent; however, to date, only a few studies, all with small sample sizes, have been conducted on the use of Wingspan stents for small intracranial artery stenosis. 6,7 Here, we defined small intracranial arteries as those with a vessel diameter ≤2.5 mm. In this study of 53 patients with 53 small intracranial stenoses treated with Wingspan stents, we used clinical manifestations and imaging results to assess the safety and follow-up results of the procedure.
Patients and Techniques
We reviewed patients with symptomatic atherosclerotic stenosis in small intracranial arteries (≤2.5 mm) treated with Wingspan stents in our hospital during a 41-month period from March 2007 to July 2010. We carefully reviewed the detailed patient information, including the patient's age and sex, clinical manifestations, lesion morphology, and endovascular treatment strategy.
Inclusion criteria were as follows: 1) age between 18 and 75 years; 2) at minimum, a TIA or stroke related to the symptomatic atherosclerotic stenosis in an intracranial artery within the preceding 180 days; 3) a pre-event mRS score ≤3; and 4) a DSA showing reference artery diameter ≤2.5 mm.
Exclusion criteria were as follows: 1) nonatherosclerotic intracranial arterial stenosis; 2) patients with brain tumors, vascular malformations, or aneurysms; 3) cardiogenic stroke in patients with atrial fibrillation, heart valve disease, left ventricular mural thrombus, or left ventricular myocardial infarction within the preceding 6 weeks; 4) patients with contraindications to the contrast agent, heparin, or anesthesia; and 5) having the same lesion previously treated with a stent.
Intervention Procedure
In brief, access was typically achieved through the common femoral artery. Heparin was titrated during the procedure to achieve an ACT that was 2 to 2.5 times that of baseline. Almost all procedures were performed through a 6F guiding catheter or a long-sheath system. After conventional catheter-based angiography, a microcatheter was manipulated across the target lesion by using a 0.014-inch microwire (Transend EX Platinum, Boston Scientific; ATW, Cordis Corp., Miami, Florida). The microcatheter was then exchanged over a 0.014-inch exchange microwire for a Gateway angioplasty balloon (Boston Scientific). When the reference artery diameter was between 1.85 mm and 2.5 mm (including 1.85 mm and 2.5 mm), the balloon diameter used was 80% of the reference artery diameter. When the reference artery diameter was between 1.5 mm and 1.85 mm (including 1.5 mm), we chose a balloon with a diameter of 1.5 mm. When the reference artery diameter was below 1.5 mm, we chose a balloon with a diameter of 1.5 mm (1 patient's artery had a diameter of 1.44 mm). The balloon length was selected to match the length of the lesion. Angioplasty was typically performed with a slow, graded inflation of the balloon to a pressure of between 6 and 10 atm, which was then maintained for 15-20 seconds. Following angioplasty, the balloon was removed and conventional angiography was repeated. Next, the Wingspan delivery system was prepared and advanced over the exchange wire across the target lesion. The stent's diameter was chosen to exceed the diameter of the reference artery by 0.5-1.0 mm. The stent's length was selected to exceed the length of the lesion by at least 3 mm on both sides. Angiography was performed to measure residual postoperative stenosis. The procedures were performed by neurointervention physicians (Q.H., J.L., B.H., Y.X., W.Z.). All have more than 8 years of experience in neurointervention.
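The sizing rules described above reduce to a simple decision procedure. The sketch below encodes them as stated in the text; it is illustrative only, since actual device selection is constrained to the discrete balloon and stent sizes commercially available, which this ignores.

```python
def gateway_balloon_diameter_mm(ref_mm: float) -> float:
    """Balloon sizing as described: 80% of the reference diameter for
    vessels of 1.85-2.5 mm (inclusive); otherwise a 1.5-mm balloon."""
    if 1.85 <= ref_mm <= 2.5:
        return round(0.8 * ref_mm, 2)
    return 1.5  # used for all vessels below 1.85 mm in this series

def wingspan_size_mm(ref_mm: float, lesion_mm: float,
                     oversize_mm: float = 0.5) -> tuple:
    """Stent diameter exceeds the reference artery by 0.5-1.0 mm; length
    covers the lesion plus at least 3 mm on each side."""
    return ref_mm + oversize_mm, lesion_mm + 2 * 3.0

print(gateway_balloon_diameter_mm(2.07))  # 1.66 mm for the mean vessel
print(wingspan_size_mm(2.07, 5.39))       # (2.57, 11.39)
```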
Preprocedure and Postprocedure Medical Therapy
All patients were pretreated with a daily dose of 75 mg clopidogrel and 300 mg aspirin at least 3 days before the endovascular procedure. Aspirin was maintained at a daily dose of 300 mg for at least 6 months after the procedure until follow-up angiography was performed. In cases where no ISR or other related disease developed, aspirin was usually continued indefinitely at a daily dose of 100 mg. Clopidogrel was usually maintained for 6 weeks after surgery and then discontinued. Risk factors for atherosclerosis were controlled in accordance with relevant postprocedural guidelines.
Clinical and Angiographic Follow-Up
Clinical scores (modified Rankin score and NIHSS score) were obtained before the procedure, after the procedure, and before discharge. Following surgery, scores were obtained at 1 day, 4 weeks, 6 months, 1 year, and every other year thereafter. Follow-up angiography was performed at 6 months. ISR was defined as >50% stenosis within or immediately adjacent to (within 5 mm) the implanted stent, with >20% absolute luminal loss. 8 The initial and follow-up clinical examinations were performed by 1 neurologist (Y.Z.).
Statistical Methods
Before the measurements were taken, the measuring method was clearly identified by the investigators; the measurements were taken by 2 experienced investigators. Despite this, measurement bias is always inevitable, so we performed a t test for the 2 sets of data and found no statistically significant difference (P > .05). We therefore took the average of the 2 sets of data as the final measurement.
Discrete lesions treated in the same vascular distribution were counted as a single patient with 2 treated lesions, and the stents were evaluated independently for ISR. If lesions were treated in 2 separate vascular distributions in 1 patient, then that patient was counted twice, with each stent evaluated independently for ISR. We grouped the patients according to the following criteria: relationship between ISR and lesion site, degree of vascular tortuosity, length of the vascular lesion, reference artery diameter and ratio of the reference artery diameter to the stent diameter, and residual stenosis. The count data were then analyzed by using the chi-square test. A P value of <.05 represented a statistically significant difference. The data were analyzed by using SPSS 16.0 software.
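As an illustration of both tests (our reconstruction, not the authors' analysis: the paired readings are invented, and the 2×2 counts are back-calculated from the 53% vs. 15% ISR split reported later in the Discussion, assuming groups of 19 and 20 patients):

```python
import numpy as np
from scipy import stats

# Paired lesion measurements by the two investigators (hypothetical, in mm).
inv_a = np.array([2.10, 1.95, 2.30, 1.80, 2.25])
inv_b = np.array([2.12, 1.93, 2.28, 1.83, 2.24])
t, p = stats.ttest_rel(inv_a, inv_b)
print(f"paired t-test: t={t:.2f}, p={p:.3f}")  # p > .05 -> average the two sets

# 2x2 contingency table: lesion-length group vs. ISR status.
table = np.array([[10, 9],    # lesions > 5.39 mm: ISR, no ISR (10/19 ~ 53%)
                  [3, 17]])   # lesions <= 5.39 mm: ISR, no ISR (3/20 = 15%)
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square: chi2={chi2:.2f}, p={p:.3f}")
```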
Patient Characteristics
The study included 53 patients (40 men and 13 women) ranging in age from 41 to 75 years (mean age, 56.7 years). Before stent placement, a total of 25 patients (47%) had experienced TIAs and 28 patients (53%) had experienced a stroke. The time from the qualifying event to stent placement ranged from 1 day to 45 days, with a mean of 23 days.
All 53 patients received DSA preoperatively. We defined the distal artery diameter as the reference artery diameter. The degree or percent stenosis of the target lesion was determined by using the formulas described by the WASID method. 9 Of the 53 lesions, 45 (84.9%) were located in the anterior circulation and were distributed as follows: 2 were in the intracranial carotid artery, 40 were in the M1 segment of the middle cerebral artery, and 3 were in the M2 segment of the middle cerebral artery; the remaining 8 lesions (15.1%) were in the posterior circulation. Artery diameter ranged from 1.44 to 2.5 mm, with a mean of 2.07 mm. All vascular lesions had 50%-95% narrowing, with a mean of 73.9 ± 2.7%. In total, 42 of the 53 (79.2%) stenoses were greater than 70%.
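For reference, the WASID convention computes percent stenosis from the stenotic and normal (here, distal reference) diameters. A minimal sketch, with a hypothetical lumen value chosen to land on the series mean of 73.9%:

```python
def wasid_percent_stenosis(d_stenosis_mm: float, d_normal_mm: float) -> float:
    """WASID method: percent stenosis = (1 - D_stenosis / D_normal) * 100."""
    return (1.0 - d_stenosis_mm / d_normal_mm) * 100.0

# Hypothetical 0.54-mm residual lumen in a 2.07-mm reference artery.
print(round(wasid_percent_stenosis(0.54, 2.07), 1))  # -> 73.9
```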
Treatment Results
Initial Treatment Results. To date, we have attempted implantation of 54 Wingspan stents in 53 patients. The length of stenosis in 1 patient was 28.4 mm, so 2 stents were needed to cover the lesion. The technical success rate for stent deployment across the stenotic lesion was 98.1% (Fig 1); only 1 stent could not be released at the target site because of vascular tortuosity. All of the patients were treated with a Gateway balloon and a Wingspan stent. After Gateway angioplasty, the stenoses were 10%-80% (with a mean of 40.9 ± 4.2%). After Wingspan stent placement, the residual stenoses were 0%-50% (with a mean of 13.0 ± 3.4%). One patient (1.9%) experienced surgery-related complications during the perioperative period. This patient, with middle cerebral artery stenosis, received an implanted Wingspan stent; CT after the procedure showed a small hematoma in the basal ganglia region, and the hematoma did not enlarge. This may have been caused by a perforation from the microwire. The patient suffered partial anomic aphasia for 1 day and mild muscle weakness for 2 weeks, and then gradually returned to normal. The perforation did not lead to permanent neurologic sequelae. No patients died during the perioperative period.
Imaging and Clinical Follow-Up. Among 52 patients successfully treated by Wingspan stents, 39 patients (74%) were available for follow-up imaging with DSA (Fig 2). The imaging follow-up time was 6-32 months, with a mean of 9.8 months. ISR was documented in 13 patients (33.3%), including 1 patient with complete stent occlusion. One ISR patient received angioplasty with a drug-eluting stent. Fifty patients were available for a clinical follow-up after 6-40 months, with a mean of 17.9 months; 34 patients had their follow-up more than 12 months after surgery. Of the 50 patients, 4 had clinical symptoms: 2 had symptomatic ISR and 2 had TIAs in a nonstented region. No patients died during the follow-up period.
Risk Factors for ISR
We calculated the median length of the vascular lesions (5.39 mm), the median ratio of the reference artery diameter to the stent diameter (0.78), and the average degree of residual stenosis (10.7%) among patients who had a follow-up, then grouped the patients based on whether they fell above or below these values (Table). The results showed that patients with vascular lesions longer than 5.39 mm and/or ratios smaller than 0.78 had a much higher incidence of ISR.
Discussion
Although PTAS is an important method for treating cerebral artery stenosis, PTAS for the treatment of small intracranial artery stenosis remains challenging. Based on experience in the treatment of coronary stenosis, vessel size is inversely correlated with the risk of restenosis and with adverse outcomes after percutaneous coronary intervention 4 ; this is because a small vessel is less able to accommodate lumen renarrowing, which invariably occurs, to some degree, in most arteries after balloon dilation. 10 Moreover, intracranial arteries are more fragile than peripheral arteries due to the thin muscle layer in the artery wall, the lack of support by peripheral tissues, vascular tortuosity, and small vessel diameter; these features make stent-assisted angioplasty procedures more difficult and possibly increase the incidence of complications. Thus, intracranial PTAS, especially in small intracranial arteries, is challenging. Wingspan stents have made it possible to use PTAS to treat small intracranial arteries. In our study, the technical success rate was 98.1%; 1 stent could not be released to the target site because of vascular tortuosity. Although there was 1 surgery-related complication during the perioperative period, this patient had no permanent neurologic sequelae. Before placement of the Wingspan stent, angioplasty was performed using a Gateway balloon. This conservative predilation approach reduces the degree of vascular trauma and likely minimizes both the risk of target-vessel perforation and the likelihood of downstream embolization of atheromatous debris caused by plaque disruption. Several studies reported that the incidence of complications was 15%-30% when using coronary balloon-mounted stents for intracranial stenosis, whereas the incidence was about 5%-18% when using a Gateway balloon and Wingspan stent. [10][11][12][13][14][15][16][17][18][19] After Wingspan stent placement, the residual stenoses were 0%-50% (with a mean of 13.0 ± 3.4%). The average residual stenosis among patients who had a follow-up was 10.7%, better than the 23%-36% reported previously, possibly related to the small vessel caliber. First, there were 15 patients with a reference artery diameter <1.85 mm, for whom we chose a balloon with a diameter larger than 80% of the reference artery diameter. Second, angioplasty was typically performed with a slow, graded inflation of the balloon to a pressure higher than the nominal pressure. Third, we usually chose a stent with a larger diameter to achieve good dilation. Thus, the degree of predilation with the balloon was greater.
ISR is a key factor influencing the outcome of PTAS. The rate of ISR in patients who undergo coronary vessel PTAS with a bare metal stent can be as high as 27.8%. 19 The occurrence of ISR is related to vascular injury and excessive repair. In theory, Wingspan stents can prevent ISR by reducing vascular injury, but the incidence of ISR remains at 30%. 20,21 ISR is more common in small coronary arteries, 4 but there are no related reports for small intracranial artery stenosis. In our study, among the 39 patients who received an imaging follow-up, we encountered 13 patients (33.3%) with ISR. This percentage is similar to the currently reported incidence of ISR. Meanwhile, only 2 out of 13 (15.4%) patients were symptomatic. Levy et al 22 showed that 24% of patients with ISR were symptomatic, and some single-center studies obtained similar results. 8,23 These data suggest that most patients with ISR are asymptomatic.
The lesion site, length of the vascular lesion, residual stenosis, degree of vascular tortuosity, reference artery diameter, and stent diameter are all potential risk factors for ISR. Fiorella et al 24 found that anterior circulation lesions were much more prone to ISR than posterior circulation lesions, but they did not reveal any underlying characteristics that exposed patients to an increased risk of ISR, with the exception of lesion location. 22 In our research, the differences were not statistically significant because of the small sample size. Turk et al 25 showed that anterior circulation regions-in particular, the supraclinoid segment and the middle cerebral artery-were highly prone to restenosis, and ISR lesions in these 2 locations were often more serious than the original stenosis. 20 This may be related to the degree of vascular tortuosity. In our dataset, we divided patients into 2 groups according to the LMA classification but found no statistically significant differences. On the one hand, vascular tortuosity increases surgical difficulty, so that the stent may not be released to the target site; on the other hand, because of malapposition, the stent cannot effectively support the wall or control the progress of the atherosclerotic plaque. Both of these situations can increase the incidence of ISR. In the future, it will be necessary to study the stent morphology after release by using flat panel CT angiography (Dyn-CT).
Some researchers also studied the relationship between the length of the vascular lesion or residual stenosis and ISR. They showed that patients with long lesions and a high percentage of residual stenosis have high rates of restenosis. 26,27 In our study, we divided 39 patients with imaging follow-up into 2 groups based on the median length of the vascular lesions (5.39 mm). Patients whose vascular lesions were longer than 5.39 mm had a much higher incidence of ISR (53% versus 15% for patients with a lesion length of 5.39 mm or less; P < .05). The median degree of residual stenosis was 10.7%, and we used the same method to group the patients but found no statistically significant difference between the 2 groups.
The self-expanding Wingspan stent exerts a continuous outward radial force against the vessel wall. This outward radial force prevents early vessel recoil, thus consolidating the gains achieved with the initial angioplasty. A recent study showed that the degree of undersizing significantly affected wall shear stress, the wall shear stress gradient, and the oscillatory shear index, which may promote intimal hyperplasia, thrombosis, and atherogenesis. Conversely, oversizing significantly increased intramural stress, which can cause acute vessel dissection and can chronically stimulate smooth muscle proliferation and initiate an inflammatory response. 28 Thus, both undersizing and oversizing can lead to ISR. In the course of treatment detailed in our study, the Wingspan stent was slightly oversized, measuring 0.5 to 1.0 mm larger than the diameter of the reference artery. Based on the median ratio of the reference artery diameter to the stent diameter (0.78), we divided the 39 patients who received follow-up imaging into 2 groups. Patients whose ratio was less than 0.78 had a much higher incidence of ISR (53% versus 15% for patients with a ratio of more than 0.78; P < .05). It is possible that the intramural wall stress from the stent caused more damage to the vessel wall and subsequent excessive repair, ultimately leading to ISR. 26 Therefore, choosing a stent of the appropriate size may effectively reduce the hemodynamic and mechanical disturbance of the vessel wall, thereby reducing the incidence of ISR.
The limitations of these data should be noted. First, angiographic follow-up of all patients has not yet been completed, and the absolute number of follow-up patients becomes relatively small when they are split into subsets for analysis. Second, this is a retrospective analysis, so selection bias may be built into the data. Prospective randomized controlled trials are needed to provide more adequate statistical evidence. Third, there are insufficient relevant data on the use of Wingspan stents to assess the long-term efficacy of this procedure.
Conclusions
In our series, percutaneous transluminal angioplasty and stent placement of small intracranial arteries was safe. The ISR rate was relatively high; most patients with ISR were asymptomatic. Patients with longer vascular lesions and smaller ratios of the reference artery diameter to the stent diameter were more prone to ISR. Further follow-up is needed to assess the long-term efficacy of this procedure. | 2017-07-09T00:51:38.495Z | 2012-02-01T00:00:00.000 | {
"year": 2012,
"sha1": "bda23bff1321b94af2ae9b40c4e40fc4071cc154",
"oa_license": "CCBY",
"oa_url": "http://www.ajnr.org/content/ajnr/33/2/343.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "bda23bff1321b94af2ae9b40c4e40fc4071cc154",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
174799233 | pes2o/s2orc | v3-fos-license | Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization
Optimization of Binarized Neural Networks (BNNs) currently relies on real-valued latent weights to accumulate small update steps. In this paper, we argue that these latent weights cannot be treated analogously to weights in real-valued networks. Instead their main role is to provide inertia during training. We interpret current methods in terms of inertia and provide novel insights into the optimization of BNNs. We subsequently introduce the first optimizer specifically designed for BNNs, Binary Optimizer (Bop), and demonstrate its performance on CIFAR-10 and ImageNet. Together, the redefinition of latent weights as inertia and the introduction of Bop enable a better understanding of BNN optimization and open up the way for further improvements in training methodologies for BNNs. Code is available at: https://github.com/plumerai/rethinking-bnn-optimization
Introduction
Society can be transformed by utilizing the power of deep learning outside of data centers: self-driving cars, mobile-based neural networks, smart edge devices, and autonomous drones all have the potential to revolutionize everyday lives. However, existing neural networks have energy requirements far beyond what many of these applications can afford. Binarized Neural Networks (BNNs) have emerged as a promising solution to this problem. In these networks both weights and activations are restricted to {−1, +1}, resulting in models which are dramatically less computationally expensive, have a far lower memory footprint, and when executed on specialized hardware yield a stunning reduction in energy consumption. After the pioneering work on BinaryNet [1] demonstrated such networks could be trained on a large task like ImageNet [2], numerous papers have explored new architectures [3][4][5][6][7], improved training methods [8] and sought to develop a better understanding of their properties [9].
The understanding of BNNs, in particular of their training algorithms, has been strongly influenced by knowledge of real-valued networks. Critically, all existing methods use "latent" real-valued weights during training in order to apply traditional optimization techniques. However, many insights and intuitions inspired by real-valued networks do not directly translate to BNNs. Overemphasizing the connection between BNNs and their real-valued counterparts may result in cumbersome methodologies that obscure the training process and hinder a deeper understanding.
In this paper we develop an alternative interpretation of existing training algorithms for BNNs, and subsequently, argue that latent weights are not necessary for gradient-based optimization of BNNs. We introduce a new optimizer based on these insights, which, to the best of our knowledge, is the first optimizer designed specifically for BNNs, and empirically demonstrate its performance on CIFAR-10 [10] and ImageNet. Although we study the case where both activations and weights are binarized, the ideas and techniques developed here concern only the binary weights and make no assumptions about the activations, and hence can be applied to networks with activations of arbitrary precision.
The paper is organized as follows. In Section 2 we review existing training methods for BNNs. In Section 3 we give a novel explanation of why these techniques work as well as they do and suggest an alternative approach in Section 4. In Section 5 we give empirical results of our new optimizer on CIFAR-10 and ImageNet. We end by discussing promising directions in which BNN optimization may be further improved in Section 6.
Background: Training BNNs with Latent Weights
Consider a neural network, y = f(x, w), with weights, w ∈ R^n, and a loss function, L(y, y_label), where y_label is the correct prediction corresponding to sample x. We are interested in finding a binary weight vector, w_bin, that minimizes the expected loss:

w_bin = argmin_{w ∈ {−1,+1}^n} E[L(f(x, w), y_label)]. (1)

In contrast to traditional, real-valued supervised learning, Equation 1 adds the additional constraint for the solution to be a binary vector. Usually, a global optimum cannot be found. In real-valued networks an approximate solution is found via Stochastic Gradient Descent (SGD) based methods instead. This is where training BNNs becomes challenging. Suppose that we can evaluate the gradient ∂L/∂w for a given tuple (x, w, y). The question then is how can we use this gradient signal to update w, if w is restricted to binary values?
Currently, this problem is resolved by introducing an additional real-valued vector w̃ during training. We call these latent weights. During the forward pass we binarize the latent weights, w̃, deterministically such that

w_bin = sign(w̃). (2)

The gradient of the sign operation vanishes almost everywhere, so we rely on a "pseudo-gradient" to get a gradient signal on the latent weights, w̃ [1,11]. In the simplest case this pseudo-gradient, Φ, is obtained by replacing the binarization during the backward pass with the identity:

Φ(L) = ∂L/∂w_bin, i.e. ∂w_bin/∂w̃ is replaced by 1. (3)

This simple case is known as the "Straight-Through Estimator" (STE) [12,11]. The full optimization procedure is outlined in Algorithm 1. The combination of pseudo-gradient and latent weights makes it possible to apply a wide range of known methods to BNNs, including various optimizers (Momentum, Adam, etc.) and regularizers (L2-regularization, weight decay) [13][14][15].
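As an illustration, binarization with the identity straight-through estimator can be sketched in TensorFlow as follows; ste_sign is a hypothetical helper name, not taken from the paper's released code.

import tensorflow as tf

@tf.custom_gradient
def ste_sign(w):
    # forward pass: deterministic binarization to {-1, +1}
    w_bin = tf.where(w >= 0, tf.ones_like(w), -tf.ones_like(w))
    def grad(dy):
        # backward pass: identity pseudo-gradient (the STE); clipped
        # variants instead zero the gradient where |w| > 1
        return dy
    return w_bin, grad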
Latent weights introduce an additional layer to the problem and make it harder to reason about the effects of different optimization techniques in the context of BNNs. A better understanding of latent weights will aid the deployment of existing optimization techniques and can guide the development of novel methods.
For the sake of completeness we should mention there exists a closely related line of research which considers stochastic BNNs [16,17]. These networks fall outside the scope of the current work and in the remainder of this paper we focus exclusively on fully deterministic BNNs.
The Role of Latent Weights
Latent weights absorb network updates. However, due to the binarization function, modifications do not alter the behavior of the network unless a sign change occurs. Due to this, we suggest that the latent weight can be better understood when thinking of its sign and magnitude separately:

w̃ = w_bin · m, where w_bin = sign(w̃) and m = |w̃|. (4)

The role of the magnitude of the latent weights, m, is to provide inertia to the network. As the inertia grows, a stronger gradient-signal is required to make the corresponding binary weight flip. Each binary weight, w_bin, can build up inertia, m, over time as the magnitude of the corresponding latent weight increases. Therefore, latent weights are not weights at all: they encode both the binary weight, w_bin, and a corresponding inertia, m, which is really an optimizer variable much like momentum.

Algorithm 1: Training procedure for BNNs using latent weights. Note that the optimizer A may be stateful, although we have suppressed the state in our notation for simplicity.
input: Loss function L(f(x; w), y), batch size K
input: Optimizer A : g → δ_w, learning rate α
input: Pseudo-gradient Φ : L(f(x; w), y) → g ∈ R^n
initialize w̃ ← w̃_0 ∈ R^n;
while stopping criterion not met do
Sample minibatch {x^(1), ..., x^(K)} with labels y^(k);
Perform forward pass using w_bin = sign(w̃);
Compute gradient: g ← (1/K) Φ(Σ_k L(f(x^(k); w_bin), y^(k)));
Update latent weights: w̃ ← w̃ + α · A(g);
end
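For concreteness, a toy NumPy rendering of one step of Algorithm 1 with plain SGD as the optimizer A could look like this; pseudo_grad stands in for Φ, and the setup is illustrative rather than the paper's implementation.

import numpy as np

def latent_sgd_step(w_latent, pseudo_grad, lr=0.01):
    # forward pass uses the binarized weights (note: np.sign(0) == 0;
    # a real implementation maps 0 to +1)
    w_bin = np.sign(w_latent)
    g = pseudo_grad(w_bin)        # STE-style gradient w.r.t. w_bin
    w_latent = w_latent - lr * g  # the update accumulates in the latent weights
    return w_latent, w_bin

# toy usage: a constant gradient pulls every weight towards +1, yet weights
# with a large negative magnitude (high inertia) take many steps to flip
w = np.random.default_rng(0).normal(size=5)
for _ in range(100):
    w, w_bin = latent_sgd_step(w, lambda wb: -np.ones_like(wb))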
We contrast this inertia-based view with the common perception in the literature, which is to see the binary weights as an approximation to the real-valued weight vector. In the original BinaryConnect paper, the authors describe the binary weight vector as a discretized version of the latent weight, and draw an analogy to Dropout in order to explain why this may work [18,19]. Anderson and Berg argue that binarization works because the angle between the binarized vector and the weight vector is small [9]. Li et al. prove that, for a quadratic loss function, in BinaryConnect the real-valued weights converge to the global minimum, and argue this explains why the method outperforms Stochastic Rounding [20]. Merolla et al. challenge the view of approximation by demonstrating that many projections, onto the binary space and other spaces, achieve good results [21].
A simple experiment suggests the approximation viewpoint is problematic. After training the BNN, we can evaluate the network using the real-valued latent weights instead of the binarized weights, while keeping the binarization of the activations. If the approximation view is correct, using the real-valued weights should result in a higher accuracy than using the binary weights. We find this is not the case. Instead, we consistently see a comparable or lower train and validation accuracy when using the latent weights, even after retraining the batch statistics.
The concept of inertia enables us to better understand what happens during the optimization of BNNs. Below we review some key aspects of the optimization procedure from the perspective of inertia.
First and foremost, we see that in the context of BNNs, the optimizer is mostly changing the inertia of the network rather than the binary weights themselves. The inertia variables have a stabilizing effect: after being pushed in one direction for some time, a stronger signal in the reverse direction is required to make the weight flip. Meanwhile, clipping of latent weights, as is common practice in the literature, influences training by capping the inertia that can be accumulated.
In the optimization procedure defined by Algorithm 1, scaling of the learning rate does not have the role one may expect, as is made clear by the following theorem:

Theorem 1. The binary weight vector generated by Algorithm 1 is invariant under scaling of the learning rate, α, provided the initial conditions are scaled accordingly and the pseudo-gradient, Φ, does not depend on |w̃|.
The proof for Theorem 1 is presented in the appendix. An immediate corollary is that in this setting we can set an arbitrary learning rate for every individual weight as long as we scale the initialization accordingly.
We should emphasize the conditions to Theorem 1 are rarely met: usually latent weights are clipped, and many pseudo-gradients depend on the magnitude of the latent weight. Nevertheless, in experiments we have observed that the advantages of various learning rates can also be achieved by scaling the initialization. For example, when using SGD and Glorot initialization [11] a learning rate of 1 performs much better than 0.01; but when we multiply the initialized weights by 0.01 before starting training, we obtain the same improvement in performance. Theorem 1 also helps to understand why reducing the learning rate after training for some time helps: it effectively increases the already accumulated inertia, thus reducing noise during training. Other techniques that modify the magnitude of update-steps, such as the normalizing aspect of Adam and the layerwise scaling of learning rates introduced in [1], should be understood in similar terms. Note that the ceiling on inertia introduced by weight clipping may also play a role, and a full explanation requires further analysis.
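A toy numerical check of Theorem 1, under the assumption of no weight clipping and a pseudo-gradient independent of |w̃| (so the same gradient sequence applies to both runs), might look as follows.

import numpy as np

rng = np.random.default_rng(0)
grads = rng.normal(size=(100, 8))  # a fixed sequence of pseudo-gradients
alpha, C = 0.1, 37.0
w1 = rng.normal(size=8)            # latent weights, baseline run
w2 = C * w1                        # scaled initialization for the scaled run

for g in grads:
    w1 -= alpha * g                # learning rate alpha
    w2 -= C * alpha * g            # learning rate C * alpha

# both runs yield the same binary weight vector at every step
assert np.array_equal(np.sign(w1), np.sign(w2))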
Clearly, the benefits of using Momentum and Adam over vanilla-SGD that have been observed for BNNs [8] cannot be explained in terms of characteristics of the loss landscape (curvature, critical points, etc.) as is common in the real-valued context [22,14,23,24]. We hypothesize that the main effect of using Momentum is to reduce noisy behavior when the latent weight is close to zero. As the latent weight changes signs, the direction of the gradient may reverse. In such a situation, the presence of momentum may avoid a rapid sign change of the binary weight.
Bop: a Latent-Free Optimizer for BNNs
In this section we introduce the Binary Optimizer, referred to as Bop, which is, to the best of our knowledge, the first optimizer designed specifically for BNNs. It is based on three key ideas.
First, the optimizer has only a single action available: flipping weights. Any concept used in the algorithm (latent weights, learning rates, update steps, momentum, etc) only matters in so far as it affects weight flips. In the end, any gradient-based optimization procedure boils down to a single question: how do we decide whether to flip a weight or not, based on a sequence of gradients? A good BNN optimizer provides a concise answer to this question and all concepts it introduces should have a clear relation to weight flips.
Second, it is necessary to take into account past gradient information when determining weight flips: it matters that a signal is consistent. We define a gradient signal as the average gradient over a number of training steps. We say a signal is more consistent if it is present in longer time windows. The optimizer must pay attention to consistency explicitly because the weights are binary. There is no accumulation of update steps.
Third, in addition to consistency, there is meaningful information in the strength of the gradient signal.
Here we define strength as the absolute value of the gradient signal. As compared to real-valued networks, in BNNs there is only a weak relation between the gradient signal and the change in loss that results from a flip, which makes the optimization process more noisy. By filtering out weak signals, especially during the first phases of training, we can reduce this noisiness.
In Bop, which is described in full in Algorithm 2, we implement these ideas as follows. We select consistent signals by looking at an exponential moving average of gradients:

m_t = (1 − γ) m_{t−1} + γ g_t, (5)

where g_t is the gradient at time t, m_t is the exponential moving average and γ is the adaptivity rate.
A high γ leads to quick adaptation of the exponential moving average to changes in the distribution of the gradient.
It is easy to see that if the gradient g_t^i for some weight i is sampled from a stable distribution, m_t^i converges to the expectation of that distribution. By using this parametrization, γ becomes to an extent analogous to the learning rate: reducing γ increases the consistency that is required for a signal to lead to a weight flip.
We compare the exponential moving average with a threshold τ to determine whether to flip each weight:

w_t^i = −w_{t−1}^i if |m_t^i| > τ and sign(m_t^i) = sign(w_{t−1}^i), and w_t^i = w_{t−1}^i otherwise. (6)

This allows us to control the strength of selected signals in an effective manner. The use of a threshold has no analogue in existing methods. However, similar to using Momentum or Adam to update latent weights, a non-zero threshold avoids rapid back-and-forth of weights when the gradient reverses on a weight flip. Observe that a high τ can result in weights never flipping despite a consistent gradient pressure to do so, if that signal is too weak.
Algorithm 2: Bop, an optimizer for BNNs.
input: Loss function L(f(x; w), y), batch size K
input: Threshold τ, adaptivity rate γ
initialize w ← w_0 ∈ {−1, 1}^n, m ← m_0 ∈ R^n;
while stopping criterion not met do
Sample minibatch {x^(1), ..., x^(K)} with labels y^(k);
Compute gradient: g ← (1/K) Σ_k ∇L(f(x^(k); w), y^(k));
Update the moving average: m ← (1 − γ)m + γg;
Flip weights: for each i, set w^i ← −w^i if |m^i| > τ and sign(m^i) = sign(w^i);
end

Both hyperparameters, the adaptivity rate γ and the threshold τ, can be understood directly in terms of the consistency and strength of gradient signals that lead to a flip. A higher γ results in a more adaptive moving average: if a new gradient signal pressures a weight to flip, it will require fewer time steps to do so, leading to faster but more noisy learning. A higher τ, on the other hand, makes the optimizer less sensitive: a stronger gradient signal is required to flip a weight, reducing noise at the risk of filtering out valuable smaller signals.
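The update rule of Algorithm 2 is compact enough to sketch directly; the following NumPy version is illustrative (the names are ours, not taken from the released code).

import numpy as np

def bop_step(w, m, g, gamma=1e-4, tau=1e-8):
    # one Bop update for a binary weight vector w in {-1, +1}^n
    m = (1 - gamma) * m + gamma * g                        # Equation (5)
    flip = (np.abs(m) > tau) & (np.sign(m) == np.sign(w))  # Equation (6)
    w = np.where(flip, -w, w)
    return w, m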
As compared to existing methods, Bop drastically reduces the number of hyperparameters and the two hyperparameters left have a clear relation to weight flips. Currently, one has to decide on an initialization scheme for the latent weights, an optimizer and its hyperparameters, and optionally constraints or regularizations on the latent weights. The relation between many of these choices and weight flipping -the only thing that matters -is not at all obvious. Furthermore, Bop reduces the memory requirements during training: it requires only one real-valued variable per weight, while the latent-variable approach with Momentum and Adam require two and three respectively.
Note that the concept of consistency here is closely related to the concept of inertia introduced in the previous section. If we initialize the latent weights in Algorithm 1 at zero, they contain a sum, weighted by the learning rate, over all gradients. Therefore, its sign is equal to the sign over the weighted average of past gradients. This introduces an undue dependency on old information. Clipping of the latent weights can be seen as an ad-hoc solution to this problem. By using an exponential moving average, we eliminate the need for latent weights, a learning rate and arbitrary clipping; at the same time we gain fine-grained control over the importance assigned to past gradients through γ.
We believe Bop should be viewed as a basic binary optimizer, similar to SGD in real-valued training. We see many research opportunities both in the direction of hyperparameter schedules and in adaptive variants of Bop. In the next section, we explore some basic properties of the optimizer.
Hyperparameters
We start by investigating the effect of different choices for γ and τ. To better understand the behavior of the optimizer, we monitor the accuracy of the network and the ratio of weights flipped at each step using the following metric:

π_t = log((number of flipped weights at time t) / (total number of weights) + e^{−9}). (7)

Here e^{−9} is added to avoid log(0) in the case of no weight flips.
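As a sketch, this metric can be computed as follows (illustrative NumPy, not the authors' code).

import numpy as np

def flip_ratio(w_prev, w_curr):
    # Equation (7): log fraction of weights flipped at this step, with
    # e^-9 added so that a step with zero flips gives log(e^-9) = -9
    flips = np.sum(w_prev != w_curr)
    return np.log(flips / w_prev.size + np.exp(-9))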
The results are shown in Figure 1. We see the expected patterns in noisiness: both a higher γ and a lower τ increase the number of weight flips per time step. The lower panels show π t , as defined in Equation (7), for the last layer of the network. On the left side we compare three values for γ, while keeping τ fixed at 10 −6 . On the right we compare three values for τ , while keeping γ fixed at 10 −3 . We see that both high γ and low τ lead to rapid initial learning but result in high flip rates, while low γ and high τ result in slow learning and near-zero flip rates.
More interesting is the corresponding pattern in accuracy. For both hyperparameters, we find there is a "sweet spot". Choosing a very low γ and high τ leads to extremely slow learning. On the other hand, overly aggressive hyperparameter settings (high γ and low τ ) result in rapid initial learning that quickly levels off at a suboptimal training accuracy: it appears the noisiness prevents further learning.
If we look at the validation accuracy for the two aggressive settings ((γ, τ ) = (10 −2 , 10 −6 ) and (γ, τ ) = (10 −3 , 0)), we see the validation accuracy becomes highly volatile in both cases, and deteriorates substantially over time in the case of τ = 0. This suggests that by learning from weak gradient-signals the model becomes more prone to overfit. The observed overfitting cannot simply be explained by a higher sensitivity to gradients from a single example or batch, because then we would expect to observe a similarly poor generalization for high γ.
These empirical results validate the theoretical considerations that informed the design of the optimizer in the previous section. The behavior of Bop can be easily understood in terms of weight flips. The poor results for high γ confirm the need to favor consistent signals, while our results for τ = 0 demonstrate that filtering out weak signals can greatly improve optimization.
CIFAR-10
We use a VGG [25] inspired network architecture, equal to the implementation used by Courbariaux et al. [1]. We scale the RGB images to the interval [−1, +1], and use the following data augmentation during training to improve generalization (as first observed in [26] for CIFAR datasets): 4 pixels are padded on each side, a random 32 × 32 crop is applied, followed by a random horizontal flip. During test time the scaled images are used without any augmentation. The experiments were conducted using TensorFlow [27] and NVIDIA Tesla V100 GPUs.
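The augmentation pipeline described above is straightforward to express in TensorFlow; a possible sketch (ours, not the paper's code) is:

import tensorflow as tf

def augment(image):
    # pad 4 pixels on each side, then take a random 32 x 32 crop
    image = tf.pad(image, [[4, 4], [4, 4], [0, 0]])
    image = tf.image.random_crop(image, size=[32, 32, 3])
    # random horizontal flip
    return tf.image.random_flip_left_right(image)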
In assessing the new optimizer, we are interested in both the final test accuracy and the number of epochs it requires to achieve this. As discussed in [8], the training time for BNNs is currently far longer than what one would expect for the real-valued case, and is in the order of 500 epochs, depending on the optimizer. To benchmark Bop we train for 500 epochs with threshold τ = 10 −8 , adaptivity rate γ = 10 −4 decayed by 0.1 every 100 epochs, batch size 50, and use Adam with the recommended defaults for β 1 , β 2 , and ε [14] and an initial learning rate of α = 10 −2 to update the real-valued variables in the Batch Normalization layers [28]. We use Adam with latent real-valued weights as a baseline, training for 500 epochs with Xavier learning rate scaling [18] (as recommended in [8]) using the recommended defaults for β 1 , β 2 and ε, learning rate 10 −3 , decayed by 0.1 every 100 epochs, and batch size 50. The results for the top-1 training and test accuracy are summarized in Figure 2. Compared to the baseline test accuracy of 90.9%, Bop reaches 91.3%. The baseline accuracy was highly tuned using an extensive random search over the initial learning rate and the learning rate schedule, and improves the result found in [1] by 1.0%.
ImageNet
We test Bop on ImageNet by training three well-known binarized networks from scratch: BinaryNet, a binarized version of AlexNet [29]; XNOR-Net, an improved version of BinaryNet that uses real-valued scaling factors and real-valued first and last layers [6]; and BiReal-Net, which introduced real-valued shortcuts to binarized networks and achieves drastically better accuracy [4].
We train BinaryNet and BiReal-Net for 150 epochs and XNOR-Net for 100 epochs. We use a batch size of 1024 and standard preprocessing with random flip and resize but no further augmentation. For all three networks we use the same optimizer hyperparameters. We set the threshold to 1 · 10 −8 and decay the adaptivity rate linearly from 1 · 10 −4 to 1 · 10 −6 . For the real-valued variables, we use Adam with a linearly decaying learning rate from 2.5 · 10 −3 to 5 · 10 −6 and otherwise default settings (β 1 = 0.9, β 2 = 0.999 and ε = 1 · 10 −7 ). After observing overfitting for XNOR-Net we introduce a small l2-regularization of 5 · 10 −7 on the (real-valued) first and last layer for this network only. For binarization of the activations we use the STE in BinaryNet and XNOR-Net and the ApproxSign for BiReal-Net, following Liu et al. [4]. Note that since Bop keeps the weights binary at all times, there is no weight binarization in the forward pass and hence no pseudo-gradient needs to be defined for the weights. Moreover, whereas XNOR-Net and BiReal-Net effectively binarize to {−α, α} by introducing scaling factors, we learn strictly binary weight kernels.
The results are shown in Table 1. We obtain competitive results for all three networks. We emphasize that while each of these papers introduce a variety of tricks, such as layer-wise scaling of learning rates in [1], scaled binarization in [6] and a multi-stage training protocol in [4], we use almost identical optimizer settings for all three networks. Moreover, our improvement on XNOR-Net demonstrates scaling factors are not necessary to train BNNs to high accuracies, which is in line with earlier observations [30].
Discussion
In this paper we offer a new interpretation of existing deterministic BNN training methods which explains latent real-valued weights as encoding inertia for the binary weights. Using the concept of inertia, we gain a better understanding of the role of the optimizer, various hyperparameters, and regularization. Furthermore, we formulate the key requirements for a gradient-based optimization procedure for BNNs and guided by these requirements we introduce Bop, the first optimizer designed for BNNs. With this new optimizer, we have exceeded the state-of-the-art result for BinaryNet on CIFAR-10 and achieved a competitive result on ImageNet for three well-known binarized networks.
Our interpretation of latent weights as inertia differs from the common view of BNNs, which treats binary weights as an approximation to latent weights. We argue that the real-valued magnitudes of latent weights should not be viewed as weights at all: changing the magnitudes does not alter the behavior of the network in the forward pass. Instead, the optimization procedure has to be understood by considering under what circumstances it flips the binary weights.
The approximation viewpoint has not only shaped understanding of BNNs but has also guided efforts to improve them. Numerous papers aim at reducing the difference between the binarized network and its real-valued counterpart. For example, both the scaling introduced by XNOR-Net (see eq. (2) in [6]) and DoReFa (eq. (7) in [31]), as well as the magnitude-aware binarization introduced in Bi-Real Net (eq. (6) in [4]) aim at bringing the binary vector closer to the latent weight. ABC-Net maintains a single real-valued weight vector that is projected onto multiple binary vectors (eq. (4) in [5]) in order to get a more accurate approximation (eq. (1) in [5]). Although many of these papers achieve impressive results, our work shows that improving the approximation is not the only option. Instead of improving BNNs by reducing the difference with real-valued networks during training, it may be more fruitful to modify the optimization method in order to better suit the BNN.
Bop is the first step in this direction. As we have demonstrated, it is conceptually simpler than current methods and requires less memory during training. Apart from this conceptual simplification, the most novel aspect of Bop is the introduction of a threshold τ . We note that when setting τ = 0, Bop is mathematically similar to the latent weight approach with SGD, where the moving averages m now play the role of latent variables.
The threshold that is used introduces a dependency on the absolute magnitude of the gradients. We hypothesize the threshold helps training by selecting the most important signals and avoiding rapid changes of a single weight. However, a fixed threshold for all layers and weights may not be the optimal choice. The success of Adam for real-valued methods and the invariance of latent-variable methods to the scale of the update step (see Theorem 1) suggest some form of normalization may be useful. We see at least two possible ways to modify thresholding in Bop. First, one could consider layer-wise normalization of the exponential moving averages. This would allow selection of important signals within each layer, thus avoiding situations in which some layers are noisy and other layers barely train at all. A second possibility is to introduce a second moving average that tracks the magnitude of the gradients, similar to Adam.
Another direction in which Bop may be improved is the exploration of hyperparameter schedules. The adaptivity rate, γ, may be viewed as analogous to the learning rate in real-valued optimization. Indeed, if we view the moving averages, m, as analogous to latent weights, lowering γ is analogous to decreasing the learning rate, which by Theorem 1 increases inertia. Reducing γ over time therefore seems like a sensible approach. However, any analogy to the real-valued setting is imperfect, and it would be interesting to explore different schedules.
Hyperparameter schedules could also target the threshold, τ , (or an adaptive variation of τ ). We hypothesize one should select for strong signals (i.e. high τ ) in the first stages of training, and make training more sensitive by lowering τ over time, perhaps while simultaneously lowering γ. However, we stress once again that such intuitions may prove unreliable in this unexplored context.
More broadly, the shift in perspective presented here opens up many opportunities to further improve optimization methods for BNNs. We see two areas that are especially promising. The first is regularization. As we have argued, it is not clear that applying L2-regularization or weight decay to the latent weights should lead to any regularization at all. Applying Dropout to BNNs is also problematic. Either the zeros introduced by dropout are projected onto {−1, +1}, which is likely to result in a bias, or zeros appear in the convolution, which would violate the basic principle of BNNs. It would be interesting to see custom regularization techniques for BNNs. One very interesting work in this direction is [32].
A Proof for Theorem 1
Proof. Consider a single weight. Let w̃_t be the latent weight at time t, g_t the pseudo-gradient and δ_t the update step generated by the optimizer A. Then:

w̃_{t+1} = w̃_t + αδ_t.

Now take some positive scalar C by which we scale the learning rate. Replace the weight by ṽ_t = Cw̃_t. Since sign(ṽ_t) = sign(w̃_t), the binary weight is unaffected. Therefore the forward pass at time t is unchanged and we obtain an identical pseudo-gradient g_t and update step δ_t. We see:

ṽ_{t+1} = ṽ_t + Cαδ_t = C · (w̃_t + αδ_t) = Cw̃_{t+1}.

Thus sign(ṽ_{t+1}) = sign(w̃_{t+1}). By induction, this holds for all t′ > t, and we conclude the BNN is unaffected by the change in learning rate. | 2019-06-05T16:32:39.000Z | 2019-06-05T00:00:00.000 | {
"year": 2019,
"sha1": "c1e8d9df347b8de53fc2116615b1343ba327040d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fcfc3ae9229542f1955f2b1f3345fc45a6a3b4ae",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
253700321 | pes2o/s2orc | v3-fos-license | Distinctive Morphological Patterns of Complicated Coronary Plaques in Acute Coronary Syndromes: Insights from an Optical Coherence Tomography Study
Optical coherence tomography (OCT) is an ideal imaging technique for assessing culprit coronary plaque anatomy. We investigated the morphological features and mechanisms leading to plaque complication in a single-center observational retrospective study on 70 consecutive patients with an established diagnosis of acute coronary syndrome (ACS) who underwent OCT imaging after coronary angiography. Three prominent morphological entities were identified. Type I or intimal discontinuity, which was found to be the most common mechanism leading to ACS and was seen in 35 patients (50%), was associated with thrombus (68.6%; p = 0.001), mostly affected the proximal plaque segment (60%; p = 0.009), and had no distinctive underlying plaque features. Type II, a significant stenosis with vulnerability features (inflammation in 16 patients, 84.2%; thin-cap fibroatheroma (TCFA) in 10 patients, 52.6%) and a strong association with lipid-rich plaques (94.7%; p = 0.002), was observed in 19 patients (27.1%). Type III, a protrusive calcified nodule, which was found to be the dominant morphological pattern in 16 patients (22.9%), was found in longer plaques (20.8 mm vs. 16.8 mm ID vs. 12.4 mm SS; p = 0.04) and correlated well with TCFA (93.8%; p = 0.02) and inflammation (81.3%). These results emphasize the existence of a wide spectrum of coronary morphological patterns related to ACS.
Introduction
Acute coronary syndromes (ACS), the most severe expressions of coronary artery disease, remain a leading cause of morbidity and mortality worldwide despite continuous advances in acute care and primary and secondary prevention [1]. Based on in vitro studies [2], coronary atherosclerosis and plaque destabilization with subsequent thrombus formation are the main pathophysiological mechanisms in the majority of cases. There are also other non-atherosclerotic causes of ACS, such as spontaneous coronary artery dissection or spontaneous recanalization of coronary thrombus [3], which are less frequent but important to recognize as they require different therapeutic approaches.
Coronary angiography (CA) is a well-established invasive procedure used in both stable and acute settings for the assessment of the extent of coronary artery disease and the guidance of treatment strategies [4]. As no imaging technique is flawless, CA has known limitations regarding its inability to provide an accurate characterization of plaque morphology as well as a proper stenosis severity grading [4]. Different techniques currently exist for plaque characterization, ranging from noninvasive modalities, such as coronary computed tomography (CT), to intravascular imaging (intravascular ultrasound (IVUS) or optical coherence tomography (OCT)). Advances in coronary CT now allow for assessment of plaque geometry and composition [5,6] and it was proven that some of the features identified on CT imaging could correlate with the risk of future clinical events [7]. Nevertheless, CT interpretation is sometimes cumbersome due to certain factors, such as "the blooming" effect and low image resolution.
OCT, an intravascular imaging technique that utilizes light waves, has emerged as an adjuvant to CA, providing high-quality cross-sectional images of the vessel wall and luminal area with better resolution and tissue characterization, though it is at the cost of lower penetration when compared to IVUS [4,8]. By means of different backscattering and attenuation properties, each tissue component offers a specific OCT image. OCT provides invaluable insights in the setting of ACS [3,[8][9][10][11][12][13][14][15][16][17][18][19][20] by allowing the exclusion of non-atherosclerotic causes [9][10][11] as well as defining the vulnerable plaque by evaluating the fibrous cap thickness and degree of macrophage infiltration [4]. Equally important, it can assess the culprit atherosclerotic lesions, with its favorable ability in detecting plaque erosion (PE), plaque rupture (PR) [21], or due to adequate calcium penetration, the calcified nodule (CN) [22]. In addition, it offers a suitable discriminating capacity between red and white thrombus [23].
The aim of this study was to investigate the morphological features of culprit coronary plaques in ACS patients, using OCT imaging. We analyzed how complicated plaques appear on the luminal interface, the underlying plaque structure, and the topography of plaque complications along the plaque length. The correlation with the clinical picture was evaluated.
Study Population
This was an observational retrospective study conducted in a single tertiary center in Romania: Cluj County Emergency Hospital, Department of Interventional Cardiology. Consecutive patients with an established diagnosis of ACS and an indication of invasive CA who underwent OCT imaging after angiography between January 2012 and May 2021 were included.
OCT images from 82 consecutive patients were initially included. After applying the aforementioned criteria, nine patients were excluded, with three additional patients being considered ineligible due to the presence of a large thrombotic mass, which impaired lesion characterization. In total, 70 patients with 70 subsequent culprit lesions were deemed suitable for analysis ( Figure 1).
Cardiovascular risk factors were defined as follows: arterial hypertension as systolic blood pressure > 140 mmHg and/or diastolic blood pressure > 90 mmHg or treated hypertension; diabetes mellitus as glycated hemoglobin level ≥ 6.5% and/or fasting glucose level ≥ 126 mg/dL or the use of anti-diabetic drugs; dyslipidemia as low-density lipoprotein cholesterol > 130 mg/dL and/or triglycerides > 150 mg/dL or treated dyslipidemia; and overweight condition as body mass index > 25 kg/m 2 .

STEMI was defined as continuous typical chest pain that lasted more than 30 min associated with an ST-segment elevation of at least 0.1 mV in 2 or more contiguous leads or new left bundle branch block on the 12-lead electrocardiogram and elevated cardiac biomarkers (high-sensitivity cardiac troponin I, creatine kinase, and creatine kinase-MB). NSTEMI was defined as ischemic symptoms with elevated cardiac enzymes in the absence of persistent ST-segment elevation on the electrocardiogram, whereas UAP was defined as ischemic symptoms at rest in the absence of ST-segment elevation or positive cardiac biomarkers.
For the measurement of the troponin values, an Access hsTnI kit (Beckman Coulter, Brea, CA, USA) was used. This kit contains paramagnetic particles coated with mouse monoclonal anti-human cTnI antibody suspended in TRIS-buffered saline, with surfactant, bovine serum albumin (BSA), and sheep monoclonal anti-human cTnI alkaline phosphatase conjugate diluted in buffered saline, with surfactant, BSA matrix, and proteins.
The culprit lesion was confirmed based on electrocardiographic changes, echocardiographic wall motion abnormalities and angiographic appearance. CA was performed using the existing on-site angiograph: Siemens Artis Zee (Siemens Healthineers, Erlangen, Germany).
CA and OCT imaging were performed by three senior interventional cardiologists during working hours and two days/week on call, according to the current guideline [24,25] and consensus standards [26].
OCT Acquisition Technique and Image Analysis
OCT uses near-infrared light (~1300 nm) to provide a tissue penetration of up to 3 mm with a high axial (10-20 µm) and lateral resolution (20-40 µm). Each OCT system consists of an imaging catheter, a drive motor operating control, and imaging software (Software version E.5.2.1, St. Jude Medical, St. Paul, MN, USA) [4].
OCT imaging was performed using a frequency-domain ILUMIEN TM OPTIS TM OCT system (St. Jude Medical, St. Paul, MN, USA) and C7 Dragonfly TM /Dragonfly TM OPTIS TM over-the-wire catheter (St. Jude Medical, St. Paul, MN, USA). The optical probe was manually advanced, distal to the region of interest, followed by automated pullback at a speed of 20 mm/s with simultaneous blood displacement using contrast media manually injected through the guiding catheter.
All OCT images were digitally archived in a dedicated database (RoM1OCTRegistry) and then analyzed by two independent, experienced interventional cardiologists (C.H. and M.O.), who were blinded to the patients' clinical and paraclinical findings. Any inconsistencies between the observers were mediated by a third physician (D.M.O.). Image post-processing was employed using deep learning-based models to potentially provide an automated assessment of coronary artery disease. When the culprit lesion was identified, key morphological features were defined, as shown in Table 1.
Total plaque length was measured as the distance from diseased-to-diseased segment. Plaque anatomy was analyzed at increments of 1 mm. Each plaque was separated by a 5 mm disease-free section. Lipid-rich plaques (LRPs) and thin-cap fibroatheromas (TCFA) were defined as containing two or more quadrants of lipid pool/necrotic core.

This study complies with the Declaration of Helsinki on human research. Reporting of the study conforms to the broad EQUATOR guidelines [29].
Statistical Analysis
Statistical analysis was performed using IBM SPSS version 26.0 (SPSS Inc, Chicago, IL, USA) from a Microsoft Excel 2019 database. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as counts and percentages. To study the difference between continuous variables, the Kruskal-Wallis test was used, without making any assumptions on data distribution. Pearson's chi-squared test was performed for correlations between categorical dichotomous or multinomial variables. Fisher's exact test was instead employed when the expected cell count in the cross-tabulation was less than 5. A p-Value < 0.05 was considered statistically significant.
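As an illustration of these tests, a minimal SciPy sketch might look as follows; all arrays are placeholder values, not data from this study.

import numpy as np
from scipy import stats

# placeholder continuous measurements for the three plaque types
length_id = np.array([18.0, 21.5, 19.2, 23.1])
length_ss = np.array([15.4, 17.0, 16.1, 18.2])
length_cn = np.array([20.1, 22.4, 19.8, 21.7])

# Kruskal-Wallis test across the three groups (no distributional assumptions)
h_stat, p_kw = stats.kruskal(length_id, length_ss, length_cn)

# Pearson's chi-squared test on a cross-tabulation
# (rows: plaque type; columns: feature present / absent)
crosstab = np.array([[24, 11], [18, 1], [15, 1]])
chi2, p_chi, dof, expected = stats.chi2_contingency(crosstab)

# Fisher's exact test for a 2x2 table with expected counts below 5
odds_ratio, p_fisher = stats.fisher_exact([[13, 3], [2, 17]])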
Results
The baseline patient characteristics are summarized in Table 2. The average patient age was 59.7 years, with more than two-thirds of the patients being male (65.7%). The risk profile was similar between the groups, with a high prevalence of hypertension (78.6%) and dyslipidemia (65.7%) and a low rate of smoking (20%) and diabetes mellitus (25.7%). With respect to the diagnosis at admission, a larger proportion of patients had UAP (48.6%), as compared to STEMI (24.3%) or NSTEMI (27.1%). The main angiographic findings are presented in Table 3. The left anterior descending artery was the culprit vessel in 78.6% of cases with no differences between the groups in terms of predilection for a particular vessel (p = 0.64). In 44.3% of cases, culprit lesions were severely stenotic, whereas more than half of the patients (55.7%) showed multivessel disease status. Three distinct morphological entities were identified (Figure 2). Type I, intimal discontinuity (ID), was the most common mechanism leading to ACS and was seen in 35 patients (50%). Type II, a significant stenosis (SS) with vulnerability features, including macrophage infiltration in 16 patients (84.2%) and TCFA in 10 patients (52.6%), was present in 19 patients (27.1%). Type III, a protrusive CN, was the dominant morphological pattern in 16 patients (22.9%).
The clinical presentation of ACS was significantly correlated with plaque morphology. Most of the patients underwent invasive treatment, mainly PCI (74.3%). Identification of an SS type II lesion prompted specific treatment in all cases, while a conservative management was employed in 9/35 patients (25.7%) with ID type I and 5/16 patients (31.2%) with CN type III plaques, respectively (Table 2).
The presence of a thrombus was significantly associated with ID plaques (68.6%; p = 0.001) (Figure 3). TCFA was significantly more prevalent in CN plaques (93.8%; p = 0.02), as compared to ID type I or type II plaques. Intimal inflammation had a similarly high prevalence within the three patterns (85.7% in ID, 84.2% in SS, 81.3% in CN; p = 0.92). Healed plaque component prevalence was also similar across the groups (22.9% in ID, 21.1% in SS, 25% in CN; p = 0.96). LRPs were highly prevalent and significantly associated with SS lesions (94.7%; p = 0.002).
Evaluation of the intraplaque topography of complications (Figure 4) showed that only ID had a particular pattern of occurrence, that is, in the proximal plaque segment (60%; p = 0.009). When analyzing the underlying plaque composition for the type I entity, four main determinants of vulnerability were found: an inflamed thick-cap fibroatheroma, TCFA, TCFA flanking superficial calcium sheets, and healed plaque component (Figure 5). None of the ruptures or erosions showed a strong correlation with any of the latter (p = 0.35), but there was a trend towards a larger number of erosions arising in the context of TCFA with superficial calcium compared to PR (58.3% vs. 34.8%).

Regarding cardiovascular risk-lowering drugs, 51.4% of the patients were on chronic statin treatment prior to hospital admission, while 40% received aspirin. In the statin group there was a signal towards a lower number of patients manifesting ID lesions (15/35 patients, 42.9%) while more patients presented with SS type II plaques (13/19 patients, 68.4%), p = 0.1 (Table 2).
Discussion
In this in vivo OCT study three main morphological patterns leading to ACS were identified: ID, SS, and CN. ID was the most prevalent aspect, generally affecting the proximal plaque segment, and it was associated with the presence of a thrombotic mass and had no distinct underlying plaque features. The clinical appearance of ID plaques was mostly UAP but could also be NSTEMI or STEMI. SS lesions typically appeared on LRPs and were associated with vulnerability characteristics: inflammation and TCFA. The clinical feature of SS plaques was mainly UAP. CN were relatively prevalent and strongly correlated with TCFA, inflammation, and longer coronary plaques. The clinical feature of CN was mainly NSTEMI.
With respect to the type I entity, PR is the most prevalent cause of coronary thrombosis in patients with sudden cardiac death or fatal myocardial infarction (MI), as shown by pathology studies [30,31] (60-73% of cases). In patients presenting with acute non-fatal MI, PR was found in 66% of cases, as detected by IVUS [32], and in 44-73% of cases, as shown by OCT studies [21,22]. Moreover, it is known that the incidence of PR is higher in STEMI patients [33] (70%; p = 0.03), while the presence of a large thrombotic mass is mainly seen in STEMI patients and, to a smaller degree, in NSTEMI ones [33] (78% vs. 27%; p < 0.001).
In contrast to these findings, the incidence of PR seen in our study was lower: only 32.9%. Several factors could have accounted for this difference. Studies that found higher PR incidences [21,22,[30][31][32] included more severe patients, either with sudden cardiac death and fatal MI or non-fatal MI, whereas most of our patients had UAP (48.6%). In addition, a selection bias in our research was related to the impaired imaging conditions in the presence of a large and occlusive thrombus, which determined the operators to limit the use of OCT imaging. This led to a low prevalence of STEMI and NSTEMI patients included in our study (24.3% and 27.1%, respectively). Another important aspect to consider is prior medication, as it was shown that patients on chronic statin treatment had lower incidences of PR [34]. Indeed, more than half of our cohort received statin before admission and there was a trend towards fewer ID lesions seen in these patients. PE was identified in up to 31% of ACS cases, being the second cause of coronary plaque thrombosis [31]. In comparison, our study showed a lower incidence of 17.1% for this entity. This could be explained by the clinical status of our patients, with a high prevalence of UAP. Another possible explanation for our findings could be the particular risk profile of our patients, notably the low incidence of smoking (20%). Smoking is one of the most powerful cardiovascular risk factors with a dose-dependent effect on MI rate (eight-fold increased risk for those smoking more than 25 cigarettes per day) [35]. It was proven that smoking is associated with PE in both men and women [30]. This lower smoking risk profile may have accounted for the lower rate of PE seen in our study.
Our study has shown that ID lesions (both PE and PR) tend to occur more often in the proximal plaque segments. Similar results were obtained by Fukumoto et al. [36], by means of a three-dimensional IVUS color mapping system (used to localize areas of elevated shear stress along the plaque length), showing that proximal plaque segments are more prone to PR. In contrast, a small retrospective study [37] demonstrated PE to have mostly a distal localization (p = 0.01). Results should be interpreted with caution in the latter instance since in the erosion group, lesion preparation with predilatation was often used, which could have led to alterations in the plaque morphology. Moreover, removal of thrombus from the proximal segments through thromboaspiration could have arisen, thus creating a pseudo-distal erosion impression.
The identification of the complications leading to ACS in the proximal part of atherosclerotic plaques, together with the fact that these plaques are often of borderline angiographic severity, supports the value of OCT imaging. This could improve the detection of the proximal complicated plaque segments, which may have been missed by CA, thus resulting in the selection of longer stents to allow for optimal lesion coverage.
In our cohort, no specific underlying plaque morphology was significantly associated with the type I plaque complication profile. Our study showed that ID lesions may occur in various forms of complicated plaques, from thin- to thick-cap fibroatheroma, in superficial calcium or healed plaques. Although pathology data [31] show that PR generally occurs on TCFAs, Jia et al. [22] found that 100% of OCT-detected PR occurred on LRPs, while only 67.3% of them were TCFAs. In another study on 1660 STEMI patients [38], PR was detected on thick-cap fibroatheromas in 10.2% of cases. In the situation of PE, the substrate is different: pathological intimal thickening in 16% of cases and LRPs in 84% of cases, as demonstrated by in vitro OCT studies [39]. In vivo studies [22,38] show an incidence of 50-56% for fibrous plaque and 44-50% for LRPs (including 13.5% TCFAs).
A high macrophage content was observed in this group (85.7%), especially in the case of PR (95.4%). This could provide a potential explanation for the heterogeneity of the underlying anatomy, as intimal inflammation could increase the risk of complication even in plaques that appear more "stable".
Our research showed a trend towards more erosions in association with TCFA overlying superficial calcium plates (58.3% vs. 34.8% PR). This suggests that superficial calcium sheets can cause micro-disruptions in the intima and lead to luminal thrombosis. Accumulation of spotty calcifications in the necrotic core close to the fibrous cap is a known risk marker of plaque vulnerability and rupture [40]. However, Costopoulos et al. [41] observed that plaque shear stress (hence, the risk for PR) is reduced when dense calcium ≥10% is present. Moreover, it was demonstrated [42] that low plaque shear stress promotes low-density lipoprotein filtration, thus being associated with plaque progression but not destabilization. Future studies are needed to fully clarify the underlying mechanisms correlating calcified plaque components and vulnerability features.
Regarding the type II entity, a significant finding that derived from our work is that an SS plaque can lead to an ACS in 27.1% of cases. This can be attributed to the fact that our cohort reflects a real-life scenario, with patients from the full ACS spectrum being included. Most of the existing studies enrolled only STEMI and NSTEMI patients. In our study, most (48%) of the ACS patients presented with UAP and it is to be noted that 73% of the UAP patients exhibited this pattern of association between severe stenosis and features of high complication risk, mainly inflammation (84.2%) and TCFA (52.6%).
Manoharan et al. [43] found that in both STEMI patients after thromboaspiration and in those with NSTEMI/UAP, the culprit lesions were at least 50% stenotic by angiographic assessment. The PROSPECT study [44] included 700 patients with ACS and demonstrated that three IVUS parameters can predict future clinical events: plaque burden >70%, minimum lumen area <4 mm², and the presence of TCFA. We found a significant correlation between SS lesions and both LRPs and macrophage infiltration. Furthermore, TCFA was present in more than half of the cases. Given that none of these type II lesions exhibited OCT signs of thrombosis, we may consider that other adjuvant factors could play a role in the ACS clinical picture: added epicardial vasospasm, microvascular dysfunction, or transient micro-thrombosis on high-risk plaques, with spontaneous in situ resolution or distal embolism, leaving the plaque thrombus-free. As all of our ACS patients received potent antithrombotic therapy upon admission, superficial micro-thrombosis may have undergone resolution by the time of OCT imaging.
In relation to the type III cohort, a protrusive CN emerged as an important entity in our study, with an incidence of 22.9%. Data from both pathology and OCT studies describe this morphological pattern in only 5-8% of culprit plaques of ACS patients [39,45]. It is known that the presence of coronary calcification is a sign of advanced atherosclerosis and, subsequently, worse clinical outcomes [46]. Our findings are consistent with these data: patients with a CN have significantly longer plaques and a trend towards more severe coronary artery disease. Moreover, CN strongly correlated with the presence of TCFA (93.8%), another hallmark of advanced disease. In a study by Sugiyama et al. [47], culprit calcified plaques accounted for only 12.7% of ACS cases and were classified into three groups based on the pattern of calcification and the integrity of the fibrous cap: eruptive calcified nodules, superficial calcific sheet, and calcified protrusion. We believe that the superficial calcific sheet subtype, defined as "sheet-like superficial calcific plate without erupted nodules or protruding mass into the lumen", may be considered equivalent to the TCFA overlying superficial calcium observed by us. Macrophage infiltration was also prevalent in this group (81.3%), which is in line with the developing process of TCFA and intimal calcification.
As our study investigated only patients with "traditional" risk factors, a particular area of interest for future research could be certain high-risk conditions associated with accelerated atherosclerosis, such as acquired immunodeficiency syndrome [48]. Shedding light on their pathological features could help in the disease management and development of certain tailored treatments.
Several limitations of this study need to be addressed. Given the small, single-center cohort and the retrospective design, caution is advised when interpreting the data, which should be considered hypothesis-generating. As clinical and imaging follow-up was beyond the scope of this investigation, the long-term significance of these findings remains unknown; nevertheless, they pave the road for future studies. Fewer STEMI and NSTEMI patients were included compared to UAP. The left anterior descending artery was the examined vessel in the vast majority of cases, partially because OCT was more likely to be performed in this situation given the prognostic implications of this vessel. A small number of patients were excluded, mainly because of poor image quality, in-stent complications, and thrombus precluding analysis of the underlying plaque.
Conclusions
This study demonstrates a wide spectrum of culprit plaque morphological patterns in ACS patients, corresponding to the clinical heterogeneity of this disease. Type I or ID plaque (with PR or PE) is the most common feature (50% of cases) and mostly affects the proximal plaque segments. In patients with angiographic stenoses of borderline severity, this could impact clinical practice, since OCT lesion imaging could guide the physician in selecting longer stents, thus providing optimal proximal lesion coverage.
There were more PEs in relation to TCFA overlying superficial calcium plates, a finding that could inform future studies. Type II or SS plaque emerged as an important entity and was seen in 27.1% of cases. It mostly occurs in UAP patients, has an underlying LRP, and is associated with vulnerability hallmarks (intimal inflammation and TCFA). Type III or CN plaque has a prevalence higher than previously described (22.9%), is mainly found in NSTEMI patients with longer coronary plaques and more severe disease, and is strongly correlated with the presence of TCFA and macrophage infiltration.
Quantitative assessment of the blood-brain barrier opening caused by Streptococcus agalactiae hyaluronidase in a BALB/c mouse model
Streptococcus agalactiae is a pathogen causing meningitis in animals and humans. However, little is known about the entry of S. agalactiae into brain tissue. In this study, we developed a BALB/c mouse model based on the intravenous injection of β-galactosidase-positive Escherichia coli M5 as an indicator of blood-brain barrier (BBB) opening. Under physiological conditions, the BBB is impermeable to E. coli M5. In pathological conditions caused by S. agalactiae, E. coli M5 is capable of penetrating the brain through a disrupted BBB. The level of BBB opening can be assessed by quantitative measurement of E. coli M5 loads per gram of brain tissue. Further, we used the model to evaluate the role of S. agalactiae hyaluronidase in BBB opening. The inactivation of hylB gene encoding a hyaluronidase, HylB, resulted in significantly decreased E. coli M5 colonization, and the intravenous injection of purified HylB protein induced BBB opening in a dose-dependent manner. This finding verified the direct role of HylB in BBB invasion and traversal, and further demonstrated the practicability of the in vivo mouse model established in this study. This model will help to understand the S. agalactiae–host interactions that are involved in this bacterial traversal of the BBB and to develop efficacious strategies to prevent central nervous system infections.
neutrophil chemokines IL-8, CXCL-1, CXCL-2, CCL-20 and IL-6 in brain endothelium, and therefore increased the permeability of the BBB [19]. Several investigations in animals have also shown that GBS can penetrate the CNS by crossing the BBB after a prolonged period of bacteremia [20,21]. Therefore, the BBB plays an important role in controlling the entry of pathogens into the brain.
Increased permeability of the BBB can be seen in bacterial meningitis caused by S. agalactiae [22,23]. However, how this bacterium crosses the BBB and enters the CNS is not clearly understood. Therefore, it is of crucial importance to characterize BBB permeability in order to better understand the pathogenesis of meningitis. In our previous study, the inactivation of the hylB gene encoding a hyaluronidase, HylB, resulted in significantly decreased brain bacterial counts in mice [24]. However, whether HylB acts directly on BBB opening remains unclear. In this study, we sought to establish a model to evaluate BBB opening by screening a bacterial strain as an indicator, and used this model to quantitatively evaluate the direct role of HylB in S. agalactiae penetration across the BBB.
Results
The screening of an indicator strain. To make counting easy, we aimed to find a bacterium with a colony morphology distinct from that of S. agalactiae. The results showed that among the M1 to M5 E. coli mutants, only the M5 strain generated characteristic blue colonies on M63 media (Fig. 1), indicating that it was β-galactosidase-positive.
Determination of E. coli M5 virulence in mice.
To determine whether E. coli M5 was virulent to mice, we performed bacterial infection in BALB/c mice. Interestingly, none of the mice infected with 2 × 10⁹ CFU of E. coli M5 showed any signs of illness, and there was zero mortality throughout the experimental period of 7 d. The food intake and general condition of the experimental mice were identical to those of the control group. This result indicated that E. coli M5 was avirulent in mice.
Kinetics of E. coli presence in blood of mice.
To investigate the rate of E. coli M5 clearance in mice, the mice were injected intravenously with M5, and blood samples were collected from each mouse at 3 min, 5 min, 10 min, 30 min, 60 min and 120 min post-infection. The CFU enumeration results showed that the number of M5 cells in the blood increased from 6 × 10⁵ CFU/mL at 3 min to a peak value of approximately 2.5 × 10⁶ CFU/mL at 5 min and then decreased dramatically to 3 × 10⁵ CFU/mL at 10 min. After one hour, bacteria could hardly be detected, and they had clearly been removed from circulation within 2 hours after intravenous injection (Fig. 2). No bacteria were detected in the brain at any time point.
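For readers who want a quantitative summary of these kinetics, a first-order (log-linear) clearance model can be fitted to the post-peak counts. The Python sketch below is illustrative only: it uses just the two post-peak values quantified in the text (5 and 10 min), whereas a real fit would also read the 30 and 60 min points off Fig. 2, so the estimated half-life is purely indicative.

```python
import numpy as np

# Post-peak blood counts quantified in the text (CFU/mL); the 30 and 60 min
# values are not given numerically and would have to be read off Fig. 2.
t = np.array([5.0, 10.0])       # minutes post-injection
cfu = np.array([2.5e6, 3.0e5])

# First-order clearance: log10(CFU) = a*t + b
a, b = np.polyfit(t, np.log10(cfu), 1)
half_life = np.log10(2) / -a    # minutes for counts to halve

print(f"slope: {a:.3f} log10 CFU/min, apparent half-life: {half_life:.1f} min")
```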
Determination of a challenge concentration of S. agalactiae. We first sought to determine whether the S. agalactiae GD201008-001 strain causes significant BBB permeability after intraperitoneal infection. It is important to determine an appropriate initial inoculation concentration of S. agalactiae to create an in vivo model. We chose doses of 5-fold (50 CFU) and 10-fold (100 CFU) the median lethal dose (LD50 < 10 CFU) [25] of S. agalactiae to inoculate the mice. The results showed that, with the duration of infection, the number of GD201008-001 cells increased in both the blood (Fig. 3A) and the brain (Fig. 3B), suggesting that GD201008-001 was capable of replicating within the bloodstream and spreading to the brain. GD201008-001 began to appear in the blood at 3 h post-infection, 3 h earlier than in the brain. The bacteria could be detected in the brain at 6 h post-infection when we injected 100 CFU of S. agalactiae.
[Figure 3. Detection of S. agalactiae in blood and brain tissues: (A) bacterial loads in the blood; (B) bacterial loads in the brain. Groups of four BALB/c mice were inoculated with 50 CFU or 100 CFU of S. agalactiae and killed at different time points post-infection; CFU are given as mean and S.D.; *P < 0.05, **P < 0.01 or ***P < 0.001 indicates significantly different bacterial loads between the two infection groups.]
However, we could not detect any
bacteria until 12 h post-infection in the 50 CFU group. This result suggests that intraperitoneal infection with 100 CFU of S. agalactiae is more appropriate to induce BBB opening in this mouse model. Evaluation of BBB opening. Groups of five mice were challenged with 100 CFU of GD201008-001. At 3 h, 6 h, 9 h and 12 h after challenge, the indicator strain E. coli M5 was inoculated intravenously into each group. In determining the sampling time point, we noted that M5 tracer counts in the blood peaked at 5 min (Fig. 2); therefore, 5 min after injection was selected to quantify the degree of BBB permeability. Our data showed that GD201008-001 began to appear in the brain 6 h post-infection, and the bacterial number increased with the duration of infection (Fig. 4). M5 could be detected 3 h after S. agalactiae inoculation, and the number of M5 showed an increasing trend similar to that of GD201008-001 (Fig. 5). This result suggests that E. coli M5 may be used as an appropriate indicator for describing the degree of BBB opening.
Detection of BBB opening caused by S. agalactiae hyaluronidase. A previous study showed that hyaluronidase contributed to S. agalactiae penetration into the CNS [24]. To verify the utility of this mouse model, the mice were infected with the wild-type S. agalactiae, ΔhylB and CΔhylB strains, and then E. coli M5 was administered to the mice. After intraperitoneal infection with 100 CFU of S. agalactiae and its derivatives, the numbers of the three bacterial strains in the blood increased rapidly from <10³ CFU/mL at 3 h to >10⁷ CFU/mL at 15 h (Fig. 6A). In the brains, the wild-type and the complemented strain CΔhylB were present 6 h post-infection, earlier than the mutant ΔhylB (Fig. 6B). Compared with the wild-type and complemented strains, the numbers of the hylB mutant were lower at each time point in the brains. As time passed, the number of bacteria in the brains increased to >10⁵ CFU/g at 15 h post-infection. E. coli M5 in the wild-type and CΔhylB groups could be detected in the brain tissues 6 h post-infection, and the increasing trend of the number of M5 cells was similar to that of S. agalactiae (Fig. 6C). The sudden increase in the number of M5 at 15 h indicated that the BBB of the mice had opened to a great degree. However, in the ΔhylB group, the ΔhylB strain and M5 were not detected until 9 h post-infection, and the amount of M5 was significantly lower than that of the wild-type group at 9 h post-infection (P < 0.001). At 12 h and 15 h, the difference in the number of M5 cells between the wild-type and ΔhylB groups remained highly significant (P < 0.001). Although the number of M5 cells in the CΔhylB group was lower than that in the wild-type group, the difference was smaller than that with the ΔhylB group.
To further investigate the role of the hylB gene in BBB opening, the protein HylB, which is encoded by the hylB gene, was expressed and injected intravenously into the mice. An injection of 0.5 mg/mL HylB did not cause BBB opening at 3 h, as evidenced by the absence of E. coli M5 in brain, whereas M5 could enter and begin to accumulate in the brains of the 1.0 mg/mL and 2.0 mg/mL groups (Fig. 7). There was a dose-dependent increase in the degree of BBB opening at each time point.
Discussion
Meningitis is the most common clinical syndrome of S. agalactiae infection. Bacterial penetration across the BBB and into the CNS is the first step in the development of meningitis [26]. Therefore, adequate BBB models need to be developed in order to characterize the properties of bacterial penetration into the CNS. The in vitro BBB model based on the culture of brain microvascular endothelial cells has been widely used to probe the potential role(s) of individual virulence determinants in the initial pathogenesis of CNS infection by S. agalactiae. For example, hBMEC has been used to determine the invasive roles of fibronectin binding protein A (SfbA) [27], laminin-binding protein (Lmb) [28] and the surface protein HvgA in GBS infection [29]. However, the in vitro model might not completely mimic the disease in animals or humans. It was reported that CovR-deficient GBS showed a decreased ability to invade the brain endothelium in vitro, but in vivo, this deletion mutant was more proficient in the induction of permeability and proinflammatory signaling pathways in the brain endothelium and in penetration of the BBB [30].
[Figure 6. Evaluation of BBB opening in mice infected intraperitoneally with the S. agalactiae wild-type (WT), mutant (ΔhylB), and complemented (CΔhylB) strains. Groups of 40 BALB/c mice were inoculated with 100 CFU of S. agalactiae and its derivatives. At 3 h, 6 h, 9 h, 12 h, and 15 h post-infection, blood (A) and brain (B) tissues were collected from four mice of each group for S. agalactiae quantification. Meanwhile, at each time point, another four mice from each group were injected intravenously with E. coli M5 (2 × 10⁸ CFU); five minutes later, the mice were killed and the brains were removed for quantification of M5 (C). Bacterial loads in the blood are expressed as CFU/mL, and those in the brains as CFU/g of tissue; *P < 0.05, **P < 0.01 or ***P < 0.001 indicates significantly different bacterial loads between the two infection groups.]
In contrast, a previous study on the major pilin subunit PilB reported that the pilB mutant was less
virulent than its wild-type strain in the newborn mouse model, whereas, in vitro, the mutant had an ability similar to that of the wild-type GBS to resist macrophage killing [31]. Therefore, the development of an in vivo model will be extremely helpful in the study of bacterial meningitis.
The mouse has been widely used for investigating S. agalactiae virulence [24,27,32]. In recent years, some researchers have developed mouse models using intravenous injection of exogenous tracers and subsequent detection of extravasated molecules in the brain tissues to measure BBB permeability [33,34]. The tracers provide convenient morphological evidence for BBB opening; however, Saunders et al. [35] reported that exogenous tracers are not a satisfactory tool for studying blood-brain barrier dysfunction. For example, Evans blue, one of the most commonly used markers, has toxic properties, and thus the dye detected in the brain might be due to toxic effects on cerebral endothelial or ependymal cells. Also, Evans blue detected in the brain is likely to be a mixture of dye bound to plasma proteins, dye bound to brain tissue and free dye. Therefore, it is unreliable to estimate the extent of BBB impairment using Evans blue. In this study, we established a model to assess the degree of BBB opening using E. coli M5, which has strong β-galactosidase activity, as an indicator. This bacterial strain produces blue colonies when cultured on medium containing X-Gal and cannot permeate an intact BBB. Any entry of this bacterium into, and spread within, the brain is indicative of a leaky BBB. Therefore, the extent of BBB integrity can be quantitatively assessed through the monitoring of E. coli M5 and the measurement of the bacterial number in brain tissue. A similar method has been reported for S. pneumoniae by Tsao et al. [26]. Nevertheless, the parameters and criteria may vary among bacterial species. In our study, this model was optimized for use with S. agalactiae and demonstrated to be a powerful method for analyzing BBB opening.
Under physiological conditions, the BBB strictly regulates the entry of blood-borne substances into the brain. Brain inflammation can affect the permeability of the BBB directly via cytokine-mediated activation of metalloproteinases or tight junction disruption, or indirectly by promoting transmigration of leukocytes [36]. In S. agalactiae, hyaluronidase has been demonstrated to contribute to bacterial invasion and the pathogenesis of meningitis in mice [24]. However, unlike PilA and β-hemolysin/cytolysin, which stimulate the release of pro-inflammatory cytokines [18,19], hyaluronidase acts instead as an anti-inflammatory factor. Our previous study demonstrated that, compared to the wild-type S. agalactiae, the hyaluronidase-deficient mutant stimulated significantly higher levels of pro-inflammatory cytokines, including IL-1β, IL-6 and TNF-α, in macrophages, whereas its mortality in zebrafish was lower [24]. A probable reason for this phenomenon was later illustrated by Kolar et al. [37]. They revealed that GBS evades host immunity by degrading hyaluronan (HA), a component of the extracellular matrix in nearly all tissues. HA is commonly cleaved into small fragments by tissue hyaluronidase in response to tissue injury. These small HA fragments are inflammatory factors that ligate Toll-like receptors (TLRs) to elicit an inflammatory response and repair the damaged tissue. However, bacterial hyaluronidase degrades pro-inflammatory HA fragments to the major end-product disaccharides. HA disaccharides bind to TLR2/4 and block signaling elicited by host HA fragments and other TLR2/4 ligands, thus preventing GBS ligands from activating pro-inflammatory signaling cascades. We therefore assume that hyaluronidase contributes to GBS meningitis through anti-inflammatory activity and evasion of host immunity. However, our recent study showed that intravenous injection of a purified hyaluronidase, HylB, induced acute lung and brain injury [38]. This led us to speculate that HylB might play an important role in BBB permeability. To evaluate this speculation, we first investigated the role of HylB in disrupting BBB integrity using the model established in this study. Compared with the wild-type S. agalactiae, the inactivation of hylB resulted in decreased BBB opening throughout the infection. Although the presence of S. agalactiae in the brain indicates BBB opening, the use of E. coli M5 as an indicator excludes the possibility that the differential BBB integrity was caused by different proliferation abilities in vivo between the wild-type and hylB mutant strains.
To further determine whether HylB has a direct impact on BBB integrity, we intravenously injected the purified HylB protein into the mice. We found that the intravenous injection of HylB induced BBB opening in a dose-dependent manner. In the groups treated with 1 mg/mL and 2 mg/mL, the BBB was open 3 h post-infection, 3 h earlier than in the 0.5 mg/mL group, and the number of E. coli M5 increased with the time of infection. In considering the dose of HylB protein, we also tested a treatment of 3 mg/mL HylB, but all the mice died 15 h post-infection. This finding indicated that HylB is one of the important virulence factors of S. agalactiae, which is in agreement with previous studies [24,38].
[Figure 7. E. coli M5 (2 × 10⁸ CFU) was injected intravenously; five minutes later, the mice were killed, and the brains were removed for quantification of M5. Data are expressed as CFU per gram of brain; *P < 0.05, **P < 0.01 or ***P < 0.001 indicates significantly different bacterial loads between the two infection groups.]
A similar role for hyaluronidase in inducing pneumococcal meningitis has also been reported by Zwijnenburg et al. [39]. The present investigation of HylB further demonstrates that using E. coli M5 as an indicator is an easy and reliable method for assessing BBB integrity and/or leakiness. In particular, the model could be especially suitable for investigating the contribution of soluble bacterial virulence factors to BBB disruption.
In this study, we used a piscine strain of S. agalactiae with extremely high virulence in BALB/c mice. It is not clear why this piscine strain is so virulent and what genetic relationship exists between fish and human isolates. Our previous study performed a comparative genomic analysis of 15 S. agalactiae strains of different origins and found that the Chinese piscine isolates GD201008-001 and ZQ0910 are phylogenetically distinct from the Latin American piscine isolates SA20-06 and STIR-CD-17, but are closely related to the human strain A909 [40]. Additionally, a published study reported that a GBS isolate from a clinical case of human neonatal meningitis caused disease and death in Nile tilapia [8]. In this regard, it may be of interest to further investigate the pathogenic mechanisms of meningitis caused by S. agalactiae strains of different origins. The model established here could be a potentially useful tool for such investigations. Nevertheless, it will be imperative to demonstrate that the E. coli tracer works with other GBS and mouse strains that are widely used in meningitis models.
In summary, the present study developed a model that can quantify the degree of BBB opening caused by S. agalactiae, and used this model to demonstrate that hyaluronidase plays a direct role in BBB permeability.
Methods
Bacterial strains and growth conditions. S. agalactiae strain GD201008-001, β-hemolysin/cytolysin positive, belonging to serotype Ia, MLST type ST-7, was isolated from farmed tilapia with meningoencephalitis in Guangdong Province, China, in 2010 [40]. Its genome sequence has been deposited in the GenBank database under accession number CP003810. The S. agalactiae hylB-deleted mutant strain ΔhylB and the complemented strain CΔhylB were constructed in a previous study [24]. Determination of E. coli M5 virulence in mice. BALB/c mice (24-26 g, aged 5-6 weeks) were purchased from the Experimental Animal Center, Yangzhou University. The mice were divided into two groups with 10 mice in each group. The screened E. coli M5 was grown overnight in LB broth. A 50 µL bacterial suspension was transferred into 5 mL LB and incubated at 37 °C to allow the cells to reach mid-log phase growth. When the bacteria reached an OD600 of 0.6, they were harvested by centrifugation at 5000 × g for 5 min. The cell pellets were washed twice with sterile phosphate-buffered saline (PBS) (pH 7.4) and re-suspended in PBS to a concentration of 2 × 10¹⁰ CFU/mL. One group of mice was injected intravenously with 100 μL of bacterial suspension, whereas the other was injected with 100 μL PBS and served as a control. The mice were observed until one week post-infection.
Kinetics of E. coli presence in blood of mice. As an indicator, E. coli M5 should be eliminated rapidly from the circulatory system. Based on five predetermined time points, BALB/c mice (24-26 g, aged 5-6 weeks) were divided into five groups with eight mice in each group. Mid-log phase bacteria were washed twice with PBS, re-suspended in PBS and adjusted to a concentration of 2 × 10⁹ CFU/mL. For each time point, five mice were used as the experimental group and were injected intravenously with 100 μL of bacterial suspension, while another three were injected intravenously with 100 μL of PBS. Blood samples and brains were obtained aseptically at 3 min, 5 min, 10 min, 30 min and 60 min post-infection. Blood samples of 100 μL were spread onto M63 plates. To avoid surface contamination, the organs were washed twice with PBS. Tissues were placed in 1 mL of PBS and homogenized with a biological sample homogenizer (BioPrep-24, Ningbo Hinotek Instrument Co Ltd, China). Then, 100 μL of homogenate, either undiluted or diluted 10⁻¹, 10⁻² or 10⁻³ in PBS, was plated on M63 plates. The M63 plates were incubated overnight at 37 °C. Colonies were counted and reported as CFU/g for brain samples or CFU/mL for blood samples.
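The CFU arithmetic in this protocol (colony count, dilution factor, plated volume, homogenate volume) can be made explicit with a short sketch. In the Python example below, the colony counts and tissue mass are hypothetical; only the 100 μL plated volume and the 1 mL homogenization volume come from the protocol above.

```python
# A minimal sketch of the CFU arithmetic described above; colony counts and
# the brain mass are hypothetical example values.
PLATED_VOLUME_ML = 0.1   # 100 uL spread per plate (from the protocol)
HOMOGENATE_ML = 1.0      # brain homogenized in 1 mL PBS (from the protocol)

def cfu_per_ml(colonies: int, dilution: float) -> float:
    """CFU per mL of the original sample from one countable plate."""
    return colonies / (dilution * PLATED_VOLUME_ML)

def cfu_per_gram(colonies: int, dilution: float, tissue_g: float) -> float:
    """CFU per gram of tissue, scaling by the homogenate volume."""
    return cfu_per_ml(colonies, dilution) * HOMOGENATE_ML / tissue_g

# Example: 42 colonies on the 10^-2 plate from a 0.4 g brain
print(f"{cfu_per_ml(42, 1e-2):.2e} CFU/mL homogenate")
print(f"{cfu_per_gram(42, 1e-2, 0.4):.2e} CFU/g brain")
```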
Determination of the challenge concentration of S. agalactiae.
Our previous study has shown that the bacterial strain GD201008-001 is highly virulent to BALB/c mice by intraperitoneal administration, with LD50 values of less than 10 CFU [25]. Here, we chose two different doses of S. agalactiae, 50 and 100 CFU (5- and 10-fold greater than the LD50), to find an applicable dose for this mouse model. BALB/c mice were divided into two groups with 16 mice in each group. One group received an intraperitoneal injection of 100 μL of a 500 CFU/mL bacterial suspension, and the other received an injection of 1000 CFU/mL. In each group, four mice were sacrificed every three hours to aseptically collect the blood and brain. Homogenized brain tissues and blood were plated onto THB plates for bacterial cell counting to determine tissue colonization. The experiments were repeated at least three times to ensure reproducibility. The data are expressed as CFU/g or CFU/mL per mouse.
The mice were infected with the predetermined dose of 100 CFU of the strain GD201008-001. Control mice were injected with sterile PBS. Then, 2 × 10⁸ CFU of the indicator E. coli M5 in 100 μL PBS was given intravenously at the specified time points (3 h, 6 h, 9 h and 12 h), and 5 min later, the mice were sacrificed. To detect the degree of BBB opening, the brains were removed aseptically and homogenized in PBS. The homogenate was serially diluted and spread onto THB or M63 agar plates, then incubated overnight at 37 °C. The organ CFU counts of S. agalactiae and E. coli M5 were determined and expressed as the mean and S.D. per mouse.
Detection of BBB opening caused by S. agalactiae hyaluronidase. To investigate the effect of S. agalactiae hyaluronidase on BBB opening, we used the deficient mutant strain ΔhylB and the complemented strain CΔhylB constructed in our previous study [24]. One hundred and twenty mice were divided into three groups with 40 mice in each group. Mid-log phase S. agalactiae and its derivatives were washed twice in PBS and re-suspended in PBS to 1 × 10³ CFU/mL. The concentration of E. coli M5 was adjusted to 2 × 10⁹ CFU/mL. The three groups of mice were infected with 100 µL of the wild-type S. agalactiae, ΔhylB or CΔhylB by intraperitoneal injection. At 3 h, 6 h, 9 h, 12 h, and 15 h post-infection, groups of four mice were killed, and the blood and brain tissues were collected for S. agalactiae quantification. Meanwhile, another four mice from each group were inoculated with 100 µL of E. coli M5 by the intravenous route at each time point, as mentioned above. At 5 min post-inoculation with E. coli, the brains were aseptically removed and homogenized in PBS. The homogenates were serially diluted, spread onto THB plates for S. agalactiae counting or M63 plates for E. coli M5 counting, and incubated overnight at 37 °C. The bacteria were counted and reported as CFU/g per mouse.
To further determine the role of the hylB gene in the pathogenesis of BBB opening caused by S. agalactiae, we expressed HylB with good enzymatic activity, as described in our previous study [38]. Sixty mice were divided into three groups with 20 mice in each group. The three groups of mice were intravenously injected with 200 μL of HylB protein at a final concentration of 0.5 mg/mL, 1.0 mg/mL or 2.0 mg/mL, respectively. Another 20 control mice were injected with 200 μL of sterile PBS. Then, 100 μL of E. coli M5 (2 × 10⁹ CFU/mL) was inoculated intravenously at 3 h, 6 h, 9 h, 12 h and 15 h post-infection, and the brain tissues were sampled at 5 min post-infection with E. coli. The homogenates were serially diluted, spread onto M63 plates and incubated overnight at 37 °C for E. coli M5 counting. The bacteria were counted and expressed as CFU/g per mouse.
Statistical analysis. Data were collected and analyzed using MS Excel 2010 and SPSS Statistics version 20.0 software. Multiple comparisons were performed by analysis of variance (ANOVA) followed by Tukey's multiple-comparison test, with P < 0.05 indicating a statistically significant difference and P < 0.01 indicating a highly significant difference. The error bars presented in the figures represent the standard deviations of the means of multiple replicate experiments.
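The analysis was run in SPSS, but the same ANOVA-plus-Tukey procedure can be sketched in Python for illustration. The group means, SDs and group sizes below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical log10 CFU/g brain counts for three infection groups
wt     = rng.normal(5.2, 0.4, 4)   # wild-type
d_hylb = rng.normal(4.1, 0.4, 4)   # ΔhylB mutant
c_hylb = rng.normal(5.0, 0.4, 4)   # complemented strain

# One-way ANOVA across the three groups
f_stat, p = stats.f_oneway(wt, d_hylb, c_hylb)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p:.4f}")

# Tukey's multiple-comparison test on the same data
values = np.concatenate([wt, d_hylb, c_hylb])
groups = ["WT"] * 4 + ["dHylB"] * 4 + ["cHylB"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```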
Experimental procedures
Ethics statement. All the animal experiments were carried out according to animal welfare standards and were approved by the Ethical Committee for Animal Experiments of Nanjing Agricultural University, China. All animal experiments complied with the guidelines of the Animal Welfare Council of China.
Controversies in the diagnosis and management of testosterone deficiency syndrome
See also page 1369 and www.cmaj.ca/lookup/doi/10.1503/cmaj.150033
Testosterone deficiency syndrome is an area fraught with disagreement and controversy. A new Canadian guideline [1] from the Canadian Men's Health Foundation is welcomed in the light of the huge volume of research on this topic over the last five years. Given the ongoing controversy and changes in our understanding of testosterone deficiency, it is not surprising that there are multiple guidelines available from other developers, including recent updates from the European Association of Urology [2] and the International Society for Sexual Medicine [3]. The guideline from the Canadian Men's Health Foundation [1] provides recommendations specifically for Canadian physicians and is largely consistent with the 2008 International Society for the Study of the Aging Male [4] and 2010 Endocrine Society guidelines [5]. The current guideline will be a useful resource for Canadian physicians, but although research studies are bringing clarity to some aspects of caring for patients with testosterone deficiency syndrome, there are still many for which there is no consensus.
The diagnosis of testosterone deficiency syndrome is not straightforward. With the known limitations of testosterone measurement and the lack of a valid symptom score, it is not surprising that primary care physicians lack confidence in diagnosing the syndrome, especially when experts cannot agree on values. True to the Endocrine Society guideline [5], the new Canadian guideline refers to unequivocal testosterone deficiency syndrome and equivocal testosterone deficiency syndrome without clarifying laboratory values [1]. Instead, the authors put weight on a combination of factors (clinical history, physical examination and response to therapy) in making the diagnosis, in addition to measuring testosterone. The European Association of Urology and the International Society for Sexual Medicine set parameters whereby men with a total testosterone level of less than 8 nmol/L will usually benefit from treatment, and a trial of therapy may be indicated for those with levels between 8 and 12 nmol/L in the presence of substantial symptoms [2,3]. Although experts could argue about these levels, they do, at least, provide a basis for primary care management and remove the mystique that only an eminent endocrinologist can really diagnose "true" testosterone deficiency. Canadian physicians may find these specific levels helpful as a guide in diagnosis.
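To make the arithmetic of these cut-offs concrete, the sketch below encodes the 8 and 12 nmol/L thresholds quoted above as a simple triage function. This is purely illustrative and not a clinical tool: the function name and return labels are my own wording rather than guideline language, and the guidelines themselves require clinical history, examination and confirmatory morning samples alongside any single level.

```python
def classify_total_testosterone(nmol_per_l: float, symptomatic: bool) -> str:
    """Illustrative triage based on the EAU/ISSM thresholds cited above.

    The cut-offs (8 and 12 nmol/L) come from the text; everything else
    (names, labels) is this sketch's own, and clinical assessment and
    repeat morning sampling still apply in practice.
    """
    if nmol_per_l < 8:
        return "likely to benefit from treatment"
    if nmol_per_l <= 12 and symptomatic:
        return "trial of therapy may be indicated"
    return "treatment not indicated on the level alone"

print(classify_total_testosterone(7.2, symptomatic=True))
print(classify_total_testosterone(10.5, symptomatic=True))
```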
Several guidelines recommend a trial of treatment as a component of the diagnostic process, particularly in patients with borderline testosterone levels. What is at issue is the length of the trial. The Canadian guideline advises a three-month trial of treatment [1], as also recommended in the Endocrine Society guideline [5]. In line with the International Society for Sexual Medicine guideline [3], I suggest that clinicians consider a minimum period of six months when assessing response. In a randomized controlled trial of testosterone undecanoate in men with type 2 diabetes, my colleagues and I showed that improvement continued until six months, and even extended to 12 months in some patients [6]. Contributing factors to long response times in testosterone therapy include compliance issues with topical treatment. In addition, for patients receiving testosterone undecanoate, a three-month trial period would include only two long-acting injections, with peak levels not necessarily being reached. Because men are likely to get a trial of testosterone therapy only once, it is vital that we do it properly and expose them to sustained levels of testosterone for an adequate period to achieve maximal benefit.
Geoffrey I. Hackett MD. Competing interests: None declared. This article was solicited and has not been peer reviewed.
There is no consensus on the choice of initial biochemical testing. The Canadian guideline recommends that the total testosterone level be measured in the first instance, with determination of bioavailable testosterone using sex hormone-binding globulin only in patients with both symptoms and equivocally low total testosterone levels [1]. Sex hormone-binding globulin levels may be elevated in conditions such as liver disease and hyperthyroidism, with the use of certain medications such as anticonvulsants, and even in extreme age. In men with these conditions, a total testosterone level of 14 nmol/L, which is within the normal range according to the European Association of Urology and International Society for Sexual Medicine guidelines [2,3], could be associated with an appropriate diagnosis of testosterone deficiency syndrome based on free or bioavailable testosterone. Identifying these men may require measurement of sex hormone-binding globulin, a test that I believe is neither too complicated nor too expensive to perform just once.
It is good to see that the Canadian guideline group acknowledges erectile dysfunction as one of the most common symptoms associated with testosterone deficiency and a justification for testosterone therapy [1]. Virtually all guidelines on erectile dysfunction arrive at the sensible conclusion that the total testosterone level should be measured in all patients who experience erectile dysfunction. This includes men with type 2 diabetes. However, not all guidelines that focus on testosterone deficiency come to the same conclusion. The Endocrine Society guideline, for example, recommends against screening for testosterone deficiency in men with type 2 diabetes and erectile dysfunction [5]. More than three-quarters of men with type 2 diabetes have erectile dysfunction [6], and about 90% have positive symptom scores for testosterone deficiency syndrome [6]. Surely these facts now justify a recommendation to screen all men with type 2 diabetes for hypogonadism. The same justification for screening probably applies to men with HIV and those with chronic opioid use.
The Canadian guideline includes a caveat that, for men with erectile dysfunction and no other manifestations of testosterone deficiency syndrome, investigation should be considered only after a trial of phosphodiesterase type 5 (PDE-5) inhibitors has failed [1]. It is difficult to justify this delay, because testosterone deficiency is considered a curable cause of erectile dysfunction [2]. Specialists in sexual medicine recognize that sexual desire, nocturnal erections, orgasm, ejaculation, intercourse and relationship satisfaction are equally important and more likely to respond to testosterone therapy than to PDE-5 inhibitors. In younger men with no comorbidities, testosterone therapy will almost certainly deal with all of their symptoms. Prescribing on-demand PDE-5 inhibitors to this group of patients is unlikely to be seen as an effective solution by the couple. Also, because patients may value effective relief of symptoms over a short-term reduction in fertility with testosterone therapy, discussion about goals of therapy is essential in making treatment decisions, a point strongly made in the Canadian guideline [1]. Differing views on treatment approaches among guideline and expert groups are particularly common in the setting of comorbidities, such as obesity, diabetes and cardiac problems. In men with obesity and testosterone deficiency, the Canadian guideline rightly points out that weight loss may increase testosterone levels over two to four years [1]. In a meta-analysis, Corona and colleagues [7] suggested that weight reduction should be the first step for all men with obesity and testosterone deficiency. However, symptoms do not improve with weight loss alone [8], especially in men with comorbidities [9]. Patients, quite rightly, demand relief of symptoms, not just improved blood levels, and hence testosterone therapy should not be delayed while awaiting possible long-term benefits from weight loss. For men with diabetes, the study by my colleagues and I showed a positive effect of testosterone treatment on glycated hemoglobin (HbA1c) concentrations [6]. In addition, long-term registry studies showed sustained improvements in HbA1c with testosterone therapy in patients with poorly controlled diabetes [10]. Treatment of testosterone deficiency produces modest improvements in several modalities that, combined, may constitute considerable benefit for the patient with diabetes [6]. The literature on the treatment of testosterone deficiency is changing rapidly. When therapeutic levels of testosterone are achieved over a sustained period, all-cause mortality may be reduced [11]. A recent meta-analysis suggested that injected testosterone, especially long-acting formulations, has higher efficacy rates and safety benefits than topical treatment, owing to the substantially lower levels of dihydrotestosterone produced by 5-α-reductase conversion in the skin [12]. Opinion varies on monitoring patients taking testosterone therapy. My colleagues and I showed that prostate-specific antigen (PSA) levels increased only in men treated for severe testosterone deficiency (<8 nmol/L) [6]. The European Association of Urology guideline recommends that the PSA level measured six months after starting testosterone therapy be considered as the baseline against which future levels are assessed; failure to recognize this may result in excessively high biopsy rates [2].
In addition to monitoring PSA and hematocrit at three and six months after the start of treatment and annually thereafter, the Canadian guideline suggests digital rectal examination at baseline, at six months and annually [1]. Although this is graded as a weak recommendation, there is no evidence of the value of digital rectal examination in this setting, especially because it usually requires specialist consultation with associated costs. Because the evidence suggests no increased risk of prostate cancer associated with testosterone therapy, the chance of a cancer being detected by digital rectal examination in a man with normal findings on examination a few months earlier and a normal PSA level must be remote.
Many will welcome the clarity provided by the new Canadian guideline and other recently revised guidelines, but many important clinical issues remain unresolved. Unfortunately, it is unlikely that the ideal clinical trials required to provide the highest levels of evidence will ever be done for ethical, practical and financial reasons.
Glycemic Control after Initiating Direct-Acting Antiviral Agents in Patients with Hepatitis C Virus and Type 2 Diabetes Mellitus Using the United States Integrated Healthcare System
Objective: Hepatitis C virus (HCV) is associated with an increased risk of Type 2 diabetes mellitus (T2DM). Prior studies found that the eradication of HCV with direct-acting antiviral (DAA) agents led to improved glycemic control in patients with T2DM. We aimed to identify the association between HCV eradication and glycemic control in patients diagnosed with HCV and T2DM. Methods: A retrospective observational study was conducted to identify adult patients diagnosed with HCV from January 1, 2014, to August 31, 2017. Patients were included if they were initiated on one of the following DAA agents within the study period: sofosbuvir/velpatasvir, ledipasvir/sofosbuvir, or elbasvir/grazoprevir. Patients were also required to have a diagnosis of T2DM. The primary outcome of this study was the average change in glycosylated hemoglobin (HbA1c) pre- versus post-DAA agents. Findings: Our final cohort consisted of 996 patients diagnosed with HCV and T2DM: patients who achieved sustained virologic response (SVR) (n = 937, 94%) and those who did not achieve SVR (n = 59, 6%). In the SVR group, there was a 0.3950% reduction in HbA1c (P < 0.0001), and in the group that did not achieve SVR, there was a 0.3532% reduction in HbA1c (P = 0.0051). In the overall study population, the SVR group had a 0.04% greater reduction in HbA1c, but this was not statistically significant (P = 0.7441). Conclusion: Both groups had statistically significant reductions in HbA1c when comparing the mean change in average HbA1c pre- versus post-DAA agent. Patients who achieved SVR had a greater absolute reduction in HbA1c by 0.04%; however, this was not statistically significant.
Introduction
Chronic hepatitis C virus (HCV) infection is associated with a higher prevalence of Type 2 diabetes mellitus (T2DM). [1,2] The presence of chronic HCV infection increases the risk of developing T2DM in patients with metabolic syndrome by 11-fold, and it is estimated that up to 33% of chronic hepatitis C patients have T2DM. [1,3] T2DM is one of the most common extrahepatic manifestations of chronic HCV infection and is associated with increased risk for liver, renal, and cardiovascular complications. [4] Specific complications of HCV include cirrhosis, liver failure, and hepatocellular carcinoma (HCC). The mechanism for the observed association between HCV and T2DM is unclear, but some evidence suggests that it may be related to increased insulin resistance or expression of pro-inflammatory cytokines. Prior studies, such as Calzadilla-Bertot et al., showed that patients with compensated HCV cirrhosis had higher rates of decompensation when they had diabetes and insulin resistance. The study also found that insulin resistance was a predictor of overall mortality. [8] Other studies have speculated that HCV proteins increase inflammatory cytokines such as interleukin-2 and tumor necrosis factor-α, which results in an upregulation of gluconeogenesis, enhances lipid accumulation in the liver, and causes insulin resistance. [1,9] Early identification and treatment of patients with HCV and T2DM may reduce and/or prevent future diabetic complications through improved glycemic control. [4]
Methods
This retrospective, observational study was conducted to identify patients from the Kaiser Permanente Southern California (KPSC) health plan aged >18 years and with a diagnosis of HCV genotypes 1 through 6 from January 1, 2014, to August 31, 2017. The study used a pre-post design, with the 12 months before the index date labeled as the preindex period and the 12 months after the index date labeled as the postindex period. KPSC is an integrated healthcare delivery system with approximately 4.5 million members located in Southern California. Data were derived from the KPSC regional database covering 14 medical centers and contain information on patient demographics, diagnoses, prescriptions, laboratory results, and medical and hospital encounters. The KPSC database has an electronic health medical record system that allows more detailed information to be accessed and included in studies. The KPSC membership currently represents 15% of the underlying population in the Southern California region, and this membership closely mirrors the Southern California population; it is racially diverse and includes the entire socioeconomic spectrum. [12] The Institutional Review Board of KPSC approved this study.
Patients were required to be newly initiated on a DAA agent during this period, and the start date was labeled as the index date; the DAA list included ledipasvir/sofosbuvir, sofosbuvir/velpatasvir, and elbasvir/grazoprevir. The study end date was August 31, 2017, so that all patients had 12 months postindex in which to evaluate the primary outcome. Patients with any history of interferon- and ribavirin-containing regimens, first-generation DAA agents, or other antiviral medications were excluded from the study. Patients were required to have a diagnosis of T2DM or a prescription for an antidiabetic medication during the preindex period and up to the primary outcome, so that patients were on antidiabetic medication when HbA1c levels were evaluated. The baseline period was the preindex period, and covariates consisted of gender, race, HCV genotype, comorbidities (anemia, liver cirrhosis, alcohol use disorder, and chronic kidney disease), and laboratory measurements (international normalized ratio [INR], platelets, bilirubin, and albumin). HbA1c levels were identified during the pre- and post-index periods after the SVR laboratory measurement; multiple HbA1c levels were averaged within their respective periods. Any patient without an HbA1c measurement during the pre- or post-index period was excluded. SVR was defined as an undetectable hepatitis C viral load 12 weeks after the end of hepatitis C treatment; an undetectable viral load during that timeframe indicates cure of HCV. Finally, any patient who did not have an SVR measurement was excluded.
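As a rough illustration of this pre/post windowing, the sketch below builds 12-month preindex and postindex windows around each patient's index date and averages HbA1c within each window. The table and column names are invented for the example and do not reflect the KPSC database schema, and the real analysis additionally restricted postindex values to those drawn after the SVR measurement.

```python
import pandas as pd

# Hypothetical lab table; names are this sketch's own, not KPSC's.
labs = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "lab_date":   pd.to_datetime(["2015-01-10", "2016-03-01", "2016-09-15",
                                  "2015-06-01", "2016-08-20"]),
    "hba1c":      [7.9, 7.2, 7.0, 8.4, 8.1],
})
index_dates = pd.Series(pd.to_datetime(["2015-07-01", "2015-12-01"]),
                        index=[1, 2], name="index_date")

df = labs.join(index_dates, on="patient_id")
# 12-month windows on either side of the index date
pre  = df[(df.lab_date <  df.index_date) &
          (df.lab_date >= df.index_date - pd.DateOffset(months=12))]
post = df[(df.lab_date >  df.index_date) &
          (df.lab_date <= df.index_date + pd.DateOffset(months=12))]

summary = pd.DataFrame({
    "pre_mean":  pre.groupby("patient_id").hba1c.mean(),
    "post_mean": post.groupby("patient_id").hba1c.mean(),
})
summary["change"] = summary.pre_mean - summary.post_mean
print(summary)
```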
The primary outcome of the study was to compare HbA1c levels during the pre- versus post-index period among those who achieved SVR. A secondary aim was to determine whether there was a statistically significant difference in HbA1c levels for patients who achieved SVR versus those who did not achieve SVR. We also sought to identify factors associated with changes in pre- and post-HbA1c levels. Potential confounders and biases include the possibility that patients taking a new effective therapy for HCV might be more likely to engage in other healthy behaviors. Additional analysis, including multivariable regression analysis, was conducted to address potential sources of bias. Multivariable regression analysis was conducted to determine whether independent variables such as genotype, body mass index (BMI), and gender affected the changes in HbA1c. In all categories, there was no statistical difference by gender, BMI, or hepatitis C genotype (data not shown).
Unadjusted descriptive statistics were conducted to summarize patient characteristics of the two study cohorts (achieved SVR vs. did not achieve SVR). Differences between these patient groups were tested using two-sample t-test or Wilcoxon test for continuous variables and the Chi-squared statistic for categorical variables. The mean change in HbA1c from before versus after DAA treatment was calculated. In addition, categories of HbA1c (>7%, >8%, and >9%) were evaluated descriptively to calculate mean differences pre-and post-DAA treatment. Finally, mean change in HbA1c levels for patients with cirrhosis compared to patients with no cirrhosis were descriptively evaluated. A sample size calculation was conducted and it was found that a study population of 71 patients was required to detect a difference of 0.5% of HbA1c with 90% power. Multivariable linear regression was conducted to identify the relationship associated with HbA1c change while controlling for age, sex, race, baseline laboratories, and comorbidities. Additional multivariate linear regression was performed to determine whether independent variables such as genotype, body mass index, and gender impacted the changes in HbA1c. All data were analyzed using SAS version 9.4 (SAS Institute, Cary, NC, USA). Values of P < 0.05 were considered statistically significant.
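For context, the flavor of that sample-size estimate can be reproduced with statsmodels. The detectable difference (0.5% HbA1c) and power (90%) come from the text; the paired t-test design, the alpha of 0.05 and the assumed SD of the within-patient change (1.0%) are this sketch's assumptions, so the result will only approximate the reported n of 71.

```python
from statsmodels.stats.power import TTestPower

# Paired (pre-post) design: effect size = detectable difference / SD of change.
effect_size = 0.5 / 1.0   # 0.5% HbA1c difference, assumed SD of change 1.0%
n = TTestPower().solve_power(effect_size=effect_size, power=0.90,
                             alpha=0.05, alternative="two-sided")
print(f"required number of patients: {n:.1f}")
```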
Results
A total of 29,096 adult patients were diagnosed with HCV in the KPSC region between January 1, 2014, and August 31, 2017. Among these 29,096 HCV patients, 25% (n = 7176) were treated with the selected DAAs [Figure 1]. Among those patients, 22% (n = 1559) were diagnosed with T2DM or were on antidiabetic medication for the 12 months before the index date (DAA agent start date). The final study cohort consisted of 996 patients after applying the inclusion and exclusion criteria, and two study cohorts were created [Figure 1]: patients who achieved SVR (n = 937, 94%) and those who did not achieve SVR (n = 59, 6%). Table 1 provides baseline characteristics for our study cohort. Within the total study cohort (n = 996), the majority of the patients were male (67.3%) and the average age was 61 years (standard deviation [SD] 8.1). Race was relatively evenly represented: Caucasian (n = 307, 30.8%), African American (n = 293, 29.4%), and Hispanic (n = 310, 31.1%). The majority of the patients had hepatitis C genotype 1 (n = 868, 87.1%) and were treated with ledipasvir/sofosbuvir (n = 892, 89.6%). The baseline antidiabetic medications for patients diagnosed with T2DM before initiating DAA medications are shown in Table 1. The majority of patients on antidiabetic medications were on metformin (43.4%), sulfonylureas (19.7%), and insulins (14.8%). Statistically significant findings included gender, with a higher proportion of males in the group that did not achieve SVR (~80%) than in the group that achieved SVR (66.5%) (P = 0.0365); mean platelet counts, which were higher in patients who achieved SVR (186.7) than in patients who did not achieve SVR (155.1) (P = 0.0011); and bilirubin levels, which also differed significantly between the groups (P = 0.0030). As shown in Table 2, the mean HbA1c pre- versus post-DAA treatment for patients who achieved SVR (n = 937) was 7.5% and 7.1%, with a mean change of 0.3950 (P < 0.0001). The mean HbA1c pre- versus post-DAA agent for patients who did not achieve SVR (n = 59) was 7.4% and 7.1%; the mean change in HbA1c was 0.3532 (P = 0.0051). The difference in HbA1c change between patients who achieved SVR and those who did not was 0.0418 (P = 0.7441), which was not statistically significant.
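One way to read the null between-group result is through statistical power. The sketch below asks how much power a two-sample comparison with these group sizes (937 vs. 59) would have to detect a 0.04% difference in HbA1c change; the assumed SD of the change (1.0%) is this sketch's own assumption, not a reported value.

```python
from statsmodels.stats.power import TTestIndPower

# Two-sample power for the observed between-group difference in HbA1c change.
sd_change = 1.0          # assumed SD of the HbA1c change, in % HbA1c
d = 0.04 / sd_change     # standardized effect size for a 0.04% difference
power = TTestIndPower().power(effect_size=d, nobs1=937, ratio=59 / 937,
                              alpha=0.05, alternative="two-sided")
print(f"power to detect a 0.04% difference: {power:.3f}")
```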
Further analysis was conducted with adjustment for baseline HbA1c. There were no significant differences between patients who achieved SVR and patients who did not achieve SVR at baseline HbA1c levels >7%, >8% and >9% [Table 3]. In addition, other potential confounders, such as baseline cirrhosis, were also analyzed. Patients with and without cirrhosis likewise showed no significant difference in mean HbA1c change between the two study groups [Table 4]. Results from the multivariable regression analysis showed that the substantial predictors associated with changes in HbA1c from pre- to post-DAA treatment were age and INR. For each additional year of age, there was a 0.01 greater decrease in HbA1c from pre- to post-treatment (P = 0.0058). Patients with higher INR levels had a 0.33 smaller pre-post decrease in HbA1c (P = 0.0453). Patients who did not achieve SVR had a 0.04 smaller pre-post decrease in HbA1c. Table 5 displays the factors associated with the changes in HbA1c levels from pre- to post-treatment.
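To illustrate the form of this multivariable model, the sketch below regresses a simulated pre-post HbA1c change on age, INR and SVR status with statsmodels. The data are synthetic, seeded so the coefficients land near the reported magnitudes; the variable names are invented, and nothing here reproduces the actual KPSC analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
# Synthetic analysis dataset mirroring the model described above
df = pd.DataFrame({
    "age": rng.normal(61, 8, n),
    "inr": rng.normal(1.1, 0.2, n),
    "svr": rng.integers(0, 2, n),
})
# Outcome: pre-minus-post HbA1c change, with effects seeded to echo the
# reported magnitudes (0.01 per year of age, -0.33 per INR unit, 0.04 for SVR)
df["hba1c_change"] = (0.01 * df.age - 0.33 * df.inr + 0.04 * df.svr
                      + rng.normal(0, 0.5, n))

model = smf.ols("hba1c_change ~ age + inr + svr", data=df).fit()
print(model.params)   # coefficients analogous to those reported above
```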
Discussion
In the US, an estimated 29.1 million people are diagnosed with T2DM, and 1.4% of the population have chronic HCV. [3] Patients with chronic HCV infection are four times more likely to develop T2DM than patients without HCV. [3] Researchers have suggested that there is a two-way association between HCV and T2DM. HCV is associated with accelerated steatosis that is mediated through increased production of lipogenic substrate, upregulation of lipogenesis, and disruption of fatty acid metabolism. [1] Other proposed mechanisms suggest that proinflammatory cytokines secreted in response to HCV may also affect beta-cell function by disrupting insulin signaling. [13] In the interferon era of HCV treatment, the eradication of HCV was shown to reduce the risk of HCC, improve liver fibrosis, and decrease the risk of other complications of chronic liver disease. [4] Several studies have inferred that successful clearance of HCV can lead to improvements in insulin resistance and a reduction in HbA1c. [3,4,8,11] The benefits of glycemic control in patients with comorbid HCV and T2DM have been speculated upon, but not extensively investigated with the new DAA agents.
A post hoc analysis of six studies that followed patients with chronic hepatitis C genotype 1 treated with paritaprevir/ritonavir/ombitasvir/dasabuvir revealed a significant drop in fasting glucose (-8.87 mg/dl by week 12; P < 0.0001) in the treatment group compared to the placebo group. [14] The VA Health System also conducted a study including 2435 patients treated with ledipasvir/sofosbuvir, paritaprevir/ritonavir/ombitasvir/dasabuvir, or simeprevir/sofosbuvir combination therapies, which demonstrated a significantly greater reduction in the mean HbA1c level in the SVR group (0.98% ± 1.4%) compared to the no-SVR group (0.65% ± 1.5%) (P = 0.02). [3] The reduction in HbA1c level associated with SVR was restricted to patients with a high baseline HbA1c level and showed no significant difference among patients with HbA1c ≤7.2% at baseline, with or without cirrhosis. [3] Although the VA did conduct a recent study with DAA agents, the patient demographics were limited to the veteran population. The aim of our study was to validate the findings of the VA Health System within an integrated health system representing a diverse demographic and socioeconomic patient population.
Within the total study cohort (n = 996), the majority of patients were male (67.3%), and the average age was 61 years (SD 8.1). Racial representation was relatively even: Whites (n = 307, 30.8%), Blacks (n = 293, 29.4%), and Hispanics (n = 310, 31.1%). The majority of patients had hepatitis C genotype 1 (n = 868, 87.1%) and were treated with ledipasvir/sofosbuvir (n = 892, 89.6%) [Table 1]. Our results were similar to the findings of the VA system and demonstrated that eradication of HCV with DAA agents improved glycemic control in patients with T2DM. The VA study showed a nearly 1% reduction in HbA1c exclusively in the SVR group; however, our results show an approximate 0.4% reduction in HbA1c not only in the SVR group but also in those who did not achieve SVR. Regardless of HCV treatment success or failure, the reduction in HbA1c in the respective study groups was statistically significant (P < 0.001 in the SVR group and P = 0.0051 in the no-SVR group) [Table 2]. Contrary to the VA study, the reduction in HbA1c did not differ between patients who achieved SVR and those who failed to achieve SVR (P = 0.7741) [Table 2]. In support of our findings, a prospective study followed 251 patients with chronic HCV genotype 1 and HIV co-infection; 31% of the patient population were HIV positive, and 17% were diagnosed with T2DM. The DAA treatment regimens contained a wide spectrum of agents, including asunaprevir, beclabuvir, daclatasvir, ledipasvir, sofosbuvir, and telaprevir. They found that the HbA1c reduction in patients with SVR was 0.022% ± 0.53% and that the drop in HbA1c levels after the completion of therapy was unchanged in HCV/HIV co-infected patients with SVR compared to those without SVR. [15] We also wanted to identify whether baseline HbA1c levels affected the primary outcome of the study [Table 3]. While the VA study showed that the reduction in HbA1c associated with SVR was restricted to patients with a high baseline HbA1c level, our research showed no difference in outcomes. There were greater mean changes in HbA1c when baseline HbA1c was higher; however, there were no significant differences between patients who achieved SVR and those who did not at baseline HbA1c levels >7%, >8%, and >9% [Table 3]. Similarly, we wanted to identify whether cirrhosis in patients with hepatitis C affected the change in HbA1c pre- versus post-DAA treatment; patients with and without cirrhosis showed no statistically significant difference in HbA1c reduction [Table 4]. An ad hoc analysis was conducted to determine whether antidiabetic medications contributed to the decrease in HbA1c in both groups; at baseline, 43.4% of patients were on metformin, 14.8% on insulin, and 19.7% on a sulfonylurea [Table 5]. Additional covariate analysis was conducted to determine whether gender, baseline BMI, and HCV genotype contributed to changes in HbA1c; in all categories, there was no statistically or clinically significant effect.
Although several studies have shown that decreases in fasting glucose and HbA1c occur after HCV viral clearance, our findings demonstrate that the reduction in HbA1c following treatment with DAA agents is associated with HCV suppression, regardless of successful viral clearance. The DAA agents' inhibition of HCV replication may have an immediate effect in slowing hepatic steatosis, reducing the secretion of proinflammatory cytokines, and improving insulin resistance. It is essential to consider that the lowering of the HbA1c level represents only one mechanism by which HCV eradication could potentially influence cardiovascular risk. Treatment of HCV has the potential to impact a large proportion of patients with respect to liver disease and diabetes control. However, more extensive prospective studies are needed to address the longer-term impact of SVR achieved with DAAs on T2DM.
Limitations of the study include the study design, with the inability to correct for unmeasured confounders and bias, including the possibility that patients taking a new, effective therapy for HCV might be more likely to engage in other healthy behaviors. Another limitation is the small sample size of nonresponders. Finally, the clinical implications are limited, since all patients with HCV are offered treatment regardless of the presence of T2DM.
The strengths of this study include its sample size. This study is the first of its kind to evaluate the utilization of the newer DAA agents and the effect of HCV eradication on glycemic control in T2DM in a real-world population within an integrated health system. As with many database studies, there are some limitations to address. We did not calculate adherence to HCV medications, which may affect whether patients achieve SVR. There are also unmeasured confounding factors, such as changes in lifestyle habits, diet, and concomitant medications, that may have contributed to the reduction in HbA1c. Finally, this study was not powered to detect a 0.5% reduction in HbA1c between the SVR and non-SVR groups.
In this analysis, consistent with clinical trials and prior studies, more than 90% of patients on DAA therapies achieved SVR; furthermore, the results suggest that the change in HbA1c after successful eradication of HCV with DAA treatment was no different from that in patients who failed treatment. Patients with poorer glycemic control at baseline did not show statistically significant differences between achieving and not achieving SVR. This suggests that treatment with DAA agents, regardless of complete eradication of HCV infection, may have an immediate beneficial effect on hepatic inflammation and glycemic control. This study highlights the need for long-term evaluation of HCV eradication on glycemic control in patients with T2DM. Long-term studies should assess the benefits of HCV eradication 2-3 years after DAA completion and SVR achievement.
"year": 2020,
"sha1": "010acaed5996a5227663f2b48995fd1001870462",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jrpp.jrpp_19_110",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1903850b1b4095b1b5b2c51be5bf83124e92e878",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Mitochondria of Porcine Oocytes Synthesize Melatonin, Which Improves Their In Vitro Maturation and Embryonic Development
The in vitro maturation efficiency of porcine oocytes is relatively low, and this limits the production of in vitro porcine embryos. Since melatonin is involved in mammalian reproductive physiology, in this study, we have explored whether endogenously produced melatonin can help in porcine oocyte in vitro maturation. We have found, for the first time in the literature, that mitochondria are the major sites for melatonin biosynthesis in porcine oocytes. This mitochondrially originated melatonin reduces ROS production and increases the activity of the mitochondrial respiratory electron transport chain, mitochondrial biogenesis, mitochondrial membrane potential, and ATP production. Therefore, melatonin improves the quality of oocytes and their in vitro maturation. In contrast, the reduced melatonin level caused by siRNA knockdown of AANAT (siAANAT) is associated with an abnormal distribution of mitochondria, a decreased ATP level in porcine oocytes, and inhibition of their in vitro maturation. These abnormalities can be rescued by melatonin supplementation. In addition, we found that siAANAT switches mitochondrial oxidative phosphorylation to glycolysis, a Warburg effect. This metabolic alteration can also be corrected by melatonin supplementation. All these activities of melatonin appear to be mediated by its membrane receptors, since the non-selective melatonin receptor antagonist Luzindole can blunt the effects of melatonin. Taken together, the mitochondria of porcine oocytes can synthesize melatonin and improve the quality of oocyte maturation. These results offer a novel perspective for studying oocyte maturation under in vitro conditions.
Introduction
Melatonin, also known as N-acetyl-5-methoxytryptamine, is widely present in almost all organisms. In mammals, all tissues and organs, especially the pineal gland, convert tryptophan to 5-hydroxytryptophan by hydroxylase; this intermediate is then metabolized into 5-hydroxytryptamine (5-HT) by decarboxylase. This 5-HT is then catalyzed by arylalkylamine N-acetyltransferase (SNAT/AANAT) into N-acetylserotonin. Finally, N-acetylserotonin is converted to melatonin by hydroxyindole-O-methyltransferase (HIOMT/ASMT). Melatonin synthesized by the pineal gland is directly released into the cerebrospinal fluid and blood [1][2][3], while extrapineally synthesized melatonin is primarily intended for local utilization (autocrine and paracrine effects) [4]. The majority of melatonin is produced in the mitochondria [5], and it plays an important role in reducing oxidative stress. Our group was the first to report that the mitochondria of mouse oocytes synthesize melatonin [6].
Mitochondria are one of the most important organelles of oocytes. They are the powerhouses of cells and also control intracellular calcium homeostasis [7,8]. In addition, mitochondria play a central role in other functions, including the regulation of cell death and signaling pathways, iron metabolism, and the biosynthesis of certain organic compounds [9][10][11]. During folliculogenesis, the follicle undergoes substantial growth, expanding approximately 500-fold. This also makes oocytes within the follicle undergo major structural and biochemical transitions, including two meiotic divisions. To cope with such an energy-consuming process, the quantity and quality, as well as the distribution pattern, of mitochondria in the oocytes are also required to change [12].
The maturation of oocytes requires large amounts of ATP for their continued transcription and translation; therefore, this process needs sufficient numbers of functional mitochondria for ATP production. In fact, immature oocytes have limited mitochondrial activity and depend upon the surrounding cumulus and granulosa cells to provide additional energy to support their maturation [13]. During the process of ovulation, the oocytes lose their connection to the cumulus cells, and this forces them to activate their own mitochondria. This is why matured oocytes have accumulated a sufficient number of mitochondria to generate ATP, and ATP levels are therefore elevated during polar body expulsion. Higher ATP levels are associated with higher fertilization rates in matured oocytes [14], while lower ATP content in oocytes is associated with poor oocyte quality and lower levels of fertilization [15,16].
As mentioned above, melatonin is synthesized in mitochondria. Thus, melatonin is considered to have a major impact on mitochondrial functions, including increasing the efficiency of the electron transport chain [17] and ATP production [18], and reducing oxidative damage to the mitochondria [19][20][21]. The oxidative damage caused by excess reactive oxygen species (ROS) impairs cellular function, leading to enzyme inactivation, lipid peroxidation, ATP depletion, and mitochondrial disturbance. It has been found that high levels of ROS and low antioxidant activity in the follicular fluid result in poor pregnancy outcomes after IVF (in vitro fertilization). Increased ROS levels during in vitro oocyte maturation are associated with chromosomal errors and low developmental potential of oocytes [22,23]. For example, increased ROS levels in cultured mouse oocytes alter the chromosomal arrangement of microtubules and spindles and inhibit their maturation [5]. Melatonin, as a potent antioxidant, can directly scavenge toxic oxygen derivatives [24,25] and stimulate the activities of antioxidant enzymes [26], including glutathione peroxidase (GSH-Px) and superoxide dismutase (SOD) [27]. However, whether porcine oocytes can synthesize melatonin and, if so, the effects of this endogenously generated melatonin on oocyte maturation are still unclear.
In this study, we aimed to explore the subcellular localization of melatonin synthesis in porcine oocytes and the role of locally synthesized melatonin in oocyte maturation. To achieve this purpose, interfering RNA targeting AANAT was used to knock down melatonin production, allowing us to explore further whether endogenously synthesized melatonin is involved in the maturation of porcine oocytes under in vitro conditions.
Ethics Statement
All animal studies followed the guidelines of the Animal Care and Use Committee of China Agricultural University and were approved by the Ethics Committee of the Agriculture University of China (permission number: AW01602202-1-6).
Chemicals
All chemicals used in this study were purchased from the Sigma-Aldrich Chemical Company (St. Louis, MO, USA), unless otherwise indicated.
The Procedure of In Vitro Porcine Oocyte Maturation
The ovaries of sows (donated by a local slaughterhouse, Beijing Food Company, Beijing, China) were collected and packed in thermostable containers (37 °C) with sterilized saline, penicillin, and streptomycin; the samples were transported to the laboratory within 2 h and, finally, washed with 37 °C sterilized physiological saline. Thereafter, follicular fluid was extracted from the follicles (3-6 mm in diameter) with a syringe fitted with a 20 G needle. The cumulus-oocyte complexes (COCs) were rinsed twice in HEPES-buffered lactate (TL-HEPES) medium and 3 times in hormone-free maturation medium. The COCs were then transferred into the maturation medium (50 oocytes per 0.5 mL of medium), which consisted of TCM-199 with 0.57 mM cysteine, 3.05 mM D-glucose, 0.91 mM sodium pyruvate, 10 ng/mL epidermal growth factor (EGF), 0.5 IU/mL luteinizing hormone (human origin, LH, Sigma L6420), 0.5 IU/mL follicle-stimulating hormone (human origin, FSH, Sigma F4021), 0.1% polyvinyl alcohol (PVA), 75 mg/mL penicillin, 50 mg/mL streptomycin, 20 ng/mL LIF, 20 ng/mL IGF1, and 40 ng/mL FGF2, and incubated at 38.5 °C, under 5% CO₂ and 100% humidity, for 42-44 h for maturation. The matured COCs were transferred to a culture medium containing 1 mg/mL hyaluronidase in TL-HEPES, and the cumulus cells were removed by vortexing and washed with TL-HEPES. These denuded oocytes were then used in the subsequent experiments [28].
Parthenogenetic Activation of Oocytes
The denuded porcine oocytes were activated in the activation medium (0.3 M mannitol, 0.05 mM CaCl₂, 0.1 mM MgCl₂, and 0.1% bovine serum albumin (BSA)) by an electrical pulse of DC 130 V/mm for 80 µs, using a BTX Electro-Cell Manipulator 2001 (BTX, Inc., San Diego, CA, USA). The activated oocytes were then rinsed in porcine zygote medium-3 (PZM-3) and cultured in a medium containing 5 µg/mL of cytochalasin B at 38.5 °C and 5% CO₂ in air with 100% humidity for 5-6 h. The experiment was divided into 3 groups: the Control, 5-HT, and 5-HT + Lu groups.
In Vitro Culture (IVC) of Embryos
The parthenogenetically activated oocytes (approximately 20-30 oocytes per group) were placed in 100 µL droplets of PZM-3, supplemented with 0.6 mg/mL of BSA, and incubated at 39 °C, 5% CO₂, and 5% O₂. The cleavage rate and blastocyst rate were observed and recorded after 48 and 168 h of IVC, respectively.
Calculation of Cumulus Cell Expansion and Polar Body Extrusion Rates in Porcine Oocyte Maturation
The expanded oocytes of matured COCs were counted under the microscope after 44 h of incubation. The number of expanded oocytes was divided by the total number of oocytes in each well to calculate the cumulus cell expansion rate. The cumulus cells were then removed with 0.3 mg/mL of hyaluronidase. Oocytes with extruded polar bodies were identified under a microscope with a 20× eyepiece, and the polar body extrusion rate was calculated against the total number of oocytes in each well.
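The rate arithmetic described above is a simple ratio; a sketch with hypothetical counts:

```python
def rate_percent(count: int, total: int) -> float:
    """Percentage rate, e.g., cumulus expansion or polar body extrusion."""
    return 100.0 * count / total

# Hypothetical counts for one well
expanded, extruded, total = 42, 38, 50
print(f"cumulus expansion rate: {rate_percent(expanded, total):.1f}%")
print(f"polar body extrusion rate: {rate_percent(extruded, total):.1f}%")
```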
Cortical Granule Migration Assay
The zona pellucida of MI-stage oocytes was removed with 0.1% pronase; the oocytes were washed 3 times with PBS and incubated in a CO₂ incubator to restore their normal shape. They were then fixed with 4% formaldehyde at room temperature for 30 min and washed 3 times, 5 min each, with blocking solution (PBS + 3 mg/mL BSA + 7.5 mg/mL glycine). The blocked oocytes were permeabilized with 0.5% Triton X-100-PBS-0.1% PVA for 30 min, then incubated at room temperature for an additional 30 min in the dark and stained with 100 µg/mL PBS-FITC-PNA staining solution (Sigma L-7381). The samples were washed with PBS 3 times, placed on a glass slide, and covered with a paraffin-coated cover glass. The distribution of oocyte cortical granules was observed under a laser confocal microscope.
Mitochondrial Levels and Their Distribution in Porcine Oocytes
The oocytes were vortexed to remove the zona pellucida and obtain denuded oocytes. The mitochondria were labeled with MitoTracker Red CMXRos for 30 min (PBS washing solution + 500 nmol/L MitoTracker Red CMXRos), mounted onto slides, and analyzed under a fluorescence microscope.
Subcellular Localization of AANAT, Detected by Immunoelectron Microscopy
Approximately 1000 oocytes were collected for fixation with paraformaldehyde. The fixed samples were washed with PBS to remove fixative residuals and dehydrated through a 30, 50, 75, 85, 95, and 100% alcohol gradient in sequence, with xylene in the final stage. The samples were then soaked in epoxy resin, fixed, and embedded into blocks by temperature gradient treatment in an oven. Each block was then trimmed with a razor blade, and any obvious follicle structure on the surface of the ovary was removed. The trimmed samples were sliced with an automatic microtome in sequence; the slice thickness was 100 nm. Slices containing the oocyte structure were selected and fixed on copper grids. The AANAT antibody was diluted to 1:100 and dispensed as 30 µL droplets; the samples were submerged in the droplets, pre-incubated at room temperature for 1 h, and then incubated at 4 °C overnight. Thereafter, the samples were washed thoroughly to remove the primary antibody and incubated with 30 µL of gold-labeled secondary antibody (diluted 1:2000) at room temperature for 2 h. After washing away the secondary antibody, the samples were incubated in uranyl acetate for 15 min, analyzed under an electron microscope, and photographed.
Lipid Droplet Staining of Porcine Oocytes
After 44 h of maturation, the COCs were harvested from the IVM medium; the cumulus cells around the oocytes were removed, and the oocytes were stained with 20 µg/mL BODIPY 493/503 (Thermo, Waltham, MA, USA, D3922). The stained MII oocytes were then placed in a glass petri dish and observed under a confocal microscope with image capture (Nikon A1HD25, Tokyo, Japan). The excitation wavelength was 405 nm for LipiBlue and 488 nm for BODIPY 493/503. NIS software (Nikon) was used to take pictures and calculate the fluorescence intensity of the lipid droplets.
Mitochondrial Membrane Potential Analysis with JC-10 Staining
JC-10 is a fluorescent probe used for detecting the mitochondrial membrane potential, ∆Ψm. When the mitochondrial membrane potential is high, JC-10 aggregates in the mitochondrial matrix to form a polymer with red fluorescence; when the mitochondrial membrane potential is low, JC-10 remains a monomer with green fluorescence. The oocytes were incubated with diluted JC-10 solution (200× JC-10 diluted to 1×) at 37 °C for 20 min, then washed with JC-10 staining buffer 3 times, after which the samples were placed on covered slides and observed under a laser confocal microscope. Changes in mitochondrial membrane potential are detected by shifts in fluorescence color; the relative red/green fluorescence ratio is commonly used to measure the degree of mitochondrial depolarization.
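The JC-10 readout reduces to a red/green intensity ratio per oocyte; the sketch below assumes mean channel intensities have already been extracted from the confocal images (the values are hypothetical).

```python
import numpy as np

red = np.array([180.0, 150.0, 200.0])    # aggregate (polymer) fluorescence
green = np.array([60.0, 75.0, 55.0])     # monomer fluorescence

ratio = red / green                      # higher ratio = higher membrane potential
print(ratio.round(2))
```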
Procedure of Immunofluorescence Staining
The oocytes were fixed in 4% paraformaldehyde (PFA) at room temperature for 45 min and washed with PBS-0.1% PVA 3 times, 10 min each time. The cell membrane was permeabilized by incubation in 0.5% Triton X-100-PBS-0.1% PVA at room temperature for 1 h. The samples were first blocked in 3% BSA-0.1% Triton X-100-PBS-0.1% PVA solution for 1 h at room temperature; then, the MII oocytes were incubated with AANAT antibody (1:100) and ASMT antibody (1:100) (diluted in blocking solution) at 4 °C for 12 h and washed with PBS-0.1% PVA 3 times, 10 min each time. They were then incubated with the secondary antibody (1:200) at room temperature in the dark for 1 h and washed with DPBS-0.1% PVA 3 times, 20 min each time; Hoechst 33342 was used to stain the nuclei. The samples were mounted on slides and observed under a laser confocal microscope, and photographs were taken.
Melatonin Assay with High-Performance Liquid Chromatography (HPLC)-Tandem Mass Spectrometry
First, 200 µL of mitochondrial culture solution was mixed with 800 µL methanol and centrifuged at 12,000 r/min at 4 °C for 20 min. The supernatants were filtered through a 2-µm filter, and the sample was injected into the HPLC-tandem mass spectrometry system. For the HPLC system, the mobile phase was formulated with solutions A (0.1% formic acid solution) and B (methanol). A gradient elution procedure was carried out in the order of 10% B-phase elution for 0-1 min, 60% B-phase for 2-3.5 min, and 10% B-phase for 3.5-5 min. The flow rate of the mobile phase was 0.4 mL/min, the column temperature was 40 °C, and the injected sample volume was 2 µL. The MS/MS system comprised a triple quadrupole mass spectrometer with electrospray ionization (ESI). Positive-mode acquisition was used to collect MS/MS data, and multiple reaction monitoring (MRM) was used to identify MT (melatonin) and MT-d4. The gas temperature and flow rate were maintained at 350 °C and 6 L per minute, respectively. The nebulizer pressure was 50 psi, and the capillary voltage setting was 3500 V. The sheath gas heater reached 300 °C, with a sheath gas flow of 10 mL/min. The product ions (m/z) of MT were 174.2 and 159.1, while its precursor ion (m/z) was 233.1; the collision energy was 10 V for 233.1 > 174.2 and 25 V for 233.1 > 159.1. The precursor ion (m/z) of MT-d4 was 237.1, and its product ions (m/z) were 163.1 and 178.2; the collision energy was 25 V for both 237.1 > 178.2 and 237.1 > 163.1. The fragmentor voltage for all transitions was 75 V.
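For reference, the MRM transitions and collision energies stated above can be collected into a small lookup table; this is a bookkeeping sketch, not vendor acquisition-software configuration.

```python
# Transitions exactly as stated in the text; fragmentor 75 V for all
MRM_TRANSITIONS = {
    "MT": [
        {"precursor_mz": 233.1, "product_mz": 174.2, "collision_energy_V": 10},
        {"precursor_mz": 233.1, "product_mz": 159.1, "collision_energy_V": 25},
    ],
    "MT-d4": [
        {"precursor_mz": 237.1, "product_mz": 178.2, "collision_energy_V": 25},
        {"precursor_mz": 237.1, "product_mz": 163.1, "collision_energy_V": 25},
    ],
}
FRAGMENTOR_V = 75
```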
Assay of ATP
The zona pellucida of the oocytes was removed with pronase, and the cleaned oocytes were washed 3 times with TL solution; 12 oocytes were transferred into 50 µL of lysate solution, vortexed until fully lysed, and kept at 4 °C or on ice for a short period. A 96-well light-proof plate was used for the assay. First, 50 µL of ATP detection solution was added to the standard and sample wells and left at room temperature for 5 min to consume the background ATP; then, the lysed samples were added to the pre-prepared wells of the plate and mixed well. An Infinite F200 microplate reader was used to detect the ATP content. The ATP content of each sample well was calculated from the ATP standard curve, and the value was divided by the number of oocytes to obtain the ATP content per oocyte.
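The per-oocyte ATP calculation described above amounts to fitting a linear standard curve and dividing by the oocyte count; all numbers in this sketch are hypothetical.

```python
import numpy as np

std_atp = np.array([0.0, 0.1, 0.5, 1.0, 5.0])    # ATP standards (nmol), hypothetical
std_lum = np.array([20, 160, 720, 1400, 7000])   # luminescence readings, hypothetical

slope, intercept = np.polyfit(std_atp, std_lum, 1)  # linear standard curve

sample_lum, n_oocytes = 950.0, 12
atp_total = (sample_lum - intercept) / slope        # nmol ATP in the lysate
print(f"ATP per oocyte: {atp_total / n_oocytes:.4f} nmol")
```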
RNA Interference Assay
The porcine AANAT gene was used as a template to design the interfering RNA (Suzhou Gemma Gene Co., Ltd., Suzhou, China), and the designed sequence was checked with BLAST to exclude homology with other genes. The final porcine siRNA sequence was as follows: siAANAT sense (5′-3′): GGGACUGAAAUAAAGAGAUTT; antisense (5′-3′): AUCUCUUUAUUUCAGUCCCTT. The cumulus granulosa cells were removed from the oocytes, and 10 pL of siRNA (20 µM) was injected into the cytoplasm of each oocyte using a micromanipulator [29]. The experiment was divided into 3 groups: siNC, siAANAT, and siAANAT + MT.
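As a quick consistency check (illustrative), the antisense strand quoted above is the reverse complement of the sense core once the dTdT (TT) 3′ overhangs typical of synthetic siRNA duplexes are ignored:

```python
RNA_COMPLEMENT = str.maketrans("AUGC", "UACG")

def reverse_complement(rna: str) -> str:
    return rna.translate(RNA_COMPLEMENT)[::-1]

sense = "GGGACUGAAAUAAAGAGAUTT"
antisense = "AUCUCUUUAUUUCAGUCCCTT"

# Strip the 2-nt TT overhangs before comparing the 19-nt cores
assert reverse_complement(sense[:-2]) == antisense[:-2]
print("antisense core matches the reverse complement of the sense core")
```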
Real-Time Fluorescent Quantitative PCR
The oocytes extracted from COCs were washed 3 times with PBS and stored at −80 °C until RNA extraction. Total RNA was extracted using TRIzol (Invitrogen Inc., Carlsbad, CA, USA), quantified by measuring the absorbance at 260 nm, and stored at −80 °C until assayed. The mRNA levels of the relevant genes were assessed in a LightCycler (Roche Applied Science, Mannheim, Germany) by quantitative RT-PCR using the One Step SYBR PrimeScript RT-PCR kit (Takara Bio Inc., Tokyo, Japan). After melting curve analysis, the accumulated fluorescence was analyzed by the second derivative method, and the expression level of the target gene in each sample was normalized to that of β-actin. The primer pairs for the mRNAs are shown in Table 1.
Table 1. Primer pairs for quantitative RT-PCR, listing gene, accession number, and sequence (5′-3′).
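The β-actin normalization described above is commonly implemented with the 2^(-ΔΔCt) method; a sketch under that assumption (the paper derived quantification cycles by the second derivative method; the Cq values below are hypothetical):

```python
def relative_expression(cq_target, cq_ref, cq_target_ctrl, cq_ref_ctrl):
    """Fold change of a target gene vs. control, normalized to a reference gene."""
    ddct = (cq_target - cq_ref) - (cq_target_ctrl - cq_ref_ctrl)
    return 2.0 ** -ddct

# Hypothetical Cq values: target gene and beta-actin, sample vs. control
print(relative_expression(24.1, 18.0, 25.3, 18.1))
```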
Statistical Analysis
Unless otherwise specified, the data are expressed as mean ± SEM. One-way analysis of variance (ANOVA) was used to compare the groups, followed by Dunnett's test. All tests were performed with SPSS 26.0 statistical software; p < 0.05 denoted a significant difference.
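A sketch of the stated workflow (one-way ANOVA followed by Dunnett's test against the control); SciPy 1.11+ provides scipy.stats.dunnett, and the group data below are hypothetical.

```python
import numpy as np
from scipy import stats

control = np.array([71.0, 68.5, 73.2, 70.1])
group_a = np.array([78.4, 80.1, 76.9, 79.3])
group_b = np.array([74.0, 72.8, 75.5, 73.9])

f_stat, p_anova = stats.f_oneway(control, group_a, group_b)
dunnett = stats.dunnett(group_a, group_b, control=control)
print(f"ANOVA p = {p_anova:.4f}; Dunnett p-values = {dunnett.pvalue}")
```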
The Capacity of Mitochondria in Porcine Oocytes for Melatonin Biosynthesis during In Vitro Maturation
Using immunofluorescence staining and confocal microscopy, both of the melatonin-synthesizing enzymes, AANAT and ASMT, were found to be expressed in oocytes and colocalized with mitochondria (Figure 1A). The immunoelectron microscopy results confirmed that a major portion of AANAT was distributed in mitochondria, although some was also present in the cytoplasm (Figure 1B). In order to explore whether melatonin is synthesized during the maturation of porcine oocytes, 5-HT, a precursor of melatonin, was added to the maturation medium. The medium was then collected for melatonin assay by ultrahigh-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). The results showed that during the process of maturation, the level of melatonin gradually increased and was then sharply elevated from the MI to MII stages, compared to the control group (p < 0.01) (Figure 1E). Accordingly, after adding 5-HT, the protein level of AANAT was also significantly increased compared to the control group (p < 0.01) (Figure 1C,D). To further explore the subcellular sites of melatonin synthesis, the mitochondria of oocytes and cumulus cells were extracted and incubated with 5-HT. The culture fluid was collected at 0, 1, and 2 h for melatonin detection. Melatonin was detected in the mitochondrial culture medium, and the melatonin production of mitochondria extracted from oocytes supplemented with 5-HT increased significantly compared to the controls after 1 h of incubation (Figure 1F). These results indicate that the mitochondria of porcine oocytes synthesize melatonin during in vitro maturation.
5-HT Supplementation Improves the Quality of Porcine Oocytes
Since 5-HT could increase melatonin production in oocytes, 5-HT was added to the COC maturation medium. The results showed that 5-HT significantly increased the expansion rate of porcine cumulus cells (p < 0.05) (Figure 2A,B) and the normal migration of cortical granules (p < 0.05) (Figure 2D,E), compared to the control group. The addition of Luzindole, a melatonin receptor inhibitor, significantly reduced the normal migration of cortical granules (p < 0.001) and the cumulus expansion rate (p < 0.05) compared to the control group (Figure 2B). Luzindole also reduced the polar body extrusion rate, but this decrease was not significantly different from the control group (p > 0.05) (Figure 2C). All of these results indicated the promotive effects of melatonin on both the nuclear and cytoplasmic maturation of porcine oocytes, and these activities may be partially mediated by melatonin receptors.
Effects of 5-HT Supplementation on the Distribution and Function of Mitochondria in Oocytes
To explore the effect of endogenously produced melatonin on mitochondrial properties in oocytes, 5-HT was added to the culture medium to increase the melatonin production of oocytes. This treatment significantly increased the mitochondrial density (p < 0.05) (Figure 3A,B) and the normal mitochondrial distribution in oocytes compared to the control group (p < 0.05). Accordingly, the mitochondrial biogenesis-related gene SIRT1 was significantly upregulated compared to the controls (Figure 3C). Again, the melatonin receptor inhibitor Luzindole significantly blunted all of these beneficial mitochondrial alterations (p < 0.05) (Figure 3B,D), indicating that the beneficial effects of melatonin are mediated by its receptor. It is worth noting that 5-HT supplementation also increased the number of lipid droplets in porcine oocytes compared to the control (p < 0.05), while Luzindole reduced this increase (p < 0.05) (Figure 3A,E). To explore whether endogenously generated MT promotes the energy metabolism of oocytes, the oocytes were stained with JC-10 and analyzed with a confocal microscope (Figure A1A). 5-HT supplementation significantly increased the mitochondrial membrane potential (p < 0.0001), as indicated by increased JC-10 fluorescence intensity, and the ATP content of oocytes compared to the control (p < 0.05); Luzindole significantly blunted the increases in mitochondrial membrane potential (p < 0.0001) and ATP content (p < 0.05) induced by 5-HT (Figure 3F,G). At the molecular level, 5-HT treatment increased the activities of mitochondrial complex I (ND1), complex III (COX3), and complex IV (CytB), although the differences did not reach statistical significance, while complex V (ATPase6) showed little change compared to the control (Figure A1E-H). The 5-HT treatment upregulated the gene expression of SIRT3 and SOD1 (Figure A1B-D) and significantly decreased the ROS level in oocytes (p < 0.05), but this decrease was blunted by Luzindole (p < 0.001), which hindered the antioxidant effect of endogenous melatonin (p < 0.0001) (Figure 3H,I).
Effects of siAANAT on the Quality and Maturation of Porcine Oocytes
siAANAT was designed to suppress melatonin production in oocytes. siAANAT was microinjected into denuded oocytes at the GV stage and cultured in vitro for 44 h, then the MII-stage oocytes were selected for immunofluorescence staining. The results showed that the siAANAT oocytes had significantly lower AANAT protein expression than the siNC oocytes (p < 0.01), indicating that the siRNA successfully knocked down the expression of AANAT in the oocytes. The polar body extrusion rate of the oocytes was then counted. The polar body extrusion rate in the siAANAT group was significantly lower than that in the control group (p < 0.01), and melatonin supplementation significantly increased the extrusion rate suppressed by siAANAT (p < 0.05) (Figure 4D). After the parthenogenetic activation of oocytes, it was found that the cleavage rate and blastocyst rate of the siAANAT group were significantly lower than those of the siNC group, while melatonin supplementation improved the quality of porcine oocyte maturation and the embryo development potential (p < 0.05) (Figure 4E,F).
Effects of siAANAT on Mitochondrial Distribution and ATP Production in Porcine Oocytes
Since lipids are an important substrate of mitochondrial metabolism, the lipid levels in oocytes were measured. The number of lipid droplets in siAANAT oocytes showed no significant difference compared to the other groups (Figure A2A,B). The number of mitochondria in the siAANAT oocytes showed no significant difference from the siNC group but had a higher abnormal distribution rate than in the siNC oocytes (p < 0.001) (Figure 5E). Interestingly, melatonin treatment rescued the abnormal mitochondrial distribution caused by siAANAT (p < 0.01) and also significantly increased the number of mitochondria (p < 0.01) (Figure 5A,D). In addition, the expression of SIRT1 in the melatonin treatment group was significantly upregulated compared to the other groups (p < 0.05) (Figure 5G). At the same time, the ROS level in the siAANAT group was significantly increased (p < 0.0001) and was reduced by melatonin supplementation (p < 0.0001) (Figure 5B,C). The mitochondrial membrane potential in the siAANAT oocytes was lower than in the control and melatonin-treated groups, but the difference was not significant (p > 0.05) (Figure A2C,D). However, the ATP level in the siAANAT oocytes was significantly lower than in the control and melatonin-treated oocytes (p < 0.05) (Figure 5F).
Effects of siAANAT on the Metabolic Pattern of Porcine Oocytes
In this study, we also found that melatonin slightly increased the expression of mitochondrial complex III (COX3) but significantly upregulated the expression of complex IV (CytB) and complex V in siAANAT oocytes compared to the other groups (p < 0.05) (Figure 5H-J). These results led us to ask whether siAANAT causes oocytes to produce energy mainly through glycolysis. To further understand the effects of siAANAT on the metabolic pattern of porcine oocytes, the expression of key genes in the glycolytic pathway was measured. The expression of HIF1A and GLUT1 in siAANAT oocytes was slightly upregulated (Figure 5K,L); at the same time, the expression of 6-phosphogluconate dehydrogenase (PGD) and lactate dehydrogenase (LDHA) was significantly upregulated compared to the control oocytes (p < 0.05) (Figure 5N,O). Melatonin supplementation significantly downregulated the expression levels of PGD (p < 0.05) and serine-threonine protein kinase 1 (AKT1) (p < 0.05) (Figure 5M,N).
Discussion
The maturation of mammalian oocytes requires substantial energy, and studies have shown that porcine oocytes consume glucose to support their final maturation [30]. Recently, evidence has emerged that lipids are a key nutrient and even a major energy source for porcine oocytes [31]. Numerous studies have shown that in early embryo development, mitochondria are responsible for providing sufficient ATP for most cellular processes through oxidative phosphorylation. The number of mitochondria increases substantially during embryonic development to provide the energy required for blastocyst formation; therefore, mitochondrial dysfunction leads to developmental arrest in early embryos [32].
Evidence has shown that mitochondrial functions can be influenced by several factors, and melatonin is one of them. Melatonin can regulate mitochondrial functions by scavenging free radicals, activating uncoupling proteins, maintaining optimal mitochondrial membrane potential, and promoting mitochondrial biogenesis [25,[33][34][35][36][37]. The activities of melatonin on the mitochondria may relate to its effect in promoting oocyte maturation, fertilization, and early embryonic development in mammals [38]. AANAT is the rate-limiting enzyme for melatonin synthesis [39]. Our previous study found that melatonin was synthesized in the mitochondria of mouse oocytes during maturation [6]. In the current study, we confirmed that AANAT was co-localized with mitochondria in porcine oocytes, and 5-HT supplementation during in vitro porcine oocyte maturation significantly increased melatonin production compared to the control group (p < 0.01). In addition, the in vitro-cultured mitochondria isolated from both cumulus cells and oocytes released melatonin into the culture medium; with 5-HT supplementation, the mitochondria produced significantly higher levels of melatonin than the control group. These results show that the mitochondria of porcine oocytes synthesize melatonin during oocyte maturation.
During maturation, oocytes demand substantial amounts of ATP for their continuous transcriptional and translational activities. Mitochondria are the primary source of ATP production, and sufficient numbers of functional mitochondria are critical for oocyte maturation. Therefore, the quality of oocytes is positively related to mitochondrial DNA copy number and ATP content. Under physiological conditions, the copy number of mitochondrial DNA, as well as the mitochondrial distribution, improves significantly during oocyte maturation. However, in vitro maturation (IVM) may result in altered mitochondrial morphology and expression of genes related to mitochondrial function [40]. In this study, we found that 5-HT supplementation significantly increased melatonin production; this elevated melatonin production increased the number of mitochondria, promoted their uniform distribution, and thus increased the ATP content in oocytes. These activities of melatonin were probably mediated by its receptor, since the melatonin receptor inhibitor Luzindole blunted all of them. To further identify the effects of melatonin on mitochondrial function and oocyte maturation, AANAT was silenced with siAANAT, which significantly reduced melatonin production. The siAANAT caused abnormal mitochondrial distribution and mitochondrial dysfunction, while melatonin supplementation corrected these abnormalities. The results showed that melatonin is necessary for mitochondrial function, oocyte maturation, and embryonic development.
To further prove that endogenously generated melatonin is involved in oocyte quality and maturation, 5-HT was used to increase endogenous melatonin production. The results showed that 5-HT supplementation had effects similar to melatonin supplementation. Melatonin is a mitochondria-targeted antioxidant, and it can upregulate the expression of SIRT3 and SOD1 [41]. Melatonin also protects against mitochondrial depletion and the energy deficiency caused by environmental toxin exposure by activating the SIRT1/PGC-1α pathway. These activities of melatonin promote mitochondrial biogenesis, suggesting that melatonin can be used in early embryonic development to counteract a state of mitochondrial deficiency [42]. In this study, we found that 5-HT significantly upregulated the expression level of SIRT1 after siAANAT, indicating that this effect may not involve melatonin but, instead, melatonin metabolites (see below). He et al. reported that melatonin can reduce the mitochondrial membrane potential, resulting in the quiescence of mitochondrial respiration and the maintenance of a state of low metabolism [43]. However, our results found that melatonin increased the mitochondrial membrane potential, which may be due to a compensatory mechanism caused by the in vitro culture environment, in which there was an energy supply shortage.
It has been reported that 5-HT and its receptors are present in mouse and human oocytes and cumulus cells [44][45][46], indicating their involvement in mammalian reproductive activity, including normal embryonic development. Other studies have reported that 5-HT administration caused blastocyst cell apoptosis and a decrease in blastocyst cell number and blastocyst rate [45,47]. The administration of 5-HT at a concentration of 10⁻⁴ M inhibited the maturation of porcine oocytes by inhibiting the synthesis of estradiol in granulosa cells, while its antagonists also inhibited mouse embryonic development and even caused embryonic developmental blockage at high concentrations under in vitro conditions [48]. In the current study, we speculated that some activities of 5-HT on oocytes and embryonic development are mediated by its metabolite, melatonin. The evidence obtained in this study strongly supports this speculation. Our results showed that 5-HT supplementation not only upregulated the expression of AANAT but also increased melatonin production in oocytes. The increased melatonin level was positively associated with oocyte quality and embryonic development (Figure 6).
Traditionally, it was believed that energy metabolism in the COCs was a collaboration among the cells: the cumulus cells are responsible for metabolizing glucose to form pyruvate and lactate, both of which are transported to the oocyte through gap junctions and finally enter the TCA cycle to produce ATP in the oocytes [49][50][51]. However, porcine COCs may prefer to use fatty acids as the energy source [52]. Fatty acids undergo beta-oxidation to produce ATP [53]. A reduced abundance of CPT1 impairs the transportation of fatty acids into mitochondria, leading to reduced β-oxidation; this is compensated for by an elevation of glucose metabolism in porcine embryos, suggesting that fatty acid oxidation is the prevalent alternative energy pathway to glucose metabolism in these cells [54]. Further evidence showed that siAANAT significantly upregulated the expression of the key genes PGD and LDHA, related to the Warburg effect, increasing glycolytic activity but with decreased ATP production in the siAANAT oocytes compared to the controls. Melatonin supplementation reduced these upregulated genes. The results indicate that melatonin may induce porcine oocytes to preferentially use lipids for fatty acid β-oxidation to provide energy for porcine oocyte maturation.
Conclusions
In summary, in this study, for the first time in the literature, we have identified that the mitochondria of porcine oocytes are major sites of melatonin biosynthesis. This mitochondrially originated melatonin reduces ROS, upregulates the expression of SIRT1, and increases the number of mitochondria, their uniform distribution, and their oxidative phosphorylation, thereby improving the maturation efficiency of porcine oocytes. The reduced melatonin level caused by siAANAT upregulates the expression of PGD and LDHA, thereby switching mitochondrial oxidative phosphorylation to glycolysis and reducing the maturation efficiency of porcine oocytes. These abnormalities are counteracted by melatonin supplementation. All these effects of melatonin are at least partially mediated by its receptors, since the non-selective melatonin receptor blocker Luzindole blunts these activities. These results provide an experimental basis for further revealing the metabolic mode of melatonin in porcine oocyte maturation. If this observation is confirmed by others or in different mammalian species, it will provide new insights for treating human infertility and supporting the conservation of germplasm resources in animal husbandry.
Figure 1. Melatonin synthesis in mitochondria during the process of the in vitro maturation of porcine oocytes. (A) Immunofluorescent staining of AANAT and ASMT in porcine oocytes. Scale bar: 50 µm. (B) AANAT subcellular distribution in porcine oocytes; red arrows point to mitochondria and green arrows point to AANAT synthetase. (C) Immunofluorescent staining of AANAT in porcine oocytes with 5-HT treatment; scale bar: 100 µm. (D) Statistical analysis of the AANAT level of oocytes (n = 25). (E) Melatonin levels in the culture medium of oocytes at each maturation stage; n = 8. (F) Melatonin levels in the mitochondrial culture medium of granulosa cells (left) and oocytes (right); n = 3. ** p < 0.01, **** p < 0.0001.
Figure 2. The effects of 5-HT and Luzindole on the maturation of porcine oocytes. (A) Cumulus cell expansion of oocytes after 44 h of maturation, scale: 200 µm. (B) The statistical analyses of cumulus
Figure 3. Effects of 5-HT on mitochondrial function in porcine oocytes. (A) Lipid droplet and mitochondrial staining images of oocytes. (B) Statistical analysis of MitoTracker Red fluorescence
Figure 6. The potential pathways of porcine oocyte-synthesized melatonin and their effect on oocyte quality and maturation.
Sulfate Resistance in Cements Bearing Bottom Ash from Biomass-Fired Electric Power Plants
To address some of the gaps in the present understanding of the behavior of new supplementary cementitious materials, such as bottom ash (BA) from biomass-fired electric power plants, in cement manufacture, this study explored the effect of this promising material on the sulfate resistance of the end product. Cement paste prepared with 10% or 20% BA (previously characterized for mineralogy and chemical composition) was Köch-Steinegger tested for sulfate resistance. The hydration products, in turn, were analyzed before and after soaking the reference and experimental cements in sodium sulfate to determine whether the use of the addition hastened microstructural, mineralogical, or morphological decay in the material. The 56-day findings showed that the presence of BA raised binder resistance to sulfate attack. Köch-Steinegger corrosion indices of 1.29 and 1.27 for blended cements OPC + 10BA and OPC + 20BA, respectively, were higher than the 1.26 recorded for ordinary Portland cement (OPC). In addition, weight gain was 20.5% and volume expansion 28.5% lower in the new materials compared to OPC. The products resulting from the external sulfate-cement interaction, gypsum and ettringite, were deposited primarily in the pores present in the pastes. The conclusion drawn is that binders bearing 10% or 20% BA are, a priori, apt for use in the design and construction of cement-based elements exposed to sulfate-laden environments.
Introduction
Growing concern in civil and building construction around concrete structure durability has spurred related industrial and technological development and progress in recent decades [1]. The second-most frequent (after corrosion) pathology that shortens concrete structure service life, and one of the most aggressive, is external sulfate attack (ESA) [2]. The resulting decay is governed by complex and, at this time, poorly understood physical, mechanical, and chemical processes [3]. Sulfate attack involves essentially four processes: transport, chemical reactions, expansive forces, and mechanical response [4].
Chemically speaking, ESA results from the reaction between sulfate ions and the hydrated (portlandite, CH; monosulfoaluminate, C₄ASH₁₂) and unhydrated (C₃A) phases in concrete cementitious matrices, yielding gypsum (CSH₂), ettringite (C₆AS₃H₃₂), and sodium hydroxide (NaOH). The third, NaOH, forms in the presence of alkaline sulfates (Na₂SO₄). Because the volume of the resulting end products is 1.2- to 2.2-fold greater than that of the starting reagents [5], the matrix expands and cracks, increasing permeability and water ingress in structures, and consequently the rate of decay. A second effect of ESA is the steady loss of strength, weight, and cohesion in cement hydration products [6].
From the perspective of durability, the technical implications of using supplementary cementitious materials (SCMs) in cement manufacture can be summarized as: (i) lower reactivity, resulting in less heat of hydration; (ii) lower aluminate phase (C₃A and C₄AF) content due to the dilution effect and the presence of the CH needed for the pozzolanic reaction; and (iii) pore system refinement resulting from that reaction (between the SCMs and CH), because the C-S-H gels generated settle in the pores, enhancing concrete impermeability [7][8][9]. In addition to these favorable technical effects, SCM use is associated with social and environmental benefits, including the reduction in natural resource deployment and the furtherance of progress in the pursuit of alternative SCMs drawn from industrial waste. The latter contributes to the institution of circular economy principles in construction and compliance with cement industry environmental commitments (lowering CO₂ emissions and energy consumption, among others) [10].
Scientific community sights are presently trained on assessing agroforestry waste as a possible source of SCMs, given that 140 × 10⁹ tonnes of biomass waste are generated yearly [11]. The focus in that line of research has been on waste whose origin lies in: (i) agroforestry biomass consisting primarily of bagasse, rice, and to a lesser extent bamboo ash, laboratory-calcined at different temperatures [12][13][14]; and (ii) biomass ash or biomass bottom ash from heat and/or power plants [15][16][17], ~10 × 10⁶ tonnes of which are generated yearly [18]. Research efforts in connection with the latter have consisted of analyzing the effect of using biomass ash (BA) on the mechanical characteristics of the new mortars. The findings vary depending on the origin and nature of the waste, in addition to the replacement ratio [16,19,20].
Very few papers have been published, however, on the behaviour of the new eco-cements in aggressive environments (chlorides, carbonation, freeze-thaw), although replacement ratios of over 10% BA have been shown to lower resistance [21]. Blending ordinary Portland cement (OPC) with up to 20% BA has nonetheless been reported to prompt water sorptivity and a decline in swelling- and shrinkage-induced variations in volume [22]. Only one paper has been published to date on sulfate resistance in cements bearing this waste. Its authors, Modolo et al. [23], who studied the replacement of 20% to 100% of the calcite in mortars with forestry-based BA (primarily eucalyptus) from a biomass-fired power plant, reported that compressive strength declined and surface cracks appeared on samples exposed to a 0.1 M Na₂SO₄ solution for 1 year.
Against that backdrop, the present study aims to provide scientific-technical insight into the effect of blending binders with 10% or 20% bottom ash from biomass-fired power plants on sulfate resistance in the resulting eco-cements. The mechanical behaviour, porosity, and soundness of cement pastes made with new blended cements exposed to aggressive environments for different times, and the respective microstructural changes, are explored with mercury intrusion porosimetry (MIP), X-ray diffraction (XRD), and scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM/EDX) analyses.
Materials
The biomass ash (BA) used in this study was sourced from a Spanish electric power plant fired with non-woody + woody (eucalyptus, fruit tree, pine, etc.) biomass. The waste was collected randomly in situ at three representative heights in the airtight containers in which it was stored. At the laboratory the samples were pre-conditioned (dried and ground) and analyzed for their chemical, physical, and mineralogical characteristics, as reported in earlier papers [24,25]. The specific surface was also determined and found to be 6.63 m²/g. The X-ray fluorescence-determined chemical composition of the BA wafer revealed that it contained ~66 wt% CaO + MgO + SiO2, ~13 wt% K2O, and ~5 wt% Al2O3 + Fe2O3. According to the Vassilev et al. [26] diagram, these values are indicative of type S, subtype medium acid (MA) biomass. A net 28.2% of the silica content was found to be reactive.
The X-ray diffraction findings, in turn, showed that this material had an amorphous hump across a broad range of 2θ angles.
Blends
The new cements, stirred in a high-speed power mixer to ensure uniformity, comprised OPC blended with 10% or 20% BA. These values lay within the 6% to 20% range for cement type II/A and 11% to 35% for cement type IV/A stipulated in the aforementioned standard EN 197-1 [27]. The physical, mechanical, and chemical properties of the new cements given in Table 1 show that irrespective of the replacement ratio, they met all the requirements laid down in EN 197-1 [27] for ordinary cements.
Method
The OPC, OPC + 10BA, and OPC + 20BA pastes were mixed with deionized water at a water/cement ratio of 0.5 to prepare 1 × 1 × 6 cm prismatic specimens (further to the Köch-Steinegger method), 12 each per mix, medium (sulfates or water), and exposure time. They were demolded after 24 h and subsequently cured for 21 days at 100% relative humidity and a temperature of 20 ± 1 °C (consistent with the aforementioned Köch-Steinegger procedure). Groups of 12 specimens were then soaked in an aggressive 0.3 M sodium sulfate solution (4.4 wt% Na2SO4, at a liquid/solid volume ratio of 22) or in deionized water as the reference, at 20 °C for 14 days, 56 days, 90 days, or 180 days. Known as the Köch-Steinegger method [28,29], this procedure is deemed optimal for assessing blended cement resistance to this aggressive medium because it simultaneously monitors the pozzolanic reaction and assesses most of its benefits [30].
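As a quick arithmetic check on the solution strength quoted above, the sketch below converts 0.3 mol/L Na2SO4 to a mass percentage; the solution densities tried are assumptions for illustration, not values from the study, and the resulting ~4.1-4.3 wt% is close to the 4.4 wt% stated.

```python
# Sanity check: 0.3 mol/L Na2SO4 expressed as wt%. The molar mass is a
# standard value; the solution densities are assumed for illustration.
MOLAR_MASS_NA2SO4 = 142.04                   # g/mol
grams_per_litre = 0.3 * MOLAR_MASS_NA2SO4    # ~42.6 g Na2SO4 per litre
for density_g_per_ml in (1.00, 1.03):        # assumed solution densities
    wt_percent = grams_per_litre / (density_g_per_ml * 1000.0) * 100.0
    print(f"density {density_g_per_ml} g/mL -> {wt_percent:.1f} wt%")
```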
At each test age and prior to characterization, the specimens were washed three times in deionized water to eliminate any excess salts and dried to a constant weight in a laboratory kiln at 40 °C.
Specimen flexural strength and variation in weight and length were also determined at each exposure time, and pore size distribution was analyzed in the 56 days and 180 days specimens. These microstructural studies were supplemented with XRD and SEM/EDX identification of the new components formed.
Instrumental Techniques
Sample mineralogy was determined on a Bruker AXS D8 (Bruker, Karlsruhe, Germany) X-ray powder diffractometer fitted with a 3 kW X-ray generator with a copper anode (Cu Kα1,2) and a tungsten cathode. Scans were recorded between 2θ angles of 5° and 60° at a rate of 2°/min. The generator tube operated at standard 40 kV, 30 mA settings.
The Hitachi S4800 (Hitachi, Tokyo, Japan) electron microscope used to study the morphology of the blended cement exposed to the aggressive medium for 180 days was coupled to a Bruker Nano XFlash 5030 silicon drift detector for EDX determination of the chemical composition of the samples.
Porosity was quantified on a Micromeritics Autopore IV 9500 (Micromeritics, Norcross, GA, USA) mercury porosimeter designed to measure pore diameters of 0.006 to 175 µm and operate at pressures of up to 33,000 psi (227.5 MPa) [31].
Mean pore size (∅med) was found with Equation (1), from V, the median pore diameter by volume, and A, the median pore diameter by area. Mechanical strength was found on an Ibertest Autotest 200/10-SW (Ibertest, Madrid, Spain) test frame fitted with an adapter for 1 × 1 × 6 cm specimens.
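Equation (1) itself is not legible in this excerpt, so the sketch below computes the conventional MIP "average pore diameter (4V/A)" reported by Autopore-type instruments; this cylindrical-pore relation is an assumption standing in for the paper's exact expression, and the inputs are illustrative rather than measured values.

```python
# Hedged sketch: conventional MIP average pore diameter, d = 4V/A, assuming
# cylindrical pores. Here V = total intrusion volume and A = total pore
# area; this stands in for the paper's Equation (1), not reproduced here.

def mean_pore_diameter_um(v_cm3_per_g: float, a_m2_per_g: float) -> float:
    v_m3_per_g = v_cm3_per_g * 1e-6              # cm^3/g -> m^3/g
    return 4.0 * v_m3_per_g / a_m2_per_g * 1e6   # result in micrometres

# Illustrative values only (not taken from the study):
print(f"{mean_pore_diameter_um(0.10, 10.0):.3f} um")   # -> 0.040 um
```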
Mechanical Properties
According to the data graphed in Figure 1 for sulfate-soaked specimen flexural (FS) and compressive (CS) strength, the latter rose in all of the blends analyzed up to 90 days and then flattened until the end of the 180 days test period. Although flexural strength also rose in specimens soaked for up to 90 days in the OPC + 10 BA and OPC + 20 BA blends, the patterns subsequently diverged, with strength declining in the 10% mix after that time. In OPC, flexural strength was constant until day 56 and declined thereafter. This behavior was associated with: (i) a change in cement pore size distribution; (ii) waste pozzolanicity [25], resulting in more elastic and flexible hydration products [32]; (iii) a greater degree of cement hydration [32]; and (iv) initial prestressing induced by the formation of expansive compounds prior to the onset of microcracking [30]. Figure 1 also shows that the impact of chemical attack was greater on flexural than on compressive strength, as reported by Köch-Steinegger, who used this property to assess the "corrosion index" or resistance to this degenerative process [28].
Sulfate Resistance
Sulfate resistance was determined with the expression for corrosion index proposed by Köch-Steinegger (Equation (2)):

$$CI = \frac{F_{SS}}{F_{SW}} \quad (2)$$

where CI is the corrosion index, F_SS is the flexural strength at aggressive sulfate exposure time i, and F_SW is the flexural strength of water-soaked specimens at the same exposure time.
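A minimal sketch of the index in Equation (2) follows; the strength values are invented for illustration, and the 0.70 acceptance threshold at 56 days is the criterion discussed in the next paragraph.

```python
# Koch-Steinegger corrosion index: ratio of flexural strength after sulfate
# soaking to that of companion water-soaked specimens at the same age.

def corrosion_index(fs_sulfate_mpa: float, fs_water_mpa: float) -> float:
    return fs_sulfate_mpa / fs_water_mpa

ci_56d = corrosion_index(fs_sulfate_mpa=8.9, fs_water_mpa=7.0)  # invented values
verdict = "sulfate-resistant" if ci_56d >= 0.70 else "not sulfate-resistant"
print(f"CI(56 d) = {ci_56d:.2f} -> {verdict}")
```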
Further to the corrosion indices found (Table 2) for the pastes studied at different exposure times, the new cements were more resistant to the aggressive agent than OPC. In the Köch-Steinegger method, pastes are deemed sulfate-resistant when their 56 days corrosion index is greater than or equal to 0.70. In keeping with this criterion, the cementitious matrices designed with the new cements behaved satisfactorily when exposed to Na2SO4, with OPC + 10 BA exhibiting a CI of 1.29 and OPC + 20 BA of 1.27. The CI for OPC declined at longer exposure (~15%), whereas the index for the BA-bearing cements remained constant until 90 days (10% material) or rose (by ~6.2% at the 20% replacement ratio). The 11.5% to 13.1% decline recorded after that age denoted the onset of decay.
Pore Structure
The effect of 56 and 180 days sulfate soaking on the pore systems in the cement pastes analyzed is graphed in Figure 2, which shows that total porosity was greater in the BA cements than in OPC due to the intrinsic impact of the new addition on pore structure. This effect has been reported in earlier studies on cement pastes containing other pozzolanic materials, such as slag, fly ash, or silica fume [36,37]. Total 56 days porosity (20% to 24%), for instance, was similar to the values observed by Frías et al. [33] for matrices bearing 5% or 15% silico-manganese slag, or by Sánchez de Rojas et al. [32] for cementitious systems with 20% masonry product sludge.
Total porosity declined with exposure time in all of the cements, by 6.8% in OPC, 9.9% in OPC + 10 BA, and 10.7% in OPC + 20 BA. The steeper decline in the BA-bearing cements was associated with their pozzolanicity [19] and the precipitation of expansive compounds such as gypsum and ettringite in the pore system [38]. The combined effect of those two developments was lower permeability and a delay in decay [37,39], as attested to by the corrosion index values listed in Table 2. Further to Figure 2, between days 56 and 180 pore size declined with longer exposure time, by 13.0% in OPC, 13.5% in OPC + 10 BA, and 19.2% in OPC + 20 BA. This finding was related to the inside-pore precipitation of the new compounds and to BA pozzolanicity, which improved the matrix pore structure by reducing the volume of macropores and raising the fractions of medium (0.01 µm to 0.05 µm) and small (0.002 µm < ∅ < 0.01 µm) capillary pores.
Soaking-Induced Mass and Size Changes
Weight was observed to rise over time in all of the specimens soaked in sulfates (Figure 3), 1.7-fold in OPC, 2.2-fold in OPC + 10 BA, and 2.7-fold in OPC + 20 BA cements.
Weight gain induced by sulfate attack was more accentuated during the early weeks of hydration (t < 56 days), after which it tapered due to declining cement paste permeability. The latter was a direct result of secondary ettringite and gypsum formation prompted by the interaction between cement hydrated phases and external sulfate ions entering the pore system [40]. Weight gain was consistently greater in the BA-bearing than in the BA-free cements at all of the times studied, perhaps due to the higher porosity in the former.
The variation in length with time of exposure to Na2SO4 in the cements analyzed (plotted in Figure 4) revealed that weight gain (Figure 3) was attendant upon expansive product formation during sulfate penetration [41]. Inasmuch as the products occupied more space than the reagents, such crystallization was followed by expansion, cracking, and surface spalling. The curves in Figure 4 followed a pattern observed by other authors, who divided expansion into two periods: an initial or induction period characterized by "steady, slow" or "progressive" expansion (up to ~56 days), followed by a stage with a "sharply" or "rapidly" increasing rate of expansion that proceeded until the sample disintegrated entirely [42,43].
Finally, the graph also indicates the expansion was less intense in the blends containing BA than in the reference OPC, by 7.1% in OPC + 10 BA and 28.5% in OPC + 20 BA. This behaviour might be related to the presence of larger pores and greater connectivity in the blended materials, which would favor first-stage ion transport toward macropores and concomitant crystal formation in their more thermodynamically stable interiors [44], ultimately mitigating expansion. As Ikumi et al. [38] contended, the greater total porosity in the new pastes may have a beneficial long-term effect by enhancing these materials' capacity to accommodate precipitates generated during exposure.
Composition and Microstructural Analysis
The 180-day XRD patterns for OPC, OPC + 10 BA, and OPC + 20 BA soaked in water or Na2SO4 (reproduced in Figure 5) confirmed the presence of ettringite and portlandite, the hydrated phases normally identified in cements. Unreacted BA (i.e., quartz) and calcite, resulting from the carbonate formed when carbon dioxide dissolved in the solution, were also detected in the blended pastes [45]. The reflections for ettringite and gypsum, the two products primarily associated with sulfate attack, were more intense in the samples exposed to sulfates for 180 days. These findings were consistent with earlier reports for Portland cement pastes [46], pastes bearing blast furnace slag [47], and cements prepared with granite sludge [29].

The secondary electron (SE) SEM micrographs for the OPC and OPC + 20 BA pastes soaked in Na2SO4 for 180 days, reproduced in Figure 6a,b, attest to the microcracks resulting from the generation of internal stress (σlocal) higher than the tensile strength (σmacro) of the matrix. Such fissures appeared in stages 3 and 4 of the mechanism proposed by Santhanam et al. [42] to describe sodium sulfate attack in mortars. As noted earlier, these circumstances are the result of the expansive nature of gypsum and ettringite, the most prominent products of sulfate attack, characterized by expansion factors ranging from 1.25 to 2.76 [4]. Gypsum deposits resulting from the portlandite-sodium sulfate reaction are visible in Figure 6c,d, where they can be seen to have clustered primarily inside pores (Figure 6d), the sites most favorable to nucleation [42,48]. Microcracking originated in this region and subsequently extended across the matrix. Figure 6e,f, in turn, attest to the presence of ettringite, primarily in the form of elongated needles [49,50]. The micrographs provide support for the premise that ettringite forms primarily inside pores [51]. In addition to cracks, these are the sites at which it normally crystallizes, given the favorable pressure conditions and the presence of the ions required for the needles to form there [52,53].
Conclusions
The following conclusions can be drawn from this study:
- The resistance of BA-bearing cement pastes to sulfate attack rises with the replacement ratio. In the 180 days materials containing 20 wt% of the addition, resistance was 6.7% higher than in the OPC of the same age.
- Because the cement pastes studied, irrespective of replacement ratio, exhibited a 56 days Köch-Steinegger corrosion index of >0.70, they may be deemed sulfate resistant at the concentrations and other experimental conditions established in this study.
- The weight and volume gains induced by sulfate soaking were lower in OPC + 20 BA than in OPC pastes, the former by 20.5% and the latter by 28.5%.
- The microcracking observed in the pastes analyzed is attributable to the expansive properties of the products of sulfate attack.
- Sulfate and sodium ingress into the paste microstructure translates primarily into inside-pore ettringite formation and gypsum plate precipitation, densifying the cementitious matrix.
- Gypsum and ettringite form primarily within the pore system, inducing its refinement.
"year": 2020,
"sha1": "5a8242027048f34f88dabd95b65445641b94587b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/24/8982/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a7d2974a0693d2c9f5730f8bb898a1b719d97e82",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Natural Convection of Ternary Hybrid Nanofluid in a Differential-Heated Enclosure with Non-Uniform Heating Wall
In the field of convective energy transfer, natural convection is one of the most studied phenomena, with applications ranging from heat exchangers and geothermal energy systems to hybrid nanofluids. The aim of this paper is to scrutinize the free convection of a ternary hybrid nanosuspension (Al2O3-Ag-CuO/water ternary hybrid nanofluid) in an enclosure with a linearly warming side border. The ternary hybrid nanosuspension motion and energy transfer have been modelled by partial differential equations (PDEs) with appropriate boundary conditions by the single-phase nanofluid model with the Boussinesq approximation. The finite element approach is applied to resolve the control PDEs after transforming them into a dimensionless view. The impact of significant characteristics such as the nanoparticles’ volume fraction, Rayleigh number, and linearly heating temperature constant on the flow and thermal patterns combined with the Nusselt number has been investigated and analyzed using streamlines, isotherms, and other suitable patterns. The performed analysis has shown that the addition of a third kind of nanomaterial allows for intensifying the energy transport within the closed cavity. The transition between uniform heating to non-uniform heating of the left vertical wall characterizes the heat transfer degradation due to a reduction of the heat energy output from this heated wall.
Introduction
During the last several years, there has been a lot of focus on the occurrence of natural convection in enclosures. This phenomenon has attracted attention primarily because it frequently affects thermal performance in a wide range of fundamental and industrial applications, including those involving heat exchangers, chemical reactors, solar collectors, fire systems, and the electronics, chemical, and power energy apparatus. Several thermal engineering applications use natural convection to remove heat without the aid of outside motion. Many heat transfer applications, including solar collectors, electronic device cooling, heat exchangers, and energy storage tanks, among others, make use of the benefits of natural convection [1][2][3][4][5].
Thermal systems' performance and compactness are primarily constrained by the weak heat conductivity of common energy transport liquids like ethylene-glycol (EG) or water. In recent years, a novel method for enhancing heat transmission that uses nano-sized additives located in a host liquid, known as nanosuspension, has undergone substantial research [6][7][8]. It is also significant to highlight that new kinds of nanofluids, referred to as hybrid nanosuspension, can be created using improved attributes. Different nano additives are disseminated in a host liquid to create hybrid nanosuspension. By balancing the benefits and drawbacks of individual nanoparticles, hybrid nanofluids can produce designed liquids with modified thermal and chemical attributes [9][10][11].
The slow heat transmission between the fluid and the walls is one of natural convection's key drawbacks. As a result, strategies for increasing the rate of heat transfer have been developed. These approaches include creating cavities with complicated geometries, employing cavities filled with porous media, adding fins to the wall(s), using magnetic fields or using nano and hybrid nanofluids. The thermal convection of a hybrid Al 2 O 3 -Cu/H 2 O nanosuspension was studied by Mehryan et al. [12] in a heated porous enclosure. An entropy generation study was provided by Tayebi and Chamkha [13] for a hybrid nanofluid moving in an MHD thermal convection motion via an enclosure with a corrugated conducting block. The conjugate thermal convective motion of a Ag-MgO/H 2 O nanosuspension was investigated by Ghalambaz et al. [14] in an enclosure. In a hybrid nanosuspension area, Chamkha et al. [15] investigated the MHD thermogravitational energy transfer of a localised heater/cooler. In a chamber saturated with a hybrid nanosuspension and a solid cylinder, Tayebi and Chamkha [16] analysed the entropy generated as an outcome of MHD thermal convective flow. Free convection and entropy generation were investigated by Tayebi et al. [17] in a hybrid nanosuspension saturated-elliptical chamber that generates or absorbs heat from within. Using an irregular solid circular cylinder, Tayebi and Chamkha [18] examined the MHD thermal convective energy transport of a hybrid nanosuspension in a chamber. Nanofluid natural convection in a square cavity subjected to thermal radiation was investigated by Reddy and Sreedevi [19] based on a model developed by Buongiorno. In a region with an elliptical barrier, Belhaj and Ben-Beya [20] reported a thermal investigation of thermal convection using hybrid nanofluids and a varying magnetic field. Nabwey et al. [21] used a hybrid nanofluid with a square obstruction to investigate the radiative influence on transient MHD thermal convection circulation in an inclined irregular porous chamber.
In addition, ternary hybrid nanofluids have lately become the focus of study to further accelerate the rate of heat transmission. For a porous prismatic chamber with two moving heated obstacles, Shao et al. [22] investigated the natural convection of ternary hybrid nanofluids. Employing changeable diffusion and a non-Fourier's notion, Algehyne et al. [23] established a computational approach to ternary hybrid nanofluid flow. Using ternary-hybrid nanofluids, Elnaqeeb et al. [24] studied the effects of suction and dual-stretching on the three-dimensional motion of water-carrying nano additives of varying geometries and densities. Numerical simulations of ternary nanosuspension circulation with different slip and heat jump restrictions were published by Alshahrani et al. [25]. Convective energy transport in a ternary nanofluid that is moving over a stretching plate was theoretically explored by Manjunatha et al. [26]. Other related recent works can be found in [27][28][29][30][31][32][33][34][35][36][37][38][39][40][41].
In light of the numerous applications in engineering and technology, as well as the literature mentioned above, in this paper the thermal convection of a ternary hybrid nanosuspension (Al2O3-Ag-CuO/water) in an enclosure is investigated. Linear heating of the side wall is considered. The PDEs controlling the liquid circulation and energy transport, with suitable boundary conditions, are described using the single-phase nanofluid model. The finite element technique based on the COMSOL Multiphysics simulation software is applied to resolve the control PDEs after transforming them into a dimensionless form. The effect of governing characteristics, such as the nanoparticle concentration, Rayleigh number, and linearly heating temperature constant, on the velocity and temperature fields (using streamlines and isotherms) and on the mean Nusselt number at the heated border has been investigated and analyzed.
Mathematical Analysis
We consider a square enclosure of size L for the analysis, as presented in Figure 1. The vertical border of the left side is linearly warmed, with a temperature pattern of T*(0, y*) = T_h − (T_h − T_c) m y*/L, and the right vertical border is cooled with a temperature of T*(L, y*) = T_c. The other two (horizontal) walls are well-insulated. For this investigation, ternary hybrid nanofluid Al2O3-Ag-CuO/water is considered as the working fluid. In addition, there is no inner thermal production, and the present research disregards the effects of radiation and viscous dissipation. The operating fluid is a Newtonian fluid that is incompressible and has constant characteristics. The Oberbeck-Boussinesq equations have been applied to model the issue for 2D steady and laminar circulation situations. Changes in the density of the nanofluid are taken into account using the Boussinesq approximation. The host liquid (water) and the nano additives are believed to be in heat equilibrium. The continuity, momentum, and energy equations read [42,43]:

$$\frac{\partial u^*}{\partial x^*} + \frac{\partial v^*}{\partial y^*} = 0, \quad (1)$$

$$\rho_{thnf}\left(u^* \frac{\partial u^*}{\partial x^*} + v^* \frac{\partial u^*}{\partial y^*}\right) = -\frac{\partial p^*}{\partial x^*} + \mu_{thnf}\left(\frac{\partial^2 u^*}{\partial x^{*2}} + \frac{\partial^2 u^*}{\partial y^{*2}}\right), \quad (2)$$

$$\rho_{thnf}\left(u^* \frac{\partial v^*}{\partial x^*} + v^* \frac{\partial v^*}{\partial y^*}\right) = -\frac{\partial p^*}{\partial y^*} + \mu_{thnf}\left(\frac{\partial^2 v^*}{\partial x^{*2}} + \frac{\partial^2 v^*}{\partial y^{*2}}\right) + (\rho\beta)_{thnf}\, g\,(T^* - T_c), \quad (3)$$

$$(\rho c_p)_{thnf}\left(u^* \frac{\partial T^*}{\partial x^*} + v^* \frac{\partial T^*}{\partial y^*}\right) = \kappa_{thnf}\left(\frac{\partial^2 T^*}{\partial x^{*2}} + \frac{\partial^2 T^*}{\partial y^{*2}}\right). \quad (4)$$

The physical attributes of the ternary nanosuspension, including density ρ_thnf, viscosity µ_thnf, thermal volume capacity (ρc_p)_thnf, volume heat expansion parameter (ρβ)_thnf, and heat conductivity κ_thnf, are shown in Tables 1 and 2. The additional border restrictions are the no-slip velocity conditions on all four walls, together with the thermal conditions stated above.
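Because Tables 1 and 2 are not reproduced in this excerpt, the sketch below evaluates the ternary-suspension properties with generic single-phase-model mixture rules (volume-weighted averages plus the Brinkman viscosity model); these rules and the particle densities are typical literature choices, not necessarily the exact correlations used in the paper.

```python
# Hedged sketch of ternary hybrid nanofluid property evaluation under the
# single-phase model. Mixture rules and particle densities are assumptions.

def mixture_density(rho_f, rho_particles, phi):
    """rho_thnf = (1 - sum(phi)) * rho_f + sum(phi_i * rho_i)."""
    return (1.0 - sum(phi)) * rho_f + sum(p * r for p, r in zip(phi, rho_particles))

def brinkman_viscosity(mu_f, phi):
    """mu_thnf = mu_f / (1 - phi_total)^2.5 (Brinkman model)."""
    return mu_f / (1.0 - sum(phi)) ** 2.5

phi = [0.04, 0.04, 0.04]                       # Al2O3, Ag, CuO volume fractions
rho_p = [3970.0, 10500.0, 6500.0]              # typical densities, kg/m^3
rho_thnf = mixture_density(997.1, rho_p, phi)  # water host at ~25 C
mu_thnf = brinkman_viscosity(8.9e-4, phi)
print(f"rho_thnf ~ {rho_thnf:.0f} kg/m^3, mu_thnf ~ {mu_thnf:.2e} Pa s")
```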
Utilizing the transformations x = x*/L, y = y*/L, u = u*L/α_f, v = v*L/α_f, p = p*L²/(ρ_f α_f²), and T = (T* − T_c)/(T_h − T_c) in the above Equations (1)-(4), we get the dimensionless governing Equations (7)-(10). The corresponding boundary conditions (11) are T(0, y) = 1 − m y with u = v = 0 on the left border, T(1, y) = 0 with u = v = 0 on the right border, and ∂T/∂y = 0 with u = v = 0 on the insulated horizontal borders. The quantity of practical interest in this research is the average Nusselt number along the heated wall, which is given as

$$\overline{Nu} = -\frac{\kappa_{thnf}}{\kappa_f} \int_0^1 \left. \frac{\partial T}{\partial x} \right|_{x=0} dy.$$
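To make the wall integral concrete, the sketch below evaluates the average Nusselt number by trapezoid-rule integration of the dimensionless wall temperature gradient, following the definition just given; the gradient profile used here is a fabricated placeholder, since in practice it comes from the finite element solution at x = 0.

```python
# Hedged sketch: average Nusselt number as the trapezoid-rule integral of
# the wall temperature gradient. The gradient profile below is invented.
import numpy as np

def average_nusselt(y, dtheta_dx_wall, k_ratio):
    """Nu_avg = -(k_thnf/k_f) * integral over y of (dT/dx) at x = 0."""
    f = -k_ratio * dtheta_dx_wall
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

y = np.linspace(0.0, 1.0, 101)
dtheta_dx = -(1.0 + 0.5 * np.sin(np.pi * y))   # placeholder wall gradient
print(f"Nu_avg ~ {average_nusselt(y, dtheta_dx, k_ratio=1.3):.2f}")
```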
Numerical Solution
The non-dimensional Equations (7)-(10) and the boundary conditions provided in (11) are solved using the COMSOL Multiphysics simulation software, adopting the Galerkin finite element technique. To achieve an optimum balance between numerical accuracy and associated computational cost, the best mesh type is determined by examining a number of elements that range from extremely coarse to extremely fine with the average Nusselt number. Finally, a 16,946-element extra fine mesh is selected, which has shown to be satisfactory due to an insignificant change in the average Nusselt number. This grid refinement study, shown in Table 3, confirms that the mesh statistics provided in Table 4 are optimal in the context of balancing accuracy and computational time. The computational mesh chosen after the grid refinement study is presented in Figure 2. Further, to confirm the accuracy of the computational data, the mean Nu of this research for various values of Ra when Pr = 0.71, φ1 = 0, φ2 = 0, φ3 = 0, and m = 0 is compared with [45,46] in Figure 3. Also, Figure 4 presents an excellent agreement between the isotherms of the current study and those of [46]. This authenticates the validity of the present model throughout this research work.
Results and Discussion
In this analysis, we kept Ra = 1000, Pr = 6.2, CuO nanoparticles volume fraction φ1 = 0.04, Ag nanoparticles volume fraction φ2 = 0.04, Al2O3 nanoparticles volume fraction φ3 = 0.04, m = 0 (constant heating), and m = 1 (linear heating) fixed throughout the study unless otherwise specified except for the concerned parameter under analysis. Figure 5 shows streamlines and isotherms within the enclosure for various Ra and the working liquid. For the considered case, the temperature of the left vertical border changes from 1.0 at the bottom wall to 0.0 at the upper one. Taking into account this non-uniform warming of the left vertical border, a secondary recirculation can be found in the upper part close to the left corner, while a global circulation is placed in the central part. The appearance of the weak eddy in the upper corner can be explained by the formation of the low wall temperature in this part while the temperature in the major convective cell is high, and as a result, one can find a formation of counter-clockwise circulation in this part. These two cells' circulation structure is formed within the enclosure regardless of the Rayleigh numbers. For Ra = 10³, one can find a formation of streamlines like concentric circles and isotherms, illustrating propagation of a high temperature from the lower left corner where a high temperature is maintained. Moreover, the heat transfer mode is heat conduction, and the propagation rate of heat along the horizontal direction is low compared to the vertical direction due to the non-uniform wall temperature.
Taking into account such weak circulation, the shape of the considered eddies is circle-like. It should be noted that the shape of the streamlines does not depend on the nanoparticles' addition due to the low Rayleigh number and the domination of heat conduction. Whilst isotherms have some differences, an introduction of nanoparticles increases the effective thermal conductivity, and as a result, heating/cooling of the cavity occurs more intensively in the case of hybrid nanofluid (see differences in red and green lines). A growth in Ra (see Figure 5(b1,b2)) results in a weak modification of the major convective cell structure. One can find a weak elongation of the concentric circles along the cavity's secondary diagonal due to a reduction of the boundary layer's thickness near the isothermal walls. The isotherms for this Rayleigh number reflect the appearance of a heat plume close to the left vertical border where an ascending fluid flow is formed. In this case, one can find differences not only in the temperature fields but also in the flow structure for the various working fluids. Thus, the addition of nanoparticles leads to a not-so-essential intensification of the flow, and as a result, the streamlines have very weak elongation, and isotherms reflect the not-so-essential inner stratification zone for the host liquid. This weak modification can be explained by the growth, with nanoparticle concentration, of not only the effective thermal conductivity but also the effective viscosity. As a result, weak circulation occurs.
Further growth of the buoyancy force characterizes an elongation of the streamlines along the middle horizontal line, and the thermal boundary layer close to the left vertical boundary becomes thinner, while the size of the secondary recirculation increases due to the more essential temperature gradient formed in this zone. In the case of Ra = 10⁶ (see Figure 5(d1,d2)), one can find an essential modification of the flow structure with a dislocation of the convective cell close to the left vertical border and a formation of thin dynamic boundary layers near the vertical borders, whilst the central zone is characterized by the formation of a stratified temperature field. Taking into account the formed temperature fields, it is possible to conclude that the growth of the buoyancy force strength increases the average cavity temperature, and the size of the secondary recirculation also rises. Simultaneously, some differences can be found in isolines for pure liquid and hybrid nanofluid due to differences in effective thermal conductivity and dynamic viscosity. The different thicknesses of the boundary layers near the isothermal walls characterize the different sizes of the secondary eddy, located in the upper left corner, and of the major convective cell core.
For the uniform warming of the left vertical border (T = 1), the growth of Ra presented in Figure 6 reflects a formation of one clockwise circulation due to warming from the left wall and cooling from the right. These flow structures and temperature patterns are similar to the same fields in the case of a pure fluid without nanoparticles [46].
A rise in Ra illustrates a modification of the flow structures from the concentric circles to the circulation with two convective cells placed near the vertical walls, while temperature fields illustrate a strengthening of convective energy transport with the formation of a stratified temperature zone in the central part, which heats from the upper part to the bottom one.
At the same time, the introduction of nanoparticles reflects less intensive circulation and not-so-thin boundary layers near the isothermal vertical walls compared to the pure fluid. The reason for such differences was explained in the case of Figure 5.
Figures 7-10 demonstrate the mean Nu for different governing parameters. Thus, one can find growth of the energy transport strength with Ra in Figure 7 due to the formation of more intensive circulation with a larger magnitude of the buoyancy force. An inclusion of nano-additives to the host liquid (water) intensifies the energy transport, and this intensification becomes more essential for high Ra. Such an intensification can be explained by the more essential contribution of the thermal conductivity and not the dynamic viscosity, because growth of the viscosity characterizes a reduction of the flow intensity.
At the same time, the introduction of nanoparticles reflects less intensive circulation and not-so-thin boundary layers near the isothermal vertical walls compared to the pure fluid. The reason for such differences was explained in the case of Figure 5.
Figures 7-10 demonstrate the mean Nu for different governing parameters. Thus, one can find growth of the energy transport strength with Ra in Figure 7 due to the formation of more intensive circulation with a larger magnitude of the buoyancy force. An inclusion of nano-additives to the host liquid (water) intensifies the energy transport, and this intensification becomes more essential for high Ra. Such an intensification can be explained by the more essential contribution of the thermal conductivity and not the dynamic viscosity because growth of the viscosity characterizes a reduction of the flow intensity. A transition between the uniform and non-uniform wall temperature profiles characterizes the heat transfer strength augmentation, while the influence of nanoparticles and the Rayleigh number is the same regardless of the character of the left border temperature. The reduction of the average Nusselt number in the case of non-uniform heating compared to the uniform case can be explained by a decrease in the total heat flux from the heated wall. A transition between the uniform and non-uniform wall temperature profiles characterizes the heat transfer strength augmentation, while the influence of nanoparticles and the Rayleigh number is the same regardless of the character of the left border temperature. The reduction of the average Nusselt number in the case of non-uniform heating compared to the uniform case can be explained by a decrease in the total heat flux from the heated wall. Figure 8 shows the energy transport enhancement for the nanofluid with the addition of nanoparticles of various kinds. These results characterize the most essential heat transfer enhancement for the ternary nanosuspension. Such an intensification can be explained by the growth of the effective thermal conductivity of the nanosuspension. A rise in the parameter m reflects a diminution of the mean Nu due to a reduction of the heat energy generated by the warmed left border. Figure 9 shows the energy transfer augmentation with alumina nano-additives for Ra = 10 3 . As mentioned previously, the rise of m reflects a decrease in heat transfer strength. Therefore, the major energy transport augmentation can be achieved for high φ3 and m = 0. Figure 10 illustrates the energy transport enhancement with the Rayleigh number and parameter m. It should be noted that there is no difference between the different considered hybrid nanofluids for the average Nusselt number, namely, Al 2 O 3 -CuO/water, Al 2 O 3 -Ag/water, or Ag-CuO/water. However, the addition of a third kind of nanomaterial for the case of ternary nanosuspension leads to a rise in the energy transport strength (see Figure 8). Table 5 shows a comparison of the average Nusselt number between pure water and ternary hybrid nanofluid for uniform and non-uniform heating. One can find an essential intensification of the heat transfer with a hybrid nanofluid, and this intensification is more essential for low Rayleigh numbers when heat conduction is the dominant heat transfer mechanism.
Conclusions
The natural convection of a ternary hybrid nanosuspension in a differential-heated enclosure under non-uniform heating from the left vertical border was studied. The single-phase nanofluid model was used for analysis. The performed analysis showed that:
− the addition of a third kind of nanomaterial allows for intensifying the energy transport within the closed chamber. This intensification is more essential in the case of a low Rayleigh number (it achieves 30% for Ra = 10³) when heat conduction is a dominant heat transfer regime;
− a transition from the uniform heating to non-uniform heating of the left vertical border (from m = 0 to m = 1) characterizes the heat transfer degradation due to a reduction of the total heat energy output from this warmed border, and this difference becomes more essential with the growth of the Rayleigh number. Thus, for Ra = 10⁶, the average Nusselt number decreases by about 58% with a transition between m = 0 and m = 1;
− the presence of a non-uniformly heated left vertical wall results in the appearance of secondary recirculation close to the upper left corner. This eddy becomes wider with the Rayleigh number. Moreover, the size of this secondary eddy is larger in the case of pure water compared to the nanofluid due to the influence of effective viscosity;
− as a result, the ternary hybrid nanofluids can be used for an intensification of heat transfer in closed chambers, e.g., for cooling of heat-generating elements.

Data Availability Statement: Data is contained within the article.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "b01f73f033b1747bf0bafb68e3a6f450c9aa05e2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/mi14051049",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc85df42eda1ecc4779dd53001449a869c62f814",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
CYP1A1 methylation mediates the effect of smoking and occupational polycyclic aromatic hydrocarbons co-exposure on oxidative DNA damage among Chinese coke-oven workers
Background Multiple factors, including co-exposure between lifestyle and environmental risks, are important in susceptibility to oxidative DNA damage. However, the underlying mechanism is not fully understood. This study was undertaken to evaluate whether Cytochrome P4501A1 (CYP1A1) methylation can mediate the co-exposure effect between smoking and occupational polycyclic aromatic hydrocarbons (PAH) in the development of oxidative DNA damage. Methods We explored the associations between the smoking and occupational PAH co-exposure effect, CYP1A1 methylation, and oxidative DNA damage among 500 workers from a coke-oven plant in China. Urine biomarkers of PAH exposure (1-hydroxypyrene, 1-OHP; 2-hydroxynaphthalene, 2-NAP; 2-hydroxyfluorene, 2-FLU; and 9-hydroxyphenanthrene, 9-PHE) and a marker of oxidative DNA damage (8-hydroxy-2′-deoxyguanosine, 8-OHdG) were measured by high performance liquid chromatography. CYP1A1 methylation was measured by pyrosequencing. Finally, mediation analysis was performed to investigate whether CYP1A1 methylation mediated the smoking and occupational PAH co-exposure effect on oxidative DNA damage. Results We observed significant associations of smoking and 1-OHP co-exposure with CYP1A1 hypomethylation (OR: 1.87, 95% CI: 1.01–3.47) and high 8-OHdG (OR: 2.13, 95% CI: 1.14–3.97). There was a significant relationship between CYP1A1 hypomethylation and high 8-OHdG (1st vs. 3rd tertile = 1.58, 95% CI: 1.01–2.47, P for trend = 0.046). In addition, mediation analysis suggested CYP1A1 hypomethylation could explain 13.6% of the effect of high 8-OHdG related to smoking and 1-OHP co-exposure. Conclusions Our findings suggested that the co-exposure effect of smoking and occupational PAH could increase the risk of oxidative DNA damage by a mechanism partly involving CYP1A1 hypomethylation.
Introduction
Oxidative DNA damage induced by reactive oxygen species (ROS) plays a pivotal role in the pathogenesis of respiratory diseases such as lung cancer and asthma [1,2]. ROS, resulting from chemical compounds, the action of exogenous physical factors such as ultraviolet A, or the metabolism of cells, can induce many kinds of DNA damage, including base modifications and DNA strand breaks [3], and are generally believed to be involved in carcinogenic mechanisms [4,5]. Previous studies have shown that 8-hydroxy-2′-deoxyguanosine (8-OHdG) is a widely accepted biomarker for assessing the extent of oxidative damage to DNA [6]. Lifestyle and environmental factors, such as smoking [7][8][9] and polycyclic aromatic hydrocarbon (PAH) exposure [10,11], have been shown to relate to urine 8-OHdG levels.
Smoking has a strong effect on oxidative DNA damage, and PAH exposure is also related to oxidative DNA damage among occupational workers and the general population [11][12][13][14]. There is a dose-dependent relationship of smoking and PAH metabolites with the risk of oxidative damage to DNA [15]. However, a challenge remains to fully understand the molecular mechanism by which the co-exposure effect of lifestyle and environmental factors, such as smoking and occupational PAH, acts on oxidative DNA damage. Epigenetic modifications, for example DNA methylation, which can be influenced by lifestyle and environmental factors, may provide a possible biological link between risk factors and disease. Cytochrome P4501A1 (CYP1A1) is responsible for PAH metabolism, which participates in the metabolic processing of exogenous compounds via the excessive formation of ROS [16,17], eventually leading to oxidative DNA damage. It has also been demonstrated that CYP1A1 can be induced by PAH and that cigarette consumption can influence CYP1A1 methylation levels [18]. Increased lung cancer risk has been associated with high CYP1A1 expression and hypermethylation [19]. Therefore, it is imperative to research whether CYP1A1 methylation could mediate the effect of smoking and PAH co-exposure on the development of oxidative damage to DNA.
We hypothesized that the co-exposure effect of smoking and PAH is involved in the development of oxidative DNA damage via CYP1A1 methylation. To test this hypothesis, we carried out research to evaluate the co-exposure effect between smoking and PAH exposure on CYP1A1 methylation and oxidative damage to DNA among coke-oven workers in China, so as to estimate whether CYP1A1 methylation is responsible for the increased risk of oxidative DNA damage related to the co-exposure effect of smoking and PAH.
Study subjects
The basic demographic data were collected from a coke-oven plant in China using a cross-sectional survey in 2014. 950 workers participated in the study. We restricted our analyses to those who had worked for more than 1 year and who had not been exposed to known mutagens (for example, chemotherapy and radiotherapy) in the last three months. We excluded individuals lacking sufficient blood samples (n = 360), sufficient urine samples (n = 228), or demographic characteristics (n = 278). Thus, the final analytic sample comprised 500 participants: 389 coke-oven workers with prolonged PAH exposure and 111 water-treatment workers from the same plant without workshop PAH exposure.
Trained interviewers collected the information regarding sex, age, years of working, education, smoking and drinking status, central heating and occupational exposure history by a pre-tested questionnaire.
Smokers were defined as those who smoked at least 1 cigarette per day for more than six consecutive months; drinkers were defined as those who drank at least once a week on average for more than six consecutive months. After signing the informed consent, every participant provided venous blood (5 mL) and morning urine (20 mL). The study was approved by the Medical Ethics Committee of Shanxi Medical University.
Urine 8-OHdG measurement
Urine 8-OHdG was measured by HPLC with electrochemical detection, following Yuan et al. [23]. In brief, about 2 mL of urine supernatant was prepared, eluted twice with 0.1 mol/L KH2PO4 (pH 6.0), evaporated, and dissolved in 1 mL KH2PO4. Standard curves of urine 8-OHdG were run daily to identify and quantify the concentration of urine samples. Valid urine concentrations of 8-OHdG were adjusted using urine Cr concentrations and are expressed as mmol/mol Cr. The mean recovery rate, CV, R-square, and LOD were 81-105%, 3.1, 0.9998, and 7.0 nmol/L, respectively.

CYP1A1 methylation measurement

DNA was extracted from whole blood using a MagBead blood DNA kit (ComWin Biotech, Beijing, China). The purity and concentration of the extracted DNA were determined with an ultraviolet-visible spectrophotometer (Eppendorf, Hamburg, Germany), followed by bisulfite conversion with the EZ DNA Methylation Kit (ZYMO Research, California, USA). We chose a 153 bp fragment, from −944 to −792 in the promoter region of CYP1A1, and 5 CpG sites as the regions and sites of interest in the converted DNA, based on previously published literature on CYP1A1 methylation (Tekpli et al. [18]). The converted DNA was amplified using TaKaRa EpiTaq HS reagent (Takara, Dalian, China) in 50 µL, which contained 0.25 µL TaKaRa EpiTaq HS, 5 µL 25 mM MgCl2, 5 µL 10× EpiTaq Polymerase Chain Reaction (PCR) Buffer, 6 µL dNTP Mixture, 2 µL 10 µM forward primer, 2 µL 10 µM reverse primer, 100 ng bisulfite-converted DNA, and distilled water. Primer sequences are given in Additional file 1: Table S1. The PCR program was 95 °C for 30 s, then 40 cycles of 95 °C for 5 s, 60 °C for 30 s, and 72 °C for 30 s. The PCR product was bound to Streptavidin Sepharose High Performance beads (GE Healthcare, Sweden) to be purified, washed, denatured, and washed again. The washed PCR product was annealed to 0.4 µM of sequencing primer, and pyrosequencing was performed using the PyroMark Q96 ID System (Qiagen, Hilden, Germany). The methylation degree was expressed as the proportion of cytosines that were 5-methylated.
During the experiment, the samples of coke-oven workers and water-treatment workers were arranged alternately. In addition, we set a bisulfite treatment control before the variable positions in the dispensation order to judge whether bisulfite treatment was complete, and internal controls using PyroMark Control Oligo (Qiagen, Hilden, Germany) to determine whether an unexpected result was related to the reagents, to the PyroMark Vacuum Workstation, or to the assay. The peak height of the bisulfite control was not more than 7% of the average single peak height. The lowest single peak height was greater than 25 relative light units. The reduction in peak height for samples prepared using the PyroMark Q96 Vacuum Workstation, compared with PyroMark Control Oligo added directly to the PyroMark Q96 Plate Low, was not more than 20%. Each CpG site had a quality assessment bar, and methylation results were used only when the quality assessments of all CpG sites in a well were "Passed". The methylation levels were obtained from triplicate experiments, and the standard deviation did not exceed 2% units.
Statistical analysis
We used the median and interquartile range to describe the basic characteristics and laboratory parameters for continuous variables, which were not normally distributed, and tested them using the Mann-Whitney U test. Categorical variables were presented as frequency and proportion and tested using the Chi-square test. Correlations among PAH metabolites and among CYP1A1 methylation sites were explored by Spearman's correlation. We evaluated the effect of smoking on urine PAH metabolites by calculating the smoking contribution to each PAH metabolite [defined as the R² difference between models with and without smoking]; other covariates were sex, age, years of working, drinking status, education, and central heating. Logistic regression was conducted to evaluate associations of smoking and PAH metabolites co-exposure with CYP1A1 hypomethylation and high 8-OHdG. The cutoff points for CYP1A1 hypomethylation and high 8-OHdG, defined as the 50th percentile, were 3.03 and 204.05 mmol/mol Cr, respectively. Adjusted covariates included sex, age, years of working, smoking and drinking status (yes or no), education (< 9, 9-12, > 12), and central heating (yes or no). A logistic regression model and a test of linear trend were used to estimate associations between CYP1A1 hypomethylation and high 8-OHdG. The tests for trend across decreasing tertiles of CYP1A1 methylation were conducted by assigning the medians of average CYP1A1 methylation in tertiles, treated as a continuous variable. Finally, we ran a mediation analysis to investigate whether CYP1A1 hypomethylation mediated the smoking and PAH metabolites co-exposure effect on high 8-OHdG levels. Detailed instructions for the mediation analysis can be found in Lin et al. [24]; we used the mediation macro in SAS 9.4 (SAS Institute Inc., Cary, NC): %mediate (data=, id=, outcome=, exposure=, intermed=, modprint = T, intmiss = F, notes = nonotes, covars=, modopt=, procopt=, extrav=, where=, RR2 = 1, debugdv = 1, surv = 0, type = 1). The mediation effect was evaluated using the mediation percentage. P values < 0.05 were considered statistically significant.
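As a rough illustration of what the %mediate macro estimates, the sketch below applies a difference-method approximation on the log-odds scale, comparing the exposure coefficient with and without the mediator in the outcome model; all variable names, coefficients, and data are simulated stand-ins, and the real macro additionally handles covariates and confidence intervals.

```python
# Hedged sketch of the mediation-proportion idea (difference method on the
# log-odds scale, a common approximation). Data are simulated, not real.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
exposure = rng.integers(0, 2, n)                  # smoking x high 1-OHP co-exposure
mediator = -0.5 * exposure + rng.normal(size=n)   # CYP1A1 methylation (lower = hypo)
p = 1.0 / (1.0 + np.exp(-(-0.2 + 0.6 * exposure - 0.4 * mediator)))
outcome = rng.binomial(1, p)                      # high 8-OHdG indicator

X_total = sm.add_constant(exposure.astype(float))
X_direct = sm.add_constant(np.column_stack([exposure, mediator]))
b_total = sm.Logit(outcome, X_total).fit(disp=0).params[1]
b_direct = sm.Logit(outcome, X_direct).fit(disp=0).params[1]
print(f"proportion mediated ~ {(b_total - b_direct) / b_total:.1%}")
```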
Essential information

Table 1 shows the essential information of occupational workers with low (n = 250) and high (n = 250) 8-OHdG levels. Among the 500 individuals, there were no statistically significant differences in sex, education, occupation, or central heating. Workers with high 8-OHdG were more likely to be older, to have worked longer, and to be smokers and drinkers. Urine PAH metabolites showed increasing trends with increasing urine 8-OHdG levels, and urine 2-NAP and ΣPAH levels differed significantly between the low and high 8-OHdG groups (P < 0.05). The detailed distributions of urine PAH metabolites can be found in Additional file 1: Table S2. Urine PAH metabolites were correlated with each other (Additional file 1: Table S3); CYP1A1 methylation levels at individual sites were correlated with each other (Additional file 1: Table S4), so we used average CYP1A1 methylation to represent methylation at each site. Average CYP1A1 methylation levels in workers with high 8-OHdG were significantly lower than in workers with low 8-OHdG (2.42 vs. 3.10).
The co-exposure effect of smoking and urine PAH metabolites on CYP1A1 hypomethylation

First, we tested the contribution rates of smoking to urine PAH metabolites and found that smoking accounted for 0.29% of the 1-OHP variance, 8.99% of the 2-NAP variance, and 0.11% of the 2-FLU variance, with a larger share of the 9-PHE variance (Additional file 1: Table S5). Because of the higher contribution rates of smoking to 2-NAP and 9-PHE, we chose 1-OHP and 2-FLU as biomarkers of urine PAH metabolites to explore the co-exposure effect of smoking and occupational PAH on CYP1A1 hypomethylation and high 8-OHdG. The odds ratios (ORs) and 95% confidence intervals (CIs) for the associations of smoking and 1-OHP co-exposure with CYP1A1 hypomethylation are presented in Fig. 1. After adjusting for covariates (i.e., sex, age, years of working, drinking status, education, and central heating), we found that smoking and 1-OHP co-exposure was associated with CYP1A1 hypomethylation (P < 0.05). That is, smokers who had high 1-OHP levels had about 1.87 (1.01-3.47) times the risk of CYP1A1 hypomethylation compared to non-smokers who had low 1-OHP levels. The same increasing trends could be observed in the smoking and 2-FLU co-exposure effects on CYP1A1 hypomethylation, but these were not significantly different (P > 0.05).
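The "contribution rate" computation described at the start of this subsection can be sketched as an R² difference between nested linear models, as below; the data are simulated and the covariate set is a stand-in for the one listed above.

```python
# Hedged sketch: smoking contribution to a PAH metabolite as the R^2
# difference between models fitted with and without smoking. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
covars = rng.normal(size=(n, 3))        # stand-ins for age, years of working, ...
smoking = rng.integers(0, 2, n)
log_metabolite = covars @ np.array([0.2, 0.1, 0.05]) + 0.8 * smoking + rng.normal(size=n)

r2_without = sm.OLS(log_metabolite, sm.add_constant(covars)).fit().rsquared
X_with = sm.add_constant(np.column_stack([covars, smoking]))
r2_with = sm.OLS(log_metabolite, X_with).fit().rsquared
print(f"smoking contribution: {100 * (r2_with - r2_without):.2f}% of variance")
```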
The co-exposure effect of smoking and urine PAH metabolites on high 8-OHdG
The ORs for association of smoking and urine PAH metabolites co-exposure with high 8-OHdG were presented in Fig. 1. After adjusting covariates, we observed smokers who with high 1-OHP levels had significantly higher 8-OHdG levels compared with non-smokers who with low 1-OHP levels [OR (95% CI): 2.13 (1.14-3.97)].
Smokers, whether exposed to low or high levels of urine 2-FLU, had an increased risk of high 8-OHdG levels compared with non-smokers.
The association between CYP1A1 hypomethylation and high 8-OHdG
The association between CYP1A1 hypomethylation and high 8-OHdG is shown in Table 2. The risk of high 8-OHdG levels rose as CYP1A1 methylation levels gradually decreased. The crude ORs (95% CI) of high 8-OHdG for decreasing tertiles of CYP1A1 methylation were 1.00 (reference), 1.24 (0.81-1.91), and 1.66 (1.08-2.57), respectively (P trend = 0.021). In the lowest tertile of CYP1A1 methylation, the OR of high 8-OHdG decreased to 1.58 (1.01-2.47) when adjusting for all covariates, with P for trend = 0.046.

Fig. 1 Co-exposure effects of smoking and urine PAH metabolites on risk of CYP1A1 hypomethylation and high 8-OHdG. The statuses of smoking were stratified by non-smokers and smokers. The levels of urine PAH metabolites were stratified by the highest tertile into low exposure (< 67th percentile) and high exposure (≥ 67th percentile). The levels of CYP1A1 methylation were stratified by the median (3.03) into CYP1A1 hypomethylation (< 3.03) and CYP1A1 hypermethylation (≥ 3.03). The levels of urine 8-OHdG were stratified by the median (204.05 mmol/mol Cr) into low 8-OHdG (< 204.05 mmol/mol Cr) and high 8-OHdG (≥ 204.05 mmol/mol Cr). Adjusted for sex, age, years of working, drinking status, education and central heating.
CYP1A1 hypomethylation mediated the co-exposure effect of smoking and urine PAH metabolites on high 8-OHdG

We performed a mediation analysis of CYP1A1 hypomethylation in the association between smoking and PAH metabolite co-exposure and oxidative DNA damage. We observed a significant mediation effect of CYP1A1 hypomethylation in the association between smoking and 1-OHP co-exposure and high 8-OHdG (Table 3; P = 0.047). The mediation analysis showed a mediation proportion of 13.6% (95% CI: 2.6-47.9%). These results suggested that CYP1A1 hypomethylation may be a potential mediator of the effect of smoking and 1-OHP co-exposure on the risk of oxidative DNA damage. However, we did not find that the association between smoking and 2-FLU co-exposure and high 8-OHdG was mediated by CYP1A1 hypomethylation.
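For illustration only, the following sketch approximates the mediation percentage by the difference method on the log-odds scale (assuming a rare outcome); it is not the estimator implemented by the %mediate macro, and the data and variable names are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
co_exposure = rng.integers(0, 2, n)                   # smoker with high 1-OHP vs. neither
hypometh = (0.8 * co_exposure + rng.normal(0, 1, n)) > 0.4
logit_p = -1.0 + 0.5 * co_exposure + 0.6 * hypometh
high_8ohdg = rng.random(n) < 1 / (1 + np.exp(-logit_p))
df = pd.DataFrame({"x": co_exposure,
                   "m": hypometh.astype(int),
                   "y": high_8ohdg.astype(int)})

total = smf.logit("y ~ x", data=df).fit(disp=0).params["x"]       # total effect (log-OR)
direct = smf.logit("y ~ x + m", data=df).fit(disp=0).params["x"]  # direct effect, mediator fixed
print(f"mediation percentage (difference method) ~ {100 * (total - direct) / total:.1f}%")
```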
Discussion
In this study, we observed that the co-exposure of smoking and 1-OHP was positively associated with CYP1A1 hypomethylation and with high 8-OHdG (a biomarker of oxidative damage to DNA) after adjusting for covariates. We also found a positive, dose-responsive relationship between CYP1A1 hypomethylation and high 8-OHdG. Moreover, CYP1A1 hypomethylation may serve as a potential mediator of the effect of smoking and occupational PAH co-exposure on the risk of oxidative DNA damage.
In the current study, we also measured the concentration of environmental PAH, since the air PAH levels in the plant can represent the exposure status of the occupational workers. The results showed that the sum PAH of the workplace was markedly lower than in Kuang et al. [11]: 0.38 mg/m³ vs. 1.13 mg/m³ in the non-coke-oven area, and 1.45 mg/m³ vs. 11.08 mg/m³ at the side of the coke oven. This large difference in environmental PAH exposure may account for the differences in internal exposure. Internal PAH exposure, which integrates the various pathways of exposure, may reflect actual PAH exposure levels more accurately. Urinary 1-OHP is a widely used short-term biomarker of PAH exposure and has a linear relationship with the PAH concentration in the workplace [25]. However, it alone cannot reflect the overall internal PAH metabolites, and urine hydroxylated naphthalene [26], hydroxylated fluorene and hydroxylated phenanthrenes have been suggested as good surrogate biomarkers of occupational PAH exposure. Urine PAH metabolites are regarded as biomarkers for evaluating external PAH exposure [27]. The concentrations of urine PAH metabolites in our research were lower than those in the studies of Kuang et al. [11] and Talaska et al. [28]. Besides external PAH exposure, regional differences, air pollution, lifestyle behaviors and laboratory methods can also contribute to differences in urine PAH metabolites.
Some studies have indicated that smoking can alter DNA methylation status [29-32]. Tekpli [18] reported that smoking was associated with CYP1A1 methylation. Other studies have suggested that PAH exposure is related to DNA methylation [33-35]. In our study, we observed that smoking and 1-OHP co-exposure was associated with CYP1A1 hypomethylation, further suggesting that smokers exposed to PAH are more likely to have lower CYP1A1 methylation levels, which is consistent with other studies. However, the underlying mechanisms of the methylation changes resulting from smoking or PAH exposure remain unknown. As far as CYP1A1 is concerned, we can speculate that accelerated binding of the aryl hydrocarbon receptor (AHR) to the CYP1A1 promoter region after smoking or PAH exposure may cause the CYP1A1 methyltransferase to be displaced from the promoter and subsequently lead to a loss of methylation. Thus, CYP1A1 methylation could be induced by gene expression but could, in return, promote the binding of AHR and strengthen transcriptional activity [36]. PAH can be metabolized by CYP1A1, generating reactive oxygen species (ROS), which are known to cause oxidative DNA modification. As one of the predominant forms of oxidative lesions in DNA, 8-OHdG is a specific and quantitative biomarker of oxidative damage to DNA [37,38]. Urine 8-OHdG is affected by many factors, such as sex, age, smoking and occupational exposure. Several studies have shown that PAH exposure is positively related to urine 8-OHdG in both occupational workers and the general population [14,39,40]. Asami et al. [9] showed a significant relationship between the Brinkman index and 8-OHdG levels. Our study revealed that smoking and 1-OHP co-exposure was positively related to high 8-OHdG levels, indicating that smokers with long-term occupational exposure to PAH may suffer more serious oxidative damage to DNA. These results are consistent with previously reported findings [15]. Even though smoking and 1-OHP co-exposure plays an important role in high 8-OHdG levels, a challenge remains to provide a functional interpretation and to investigate the further mechanisms by which smoking and PAH exposure lead to oxidative damage to DNA. Some studies have reported that gene methylation is associated with DNA damage [10,35,41]. We also showed a dose-responsive relationship between CYP1A1 hypomethylation and high 8-OHdG levels. These findings strengthen the evidence for a relationship between DNA methylation and oxidative DNA damage. DNA methylation, which can reflect the combined influence of environmental factors, could be a possible "missing link" and is an attractive mechanism to explain the formation of oxidative damage to DNA. The mediation analysis suggested that 13.6% of the effect of smoking and 1-OHP co-exposure on oxidative DNA damage was mediated by CYP1A1 hypomethylation, indicating that smoking and PAH co-exposure may influence oxidative damage to DNA through CYP1A1 hypomethylation. In fact, it has been established that CYP1A1 expression is important in the metabolism of xenobiotics and that CYP1A1 mRNA levels correlate with DNA damage levels [42,43]. Since gene methylation plays a pivotal role in regulating expression, a compelling viewpoint is that the epigenetic regulation of CYP1A1 expression may be an important factor in xenobiotic-related oxidative DNA damage. Altered DNA methylation in the promoter region, which may influence transcription-factor assembly and CYP1A1 expression, could affect PAH metabolism in vivo and eventually lead to oxidative DNA damage.
Nevertheless, our study also has some limitations. Because of its cross-sectional nature, our study cannot establish a causal association between PAH exposure, CYP1A1 hypomethylation and oxidative DNA damage. In addition, urine cotinine (a specific biomarker of smoking), body mass index, physical activity and eating habits, which can affect urine 1-OHP and 8-OHdG concentrations, as well as white blood cell subtypes, which may have an impact on DNA methylation, were not taken into account. Finally, the relatively small sample size and the limited number of biomarkers of PAH exposure are further shortcomings of our study. Nevertheless, we still observed that the co-exposure effect of smoking and 1-OHP on oxidative DNA damage was partly mediated by CYP1A1 hypomethylation. We did not find that CYP1A1 hypomethylation mediated the co-exposure effect of smoking and 2-FLU on oxidative DNA damage. One potential explanation is that 1-OHP is the most representative urinary metabolite of PAH exposure for occupational workers in coke oven plants. Many studies have suggested that 1-OHP is a suitable biomarker of internal PAH exposure in coke-oven workers [28,44], which was further confirmed by our results. In addition, other pathways may be involved and warrant further research.
Conclusions
In this study, we quantitatively assessed the association of smoking and occupational PAH co-exposure with both CYP1A1 hypomethylation and oxidative damage to DNA in Chinese occupational workers. We observed, for the first time, that CYP1A1 hypomethylation may partly mediate the co-exposure effect of smoking and occupational PAH on oxidative DNA damage. From a public health perspective, further prospective research is necessary.
Additional file
Additional file 1: Table S1. Primers used for pyrosequencing, size of the PCR amplicons and position of the primers from the transcription start point. Table S2. Distributions of urine PAH metabolites among 500 occupational workers. Table S3. The correlation coefficients (r_s) of urine PAH metabolites among 500 occupational workers. Table S4. The correlation coefficients (r_s) of CYP1A1 methylation among 500 occupational workers. Table S5. Contribution rates of smoking on urine PAH metabolites among 500 occupational workers. (DOC 67 kb)
"year": 2019,
"sha1": "3e742f56490409d87d3a75a1ffcf03afa13f578a",
"oa_license": "CCBY",
"oa_url": "https://ehjournal.biomedcentral.com/track/pdf/10.1186/s12940-019-0508-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e742f56490409d87d3a75a1ffcf03afa13f578a",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Case Studies and Lessons Learned in Chemical Emergencies Related to Hurricanes
This book represents recent research on tropical cyclones and their impact, and a wide range of topics are covered. An updated global climatology is presented, including the global occurrence of tropical cyclones and the terrestrial factors that may contribute to the variability and long-term trends in their occurrence. Research also examines long term trends in tropical cyclone occurrences and intensity as related to solar activity, while other research discusses the impact climate change may have on these storms. The dynamics and structure of tropical cyclones are studied, with traditional diagnostics employed to examine these as well as more modern approaches in examining their thermodynamics. The book aptly demonstrates how new research into short-range forecasting of tropical cyclone tracks and intensities using satellite information has led to significant improvements. In looking at societal and ecological risks, and damage assessment, authors investigate the use of technology for anticipating, and later evaluating, the amount of damage that is done to human society, watersheds, and forests by land-falling storms. The economic and ecological vulnerability of coastal regions are also studied and are supported by case studies which examine the potential hazards related to the evacuation of populated areas, including medical facilities. These studies provide decision makers with a potential basis for developing improved evacuation techniques.
HSEES events involving evacuation, response, and nearby areas
Activities conducted to protect public health during hurricane-related events included health advisories (n=14) and environmental sampling (n=10). One building was evacuated because of a hurricane-related hazardous substances event; the evacuation lasted 2 hours. No response personnel were involved in 80 of the events; multiple types of responders responded to the remaining 165 events. Personnel most frequently responding to hurricane-related events included company response teams (n=128) and hospitals or poison control centers (n=22). Other responders included fire departments (n=8), law enforcement officials (n=7), certified HazMat teams (n=5), third-party clean-up contractors (n=5), emergency medical services (n=4), environmental agencies (n=2), and departments of public works (n=1). Most events (n=220) were contained inside the facility or within 200 feet of the release. The area impacted was missing for 19 events. In six events, the chemical extended beyond the facility, affecting areas more than 200 feet from the release. All of these events were in Louisiana, in predominantly industrial areas with little or no residences, schools, daycare facilities, nursing homes, hospitals, or recreational parks within ¼ mile of the release. Descriptions of these six incidents are as follows:
Event 1 - Approximately 980 pounds of ammonia were released along with 490 pounds of nitrogen oxides after a power failure caused by Hurricane Katrina at a nitrogen fertilizer manufacturer. The power failure resulted in a loss of refrigeration to the ammonia storage tanks, causing an emergency release of ammonia to a flare. The ammonia was only partially combusted by the flare and thus formed the nitrogen oxides.
Event 2 - Seven hundred fifty-nine pounds of zinc bromide were released from storage tanks that were washed away from an oil and gas support operation during Hurricane Katrina.
Event 3 - Ten pounds of nitrogen oxide and 500 pounds of sulfur dioxide were released when a petroleum refinery shut down its plant in preparation for Hurricane Katrina and there was a release to the stack.
Event 4 - A chemical product and preparation manufacturer released 270 pounds of ammonia and 75 pounds of nitrogen oxides when the ammonia storage tank routed to the flare after the compressors were shut down in preparation for Hurricane Katrina, causing a release of nitrogen oxides as a combustion product.
Event 5 - Seven hundred eighty pounds of ammonia were released from a nitrogen fertilizer manufacturer when a power outage caused the loss of key monitoring equipment. The flare on the ammonia tank was blown out by the high winds sustained during Hurricane Rita. Events 1-5 occurred in industrial areas with no nearby residences, nursing homes, schools, or daycare facilities.
Event 6 - One thousand eighty-two pounds of chlorine were released from an alkali and chlorine manufacturing plant when a power failure due to Hurricane Rita caused excess pressure in the chlorine tank. The tank had to be manually vented to reduce pressure and protect the tank integrity. Approximately 493 persons lived within ¼ mile, and a licensed daycare center was within ¼ mile of the release; no information was available about whether people were in the homes or daycare center when the release occurred.
HSEES victims
There were 160 victims in 62 hurricane-related events (25% of all hurricane-related events). Florida reported 116 of the victims, Louisiana reported 7, and Texas reported 37. Ninety-nine additional people in 49 events were observed at a medical facility but did not require treatment, so they are not counted as victims.
Although manufacturing events accounted for 65% of the hurricane-related events, less than 1% of the events in this industry category resulted in victims. Most of the victims were injured in carbon monoxide events in private households (n=103, 64%). All 59 of the carbon monoxide events were in Florida and all had victims. A bus accident and fire involving nursing home residents and their oxygen tanks in Texas resulted in 36 (23%) victims. The industry/location was unknown for 18 (11%) victims, and there was 1 victim each in retail, manufacturing and the postal service. Most (n=139, 87%) victims were members of the general public, followed by employees (n=11, 7%). However, there were also 10 (6%) responder victims, as follows:
1. Six career firefighters suffered carbon monoxide-related symptoms while helping with post-Katrina recovery efforts. Generator exhaust fumes from a nearby motor home entered the camper in which they were sleeping.
2. One responder was injured when a barrel containing a mixture of hydrocarbon, hydrogen sulfide, and water leaked during post-Katrina orphan drum recovery operations.
3. Three fire rescue responders were among six people injured by carbon monoxide exposure as a result of improper generator use in a power outage after Hurricane Wilma. One non-responder in this event died.
The ratio of male to female victims was approximately 50/50. The victims ranged in age from less than 1 year to 100 years, but most victims were 20-44 years old (41%) (median age: 34 years). The most frequent injuries/symptoms were dizziness/central nervous system effects (38%), headache (22%), gastrointestinal problems (15%), and thermal burns (11%) (Table 3). Twenty-nine persons died on scene or after arrival at a hospital. Treatment information was available for 115 of the remaining 131 victims. These victims were treated at a hospital and released (n=62), admitted to a hospital (n=19), received first aid only (n=4), or were observed at the hospital (n=1). Only one victim was decontaminated. Two victims wore personal protective equipment: one wore Level C personal protective equipment, which requires a full-face or half-mask air-purifying respirator, chemical-resistant hooded clothing, inner and outer gloves, and steel-toe boots; the other wore Level D personal protective equipment, which is the lowest level of protection and requires coveralls and safety shoes/boots. Personal protective equipment levels are determined by the U.S. Occupational Safety and Health Administration's standard (29 Code of Federal Regulations 1910.120(q)).
HSEES events with fatalities
Twenty-three nursing home residents died from thermal burns in a charter bus fire and medical oxygen tank explosion in Texas during an evacuation in anticipation of Hurricane Rita. There were six deaths among members of the general public in five events in Florida from carbon monoxide exposure. One event occurred in July after Hurricane Dennis, one in August after Hurricane Katrina, and three in October after Hurricane Wilma. Five events were due to generator misuse and one event was due to charcoal grill misuse because of hurricane-related power loss.
Case studies

4.1 Case study background
HSEES collected a broad array of incidents and was not specifically tailored to collect data on chemical emergencies during hurricanes or other natural disasters. Therefore, this case study section reviews the literature on three incidents of chemical emergencies during hurricanes, covering issues broader than the typical HSEES data, to elucidate lessons learned. These case studies were chosen to raise issues on which HSEES data collection does not focus, including vulnerable populations, petroleum emergency clean-up, and chemical waste disposal. These were three very prominent cases that arose from the 2005 hurricane season.
Case study 1
On September 23, 2005, at 3 p.m., a nursing home began evacuating prior to Hurricane Rita. Because the regular bus was unavailable and there was no time to carefully select a replacement, the first available charter bus was secured. The bus was carrying 44 nursing home residents and nursing staff from Bellaire, Texas to Dallas, Texas. Although the bus had originally been headed to a much closer shelter, that shelter was full and they had to go to a much farther one. Early the next morning, a motorist alerted the bus driver that the right-rear tire hub was glowing red. Upon stopping the bus, the driver and nursing staff observed flames emanating from the right-rear wheel well, and they began to evacuate the bus.
The bus was quickly engulfed in flames when the residents' oxygen tanks exploded and further fueled the fire. Twenty-three passengers died, 2 passengers were seriously injured and 19 received minor injuries. The bus driver also received minor injuries (National Transportation Safety Board, 2007). This incident was also contained in the HSEES database.
Contributing factors for this event were the charter bus company's failure to conduct proper vehicle maintenance, its failure to perform pre-trip inspections, and its previous violations of several United States safety regulations pertaining to its drivers and vehicles; the lack of fire-retardant construction materials on the bus exterior and adjacent to the wheel well; and the absence of guidance for emergency transportation of medical oxygen cylinders on a bus. Release valves on the cylinders were designed to release during a fire only if the cylinders were fully pressurized. Therefore, cylinders that were in use and partially pressurized exploded and became dangerous projectiles during the fire. Additionally, emergency responders had difficulty rescuing passengers because of the window height and the top-hinged window exit design, which are not optimal for the elderly or children; the latches are difficult to open and the drop to the ground is too far (National Transportation Safety Board, 2007).
Case study 2
Storm surge from Hurricane Katrina placed an oil tank facility and surrounding neighborhoods in St. Bernard Parish, Louisiana under water for several days. When the water receded, it was discovered that the hurricane had dislodged a 250,000 barrel aboveground storage tank containing about 65,000 barrels of mixed crude oil (Agency for Toxic Substances and Disease Registry, 2005b). Approximately 25,110 barrels (slightly over a million gallons) of oil spilled from the ruptured tank. The initial response was delayed because of high water, debris, barricades by the National Guard or local police, downed telephone lines, and lack of satellite phones. When the area was accessible, the United States Environmental Protection Agency on-scene commander directed the facility to secure the tank, identify the extent of the release, and begin recovery operations. The facility immediately began pumping out the containment canals and recovered approximately 72% of the oil.
In October 2005, long-term remediation was initiated with oversight by the United States Environmental Protection Agency and the Louisiana Department of Environmental Quality, including clean-up on land, in residential areas, and in non-commercial waterways. Approximately 1,800 affected properties in an area of about one square mile were identified through a house-to-house visual survey conducted from the street. The Environmental Protection Agency classified contamination on 114 properties as heavy (more than 50% of the yard, sidewalks, and home were covered with oil), 286 properties as medium (about 50% of the yard and sidewalks were covered in oil), and the balance as light to oil line only (a small percentage of oil was visible on horizontal surfaces, or a "bathtub ring" of visible product approximately 3 to 6 inches wide was seen on the residence, with no visible oil on the yard, sidewalks, and home) (Agency for Toxic Substances and Disease Registry, 2005c). However, some affected properties may have been missed because properties that were not visible from the street or public sidewalk were not surveyed due to legal access requirements. The more heavily affected areas were immediately to the west of the facility. The 25-month-long clean-up of the contaminated properties within the impacted area began with the facility removing oil-stained sediment and soil. After removal, the remaining soil was analyzed to ensure that the Louisiana Department of Environmental Quality risk evaluation/corrective action program residential soil standards for High Public Use Areas were met. If the standards were not met, additional soil was removed and the process was repeated. Residential clean-up was complex and involved two phases. In Phase 1, property owners requested clean-up from the facility and granted it access to the property, and the facility obtained wipe and sediment samples (10% of the samples were split with the Environmental Protection Agency) and washed home exteriors. In Phase 2, the homeowner was responsible for gutting the house to the studs, and the facility removed the oiled part of the debris and transported it to an industrial landfill. The homeowner then requested an interior cleaning from the facility and granted it a second access to the property, during which the facility power-washed the home's interior and exterior and replaced the yard. Reoccupation of the property was determined by the parish based on the results of a final air sample (U.S. Environmental Protection Agency Region 6, 2006). Several factors impeded residential clean-up, including class action lawsuits filed against the facility that restricted homeowner contact with the facility and therefore barred remediation by the facility; temporary or permanent relocation of many residents after the spill; and lack of funds to complete the clean-up because the facility was only responsible for the oil-damaged part of the clean-up. This resulted in a less efficient clean-up and an "island effect" in which oiled homes stood next to cleaned homes because crews could not clean up whole contiguous blocks of neighborhoods (U.S. Environmental Protection Agency Region 6, 2006). The Environmental Protection Agency shared the results of more than 800 sediment/soil samples collected from properties between September 19 and November 8, 2005 with the Agency for Toxic Substances and Disease Registry and requested an assessment of potential health hazards posed by the contamination.
In December 2005, the Agency for Toxic Substances and Disease Registry released a health consult for the site, which concluded that for the properties sampled there were no short- or long-term risks from oil-related chemicals in sediment and soil for most properties. Recommendations were made that properties should be evaluated and, if necessary, remediated for other potential health hazards, such as indoor mold and structural damage, prior to re-occupancy. The recommendation was also made that properties which exceeded recommended soil standards should be remediated to be protective of public health for re-occupancy. Additionally, the health consult recommended that residents avoid bare-skin contact with sediment, soil, and indoor surfaces with visible oil contamination, and that homes with visible indoor oil contamination or noticeable petroleum odors be tested to determine whether concentrations of chemicals in indoor air were of health concern prior to re-occupancy (Agency for Toxic Substances and Disease Registry, 2005b). Oil companies were not required to plan to withstand storm surges of the kind that resulted from Hurricane Katrina. However, in 2007, a buyout program to create a buffer around the facility was approved to minimize the threat of future spills. In August 2009, the Louisiana Department of Environmental Quality determined that the area's shallow groundwater was unaffected by the spill and concluded that the area impacted by the spill had been remediated to acceptable levels.
Recovery and response related to this spill were complicated by competing priorities (local, state, and federal) and by high background levels of contamination in that area of Louisiana, which interfered with the sampling. Communication issues were also a major barrier and included delays in the public receiving information about contaminant levels, which was often posted on a website that was not accessible to the affected population; difficulties in interpreting data and comparing it with general drinking water or ambient air quality standards, which were not appropriate for an acute exposure event; and difficulty adequately conveying re-occupancy policies, because federal agencies had a different opinion than the parish, which ultimately made the decision (Manuel, 2006; Johnson et al., 2005).
Case study 3
As a result of Hurricane Katrina, the city of New Orleans needed to quickly find a disposal site for the approximately 55 million cubic yards of hurricane-related debris and hazardous substance-contaminated waste that had been created. In response, the Mayor of New Orleans issued an executive decree which reopened three closed landfills and granted a conditional permit to urgently convert the Chef Menteur site from a light industrial zone to a landfill. The Chef Menteur landfill began accepting waste in April 2006 (Choi et al., 2006). The approximately 100-acre Chef Menteur site was created about 40 years ago by construction companies for use as a local source of sand for building. In July 1994, an application for a permit to begin construction of a landfill at this site was submitted to the Louisiana Department of Environmental Quality. Because of extensive resistance and pressure from the local community and the proximity of the site to the nation's largest urban wildlife refuge, the city council denied the permit in March 1997 (Choi et al., 2006). The Chef Menteur landfill is located in the Versailles neighborhood and is next to the 23,000-acre Bayou Sauvage Wildlife Refuge. It is less than a mile from the center of a mostly Vietnamese- and African-American community of approximately 15,000 residents (Choi et al., 2006). The unlined landfill has a large storage capacity and was approved to accept 6.5 million cubic yards (about 2.6 million tons) of construction and demolition debris (Louisiana Department of Environmental Quality, 2006). Besides trees and sheetrock, the site also accepted asbestos-containing materials and the moldy contents of gutted homes, including household pesticides, electronics, personal-care products, cleaning solutions, paint, and bleach.
Although the landfill lacked the liners required to stop leachate from leaking into the water table and contaminating surface water and groundwater, the state department of environmental quality stated that the landfill had a 10-foot clay bed, which it maintained was enough to stop leachate from reaching the community and the wildlife refuge. Opponents of the landfill argued that this was neither the industry standard nor the environmental standard, which calls for multiple layers of both composite and clay (Colby, 2008). Other environmental and public health concerns included that landfill leachate would affect the watering source for the community gardens and locally caught seafood, as well as exposure to air pollution from the more than 1,000 trash-carrying trucks per day that drove along the main road leading into the community (Colby, 2008; Citizens for a Strong New Orleans East, 2006). The landfill was supposed to be capped with two feet of clay and six inches of topsoil. This type of cap may not be effective in preventing precipitation from entering the landfill, mixing with wastes, and forming leachate, especially in southern Louisiana, which receives approximately 60-80 inches of rain per year (Choi et al., 2006). Additionally, the landfill is surrounded by permeable soil, and much of the site is a wetlands area where the water table is at or near the surface of the ground. Soil borings showed that groundwater is located between 1.6 and approximately 12 feet below the surface (Choi et al., 2006). Several air and water samples taken by the state department of environmental quality in May and June 2006, shortly after the landfill opened, showed that contaminants were below levels of concern (Choi et al., 2006). However, the community is concerned about potential long-term effects. Amid mounting pressure and lawsuits by the community and environmental groups, the Mayor of New Orleans did not renew the landfill's permit, and the landfill was closed in August 2006.
HSEES incident distribution
HSEES recorded 245 hurricane-related events in 2005. All of the Hurricane Dennis and Hurricane Wilma events were reported by Florida because it was the only U.S. state affected by these hurricanes. Texas did not have any Hurricane Katrina-related events because of the path of the hurricane. Texas reported most of the Hurricane Rita-related events because Beaumont and Port Arthur, as well as several other areas in the state, shut down their plants in preparation for Hurricane Rita. Louisiana reported fewer events from Hurricane Rita because of the hurricane's path and because many of its plants were still not operating when Hurricane Rita struck. Florida did not report any events associated with Hurricane Rita, likely because of the mandatory evacuation orders for the Florida Keys and because the Florida Panhandle escaped most of the land effects of the hurricane (National Oceanic and Atmospheric Administration, 2005b).
HSEES limitations
The HSEES system had several limitations. As in any disaster, local and state public health and emergency response infrastructure was severely disrupted by the hurricanes. Because of this, some events may not have been reported and some data may not have been captured during follow-up. In Louisiana, agencies were notified only about the most severe releases that resulted from Hurricane Katrina, so some events may not have been included in HSEES. Furthermore, HSEES collected information on acute, not chronic, releases, and releases consisting only of petroleum were excluded. Lastly, since HSEES is a surveillance system for a variety of hazardous material releases, it is not specifically tailored for hurricanes and therefore does not capture detailed information on these incidents. Carbon monoxide exposure was traditionally underreported in the HSEES system because of the lack of existing reporting mechanisms. A case review of medical charts and emergency medical services records coded as carbon monoxide poisoning found 41 nonfatal cases from 11 incidents and 10 deaths from 4 incidents in Texas that were not reported to HSEES, while analysis of hyperbaric oxygen facility reports detected 16 non-fatal and 5 fatal cases among Louisiana residents that were not reported to HSEES (Centers for Disease Control and Prevention, 2006b; Centers for Disease Control and Prevention, 2005d). Additionally, after data closeout, Florida identified 31 cases of carbon monoxide exposure through hospital and medical examiner reports.
Manufacturing industry
Most events involved the manufacturing industry (65%); however, only one injured person was associated with this industry. This is likely because there were few people in the area to be harmed: most facilities had already shut down and were operating with reduced crews, and many residents had evacuated the areas before the hurricanes hit (Knabb et al., 2005a; Knabb et al., 2005b). The immediate contributing causal factor in over half of the events was system start-up or shutdown. Most releases were air releases (88%), and about a third of the events involved the release of mixtures of chemicals. About a quarter of the events were caused when complex industrial processes were shut down in preparation for the hurricanes. The shutdowns that occurred while preparing for Hurricanes Katrina and Rita were more massive and involved numerous simultaneous activities and rapidly changing process conditions, compared with the single process or unit involved in normal shutdowns. Additionally, such massive shutdowns had not been performed before. There is a need for different shutdown procedures for massive shutdowns of entire plants, such as those that occur during hurricanes. One lesson learned from Hurricanes Katrina and Rita is that it is critical for chemical facilities to coordinate better with state and local emergency preparedness agencies, especially for decisions concerning mandatory evacuation orders, which can directly impact plant shutdown sequence and timing (Challener, 2006). The U.S. Environmental Protection Agency advises that all industry sectors review past events associated with shutdowns during hazardous weather conditions and make administrative/procedural, operational/process equipment and hardware/software safety improvements as needed (U.S. Environmental Protection Agency, 2010). Chemical facilities should also establish staff responsibilities and procedures to shut down process operations safely (U.S. Department of Energy, 2008). Almost a quarter of the events were caused when major industrial processes started up after the hurricanes. The start-ups that occurred following the massive shutdowns in preparation for the hurricanes were also large-scale. Many plants used this opportunity to conduct extensive maintenance or repairs on the idled plants, as equipment in some facilities in Texas dates back to the 1940s. The maintenance resulted in releases. Additionally, releases are more likely to occur when processes are shut down for more than one day (U.S. Chemical Safety and Hazard Investigation Board, 2005). The U.S. Chemical Safety and Hazard Investigation Board issued a safety bulletin on precautions needed during oil and chemical facility start-up following hurricanes. The Chemical Safety and Hazard Investigation Board recommends that, as facilities resume operations, established and up-to-date start-up procedures and checklists be followed and pre-start-up safety reviews be carefully performed. Specific recommendations include using appropriate management-of-change processes before making any modifications; having adequate staffing and expertise available before starting up; and evacuating nonessential personnel from nearby process units that are starting up. The Chemical Safety and Hazard Investigation Board also recommends that equipment, tanks, and instrumentation be thoroughly evaluated for damage.
Particular attention should be given to examining large bulk storage tanks and pressure vessels for evidence of floating displacement or damage, and to examining sewers, drains, furnace systems, electric motors and drives, switchgear, conduits, electrical boxes, electronic and pneumatic instrumentation, emergency warning systems, emergency equipment, and insulation systems for piping, vessels, and tanks for trapped floodwater and debris-impact damage (U.S. Chemical Safety and Hazard Investigation Board, 2005). The prevention of industrial releases resulting from power failures may benefit from improved backup power generation (Ruckart et al., 2004). Generators and backup lights should be tested in preparation for a hurricane, extra fuel should be on hand, and generators should be located in areas of the facility that are not likely to be flooded (U.S. Environmental Protection Agency, 2006). Other efforts include filling all storage tanks to prevent floating or falling during hurricane-force winds, adequately securing equipment and piping to withstand high winds, and properly labeling all chemical bulk storage tanks to aid identification if these items are washed or blown away (U.S. Environmental Protection Agency, 2006).
Carbon monoxide incidents
Carbon monoxide was the most frequently released single hazardous substance. Carbon monoxide poisoning, particularly from misuse of generators in residences, was the largest source of morbidity and mortality; over two-thirds of the 160 victims were injured in carbon monoxide-related events. In another study of 27 incidents of carbon monoxide poisoning that resulted in 10 fatal and 78 nonfatal cases in Alabama and Texas after Hurricanes Katrina and Rita, the majority of incidents were caused by portable generators. Most of the generators were placed outside but close to the home in order to power window air conditioners or to connect to central electric panels. Interviews of 18 of the 27 incident households showed that only 6 (33%) had a carbon monoxide detector present, and only one alarm went off; 4 detectors had dead batteries, and one sent a signal to a remote security system that was unable to alert the household by telephone. Because no safe distance for generator placement has been shown, the Centers for Disease Control and Prevention recommends that there be functional carbon monoxide detectors in all households (Centers for Disease Control and Prevention, 2006b). Our data show that responders were injured by carbon monoxide while sleeping in temporary housing (a camper). Therefore, this recommendation should be expanded to temporary housing as well.
In January 2007, the Consumer Product Safety Commission required manufacturers to place a danger label on all new generators and on generator packaging. The commission also began to explore various strategies to reduce consumers' exposure to carbon monoxide, including generator engines with substantially reduced carbon monoxide emissions and interlocking or automatic shutoff devices (Consumer Product Safety Commission, 2009). These measures could potentially reduce harm in the future.
Case studies
The lesson learned from case study 1 is the importance of adhering to health and safety measures even in the haste of preparing for an oncoming hurricane or responding after one occurs. Almost 25% of the 2005 hurricane victims reported to HSEES were injured in a single bus accident when oxygen tanks exploded on a bus carrying senior citizens who were being evacuated prior to Hurricane Rita. In the haste of trying to secure a bus when their dedicated bus was not available, they were placed on a bus that had numerous safety violations. Hospital and nursing home administrators face several challenges during hurricanes, including deciding whether to evacuate or shelter inside their facilities until the outside danger is over. According to the National Transportation Safety Board, the charter bus fire incident showed how ill-prepared the United States was to evacuate those who are most vulnerable, particularly the elderly, the disabled, and those in hospitals and nursing homes. The loss of life was exacerbated by the frailty of the passengers. This incident highlights the need for special plans to be developed for nursing home residents (as well as other institutionalized residents, such as hospitalized persons or prisoners) because they are not able to quickly escape from hazardous substance events and have special considerations. Motor coaches were used for this evacuation and will be used again should a similar emergency arise. Following this incident, the National Transportation Safety Board issued several recommendations to other agencies to protect the travelling public that should be urgently implemented (National Transportation Safety Board, 2007). The U.S. National Transportation Safety Board also investigated the risk that medical oxygen tanks posed to rescuers during the fire. Three days after the accident, the Department of Transportation issued "Guidance for the Safe Transportation of Medical Oxygen for Personal Use on Buses and Trains", which recommends that medical oxygen be securely stored upright and be limited to one canister per patient in the passenger compartment (U.S. Department of Transportation, 2006).
There are several lessons learned from case study 2, in which an oil spill contaminated a large residential area. One lesson is the need for improved and more rapid identification of environmental hazards and their communication to emergency responders and the public. Another lesson learned is the need, during a hurricane and its aftermath, for greater collaboration among Federal, State, and local officials, as well as an enhanced public communication program. All of these measures could have improved the effectiveness of the Federal response (Manuel, 2006; The White House, 2006).
Case study 3 highlights that, in the haste of rebuilding, proper health and safety measures for handling hurricane debris contaminated with hazardous substances were not followed. The "hub and spoke" method of debris handling is supported by the U.S. Environmental Protection Agency and the Army Corps of Engineers. It involves collecting debris at the curbside and taking it to temporary staging sites in more central locations in the daytime, followed by night-time hauling to permitted landfills outside the city (Citizens for a Strong New Orleans East, 2006). This would have avoided the use of the unpermitted landfill that was not designed properly to handle hazardous waste.
Conclusion
Because preventing hurricanes is not possible, increased attention must be focused on preventing and minimizing acute releases of hazardous substances during hurricanes and on preplanning in case releases do occur. Because of the urgency of a hurricane, preplanning for those most vulnerable and increased diligence regarding health and safety during and after a hurricane are called for. Many of the incidents occurred because of power interruption. Industries, particularly in hurricane-prone areas, can take steps to minimize their risks of chemical releases in future power outages. Additionally, public health campaigns that emphasize placement of generators as far away from the home as possible should continue. Since no safe distance has been determined, the use of functional, battery-operated carbon monoxide detectors should be stressed for all sleeping quarters, temporary or permanent.
Dynamic polarization random walk model and fishbone-like instability for self-organized critical systems
We study the phenomenon of self-organized criticality (SOC) as a transport problem for electrically charged particles. A model for SOC based on the idea of a dynamic polarization response with random walks of the charge carriers gives critical exponents consistent with the results of numerical simulations of the traditional 'sandpile' SOC models, and stability properties associated with the scaling of the control parameter versus distance to criticality. Relaxations of a supercritical system to SOC are stretched-exponential, similar to the typically observed properties of non-Debye relaxation in disordered amorphous dielectrics. Overdriving the system near self-organized criticality is shown to have a destabilizing effect on the SOC state. This instability of the critical state constitutes a fascinating nonlinear system in which SOC and nonlocal properties can appear on an equal footing. The instability cycle is qualitatively similar to the internal kink ('fishbone') mode in a magnetically confined toroidal plasma where beams of energetic particles are injected at high power, and has serious implications for the functioning of complex systems. The theoretical analyses presented here are the basis for addressing the various patterns of self-organized critical behavior in connection with the strength of the driving. The results of this work also suggest a type of mixed behavior in which the typical multi-scale features due to SOC can coexist along with global or coherent features as a consequence of the instability present. An example of this coexistence is speculated for the solar wind-magnetosphere interaction.
Introduction
In 1982, the challenge to understand 1/f 'noise' inspired Montroll and Shlesinger [1] to suggest that distributions with long tails should be a consequence of a simple generic stochastic process, in much the same way as the Gaussian, or normal, distribution in the theory of probability is a consequence of the central limit theorem. It took five more years before Bak, Tang and Wiesenfeld (BTW) [2] introduced, in 1987, the paradigmatic concept of self-organized criticality, or SOC, operating on systems' natural attraction to fractal structures. The claim was that the irreversible dynamics of systems with many coupled degrees of freedom ('complex' systems) would intrinsically generate self-organization into a critical state without fine tuning of external parameters. This conjecture initiated a burst of research activity in experiment, theory and simulation. Diverse formulations of SOC have been proposed, emphasizing the strengths and weaknesses of particular approaches [3]. The phenomenon was demonstrated on a number of automated lattice models, or 'sandpiles', displaying avalanche dynamics and scale invariance [4]-[6]. An impressive list of publications has been produced in the attempt to prove or disprove the SOC hypothesis for various systems. The notion of SOC can be thought of as belonging to the nascent 'science of complexity', which addresses the commonalities between apparently dissimilar natural, technological and socio-economic phenomena, e.g. market crashes and climate disruptions [7]. Despite its promising performance, the SOC hypothesis is a subject of strong debate in the literature, and many issues related to it remain controversial or demand further investigation.
In this paper, we analyze the SOC problem as a transport problem for electric charges. We provide analytical methods to calculate the various critical exponents by employing the formalism of dielectric-relaxation theory. Our theoretical findings are in good agreement with computer simulations of the traditional sandpiles [4,5]. The lattice dynamics are described in terms of random walk hopping of charged particles on a self-adjusting percolation system. We show that the control parameter must vanish faster than a certain scaling law to give rise to a stable SOC state. Overdriving the system near self-organized criticality leads to the excitation of unstable modes characterized by sharp periodic outbreaks in the particle loss current. This instability of the critical state constitutes a fascinating nonlinear system in which SOC and nonlocal properties can appear on an equal footing. The instability cycle is qualitatively similar to the internal kink ('fishbone') mode in tokamaks with high-power beam injection [8]. Relaxations to the critical state are found to be stretched-exponential similar to the typically observed properties of dielectric relaxation in disordered solids and glasses [9,10]. The results presented in this work suggest a type of mixed behavior in which the typical multiscale features due to SOC can coexist along with the global or coherent features. One example of this coexistence is speculated for the coupled solar wind-magnetosphere-ionosphere system. A short account of some of these investigations has been reported recently [11].
The paper is organized as follows. The model, dubbed the dynamic polarization random walk (DPRW) model, is introduced in section 2. Following this is a presentation of analytical theory of SOC dynamics (section 3). A discussion of the various aspects of the DPRW approach to SOC is given in section 4. Instabilities of the SOC state are considered in section 5, where a condition on the strength of the driving is also obtained. We summarize our findings in section 6.
Description of the model
The model is motivated by the well-known problem [12]- [14] that by increasing the probability of site occupancy a lattice is brought to a critical point characterized by fractal geometry of the threshold percolation, i.e. self-similar distribution of arbitrarily large finite clusters, each presenting the same fractal geometry as the infinite cluster. We consider a hypercubic d-dimensional (d 1) lattice confined between two opposite (d − 1)-dimensional hyperplanes, which form a parallel-plate 'capacitor' as shown in figure 1. The plate on the right-hand side is earthed. Free charges are built by external forces on the capacitor's left plate. When a unit free charge is added to the capacitor, the lattice responds by burning a unit 'polarization' charge, which is an occupied site added at random to the lattice. When a unit free charge is removed from the capacitor, a randomly chosen occupied site is converted into a 'hole' site (missing occupied site). A hole will be deleted from the system (converted into an empty site) if/when the corresponding free charge has reached (or been moved to) the ground.
There is a limit, Q_max, on the amount of free charge the capacitor can store, defined as Q_max = e p_c N, where e is the elementary charge (e = −1), p_c is the percolation threshold and N is the total number of sites across the lattice. Thus, the ability of the capacitor to store electric charge is limited by the occurrence of the infinite cluster at the percolation point. If, at any time, the above limit is exceeded, twice the amount of free charge in excess of Q_max will be removed from the capacitor and distributed among the sites of the infinite cluster with equal probability. The implication is that the capacitor leaks electric charge above the percolation point. This property reflects the onset of dc conduction at the threshold of percolation. The doubling of the amount of electric charge to be removed from the capacitor mimics a nonzero inductance of the conduction process.
When a hole appears on the infinite cluster, it causes an activation event with the following consequence: one of the nearest-neighbor occupied sites, which is a random choice, will deliver its charge content to the hole. The hole that has just received the polarization charge becomes an ordinary occupied site, while the donor site becomes a hole. The newborn hole, in turn, will cause a further activation event at the location where it has occurred, thus triggering a chain reaction of redistribution of polarization charges. The chain reaction continues until the hole reaches the earthed plate where it is absorbed (converted into an empty site). When a hole appears on a finite cluster, it causes a chain reaction of activation events in much the same way as on the infinite cluster but with one modification regarding the ending of the activation. The chain reaction stops if (i) similar to the infinite cluster case the hole reaches the earthed plate where it is converted into an empty site, or if (ii) there is no more activity going on in the infinite cluster. In the latter case, the finite cluster freezes in a 'glassy' state with the holes still in it until either a new hole appears on the infinite cluster or one or more occupied sites are added to the lattice by external forces.
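As an illustration of the update rules just described, here is a minimal sketch of a single hole-driven chain reaction on a square lattice; it omits the free-charge dynamics and the infinite/finite-cluster bookkeeping, and the lattice size, occupation probability and step cap are arbitrary illustrative choices:

```python
import random

L = 32                                     # L x L lattice; column j = L - 1 borders the earthed plate
P_OCC = 0.59                               # near the 2D site-percolation threshold (illustrative)
occupied = [[random.random() < P_OCC for _ in range(L)] for _ in range(L)]

def neighbors(i, j):
    # Periodic in the transverse direction i, open in the plate direction j.
    out = [((i + 1) % L, j), ((i - 1) % L, j)]
    if j + 1 < L:
        out.append((i, j + 1))
    if j - 1 >= 0:
        out.append((i, j - 1))
    return out

def relax_hole(i, j, max_steps=100_000):
    """Propagate one hole by interchange hopping: a randomly chosen occupied
    neighbor donates its charge and becomes the new hole. The chain reaction
    stops when the hole reaches the earthed plate or has no donor left."""
    size = 0
    for _ in range(max_steps):             # safety cap for this sketch
        if j == L - 1:
            return size                    # absorbed at the earthed plate
        donors = [(a, b) for a, b in neighbors(i, j) if occupied[a][b]]
        if not donors:
            return size                    # frozen 'glassy' configuration
        a, b = random.choice(donors)
        occupied[i][j], occupied[a][b] = True, False
        i, j, size = a, b, size + 1
    return size

# Drive: turn one randomly chosen occupied site into a hole and relax it.
occ_sites = [(i, j) for i in range(L) for j in range(L) if occupied[i][j]]
i0, j0 = random.choice(occ_sites)
occupied[i0][j0] = False
print("chain-reaction (avalanche) size:", relax_hole(i0, j0))
```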
Random walk hopping process
Essentially, the holes interchange their position with the nearest-neighbor occupied sites, and it appears reasonable to model this process as an interchange hopping process [15]. In what follows we assume, following [16], that there is a characteristic microscopic hopping time, which is taken to be unity, but more general hopping models can be obtained by introducing a distribution of waiting times between consecutive steps of the hopping motion (i.e. continuous time random walks) [17,18]. With the above assumption that the site acting as donor is a random choice, the transport model is defined as a random walk hopping model. Similarly to the hole case, the free charges are assumed to behave as random walkers after their re-injection into the infinite cluster. They will hop at a constant rate between the nearest-neighbor occupied sites in random directions on the cluster on which they are initially placed until they reach the earthed plate, where they sink in the ground circuit (see figure 1). The holes act as conducting sites for the motion of free charges. The charged plate acts as a perfectly reflecting boundary. Hops to empty sites are forbidden. The latter condition limits random walks to the fractal geometry of the threshold percolation.
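As a toy illustration of the two clocks mentioned above, the following sketch contrasts the unit hopping time assumed in this paper with a heavy-tailed waiting-time distribution of the continuous-time random walk generalization; the Pareto exponent is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)
n_hops = 10_000

# Unit hopping clock, as assumed in the text: every hop takes t = 1.
t_unit = np.ones(n_hops)

# A CTRW generalization (illustrative): heavy-tailed Pareto waiting times
# between hops, which would produce subdiffusive transport for alpha < 1.
alpha = 0.7
t_ctrw = rng.pareto(alpha, n_hops) + 1.0

print("mean waiting time, unit clock  :", t_unit.mean())
print("sample mean, Pareto clock      :", t_ctrw.mean())  # grows without bound as n increases for alpha <= 1
```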
Dynamical geometry of threshold percolation
Overall, one can see that the system responds by chain reactions of random walk hopping processes when it becomes slightly supercritical and is quiescent otherwise. Excess free charges dissipating at the earthed plate provide a feedback mechanism by which the system returns to the percolation point. There will be a slowly (as compared to hopping motions) evolving dynamical geometry of the threshold percolation resulting from competition between the addition of occupied sites to the lattice and the charge-releasing chain reactions. Based on the quantitative analysis below, we identify this state as a SOC state. This general picture based on the idea of a dynamic polarization response with random walk hopping of the charge carriers might be called the dynamic polarization random walk (DPRW) model.
In the DPRW SOC model, chain reactions of the hopping motion acquire the role of 'avalanches' in traditional sandpiles. In this work, we are interested in obtaining the critical exponents of the DPRW model by means of analytical theory. Numerical simulation of the DPRW dynamics is under way for comparison with the analytical predictions. At the time of writing, the characteristic signatures of the multi-scale conductivity response of the dynamical system at criticality are being confirmed in the computer simulation model. In figure 2, we illustrate the existence of relaxation events of various sizes due to hole hopping on a 10 × 10 square lattice with a random distribution of conducting nodes and a site-occupancy probability chosen so as to mimic the percolation threshold and the conjectured SOC activities.
Dynamics and orderings
Starting from an empty lattice (no potential difference between the plates), by randomly adding occupied sites to it, one builds the fractal geometry of random, or uncorrelated, percolation, characterized by the three percolation critical exponents β, ν and µ (connected clusters have fractal dimensionality d_f = d − β/ν) [12]-[14]. Note that the infinite percolation cluster, in the strict sense of the term, exists only in the thermodynamic limit, when the lattice itself is infinite. This limit arises because of the need to model the system-sized conducting clusters in terms of fractal geometry. In the absence of holes this percolation geometry is static (polarization charges can only move by exchanging their position with a hole), but when holes appear on the lattice they cause local rearrangements in the distribution of conducting sites. As a consequence, the conducting clusters on which the transport processes concentrate change their shape and their position in real space. In the analysis of this section, we require that the average number density of holes is very small compared to the average number density of polarization charges. The implication is that the system remains near the percolation point despite the slow evolution of the conducting clusters. Note that the lattice rules are such as to preserve the properties of random percolation. In fact, no correlations are introduced in the distribution of the conducting sites at any step of the lattice update.
Frequency-dependent conductivity and power spectral density
Given an input electric driving field E(t, r), the polarization response of the system is defined by

P(t, r) = ∫_{−∞}^{t} χ(t − t′) E(t′, r) dt′,   (1)

where the response function χ(t − t′) is identically zero for t < t′ as required by causality. We should stress that nonlocal integration over the space variable is not needed here in view of the local (nearest-neighbor) character of the lattice interactions. In a model in which the assumption of locality is relaxed, as for instance in models permitting particle Lévy flights, integration over the space variable is expected to produce a physically nontrivial effect. We do not consider such models here. A Fourier transformed χ(t) defines the frequency-dependent complex susceptibility of the system, χ(ω). In a basic theory of polarization response, one also introduces the frequency-dependent complex ac conductivity, σ_ac(ω), which is related to χ(ω) by the Kramers-Kronig integral, see equation (6). The dependence of the ac conductivity on frequency, specialized to random walks on percolation systems [19,20], is given by the scaling relation σ_ac(ω) ∝ ω^η, where the power exponent η (0 ≤ η ≤ 1) is expressible in terms of the percolation indices β, ν and µ as η = µ/(2ν + µ − β). A derivation of this result is due to Gefen et al [21]. Note that the scaling σ_ac(ω) ∝ ω^η incorporates conductivity responses from all clusters at percolation, including those that are finite. In the DPRW model, these implications are matched by the mechanism of hole conduction permitting the polarization current on both infinite and finite clusters. The general linear-response theory expression [22] for the conductivity σ_ac(ω) in terms of the mean-square displacement from the origin ⟨r²(t)⟩ is

σ_ac(ω) = n_d n e² D(ω),   D(ω) = ω² ∫_0^∞ ⟨r²(t)⟩ e^{−iωt} dt,   (2)

with n_d a constant depending on the dimensionality of the lattice and n and e the density and charge of the carriers, respectively. The function D(ω) has the sense of the frequency-dependent diffusion coefficient [23]. It is required that the frequency ω be large compared to the characteristic evolution frequency in the distribution of the conducting sites. In what follows, we denote this frequency by ω*. We have ω ≫ ω*. Consistently with the above definitions, the inverse frequency 1/ω* is ordered as the characteristic diffusion time on the infinite cluster, T_d ∼ ξ²/D(ω*). Note that this time will depend on ω* in accordance with equation (2). Here, ξ ∝ |p − p_c|^{−ν} is the pair connectedness (correlation) length, p (0 ≤ p ≤ 1) is the probability of site occupancy and p_c is the percolation threshold. Noting that D(ω*) ∝ (ω*)^η, one obtains ω* ∝ |p − p_c|^{2ν/(1−η)}. Observe that ω* → 0 for p → p_c. Recalling that there is a microscopic hopping time, which is taken to be unity, we assess the Kubo number [24,25] in the vicinity of the SOC state as

Q* ∼ 1/ω* ∝ |p − p_c|^{−2ν/(1−η)}.

One sees that Q* → ∞ for p → p_c. The Kubo number is a suitable dimensionless parameter that quantifies how the evolution processes in the lattice compare with the microscopic hopping motions. The divergence of the Kubo number at criticality implies that there is a time-scale separation: fast hopping motions versus infinitesimal evolution change. For Q* → ∞, we also find that

D(ω*) ∝ ω* Q*^{1−η}.   (5)

We identify this scaling law with the low-frequency 'percolation' scaling [13,26] for particle diffusion on a time-evolving fractal structure. By applying the Kramers-Kronig relations Im χ(ω) ∝ σ_ac(ω)/ω and

Re χ(ω) = (1/π) P ∫_{−∞}^{+∞} Im χ(ω′)/(ω′ − ω) dω′,   (6)

it is found that χ(ω) ∝ ω^{−γ}, with γ = 1 − η. A Fourier transformed equation (1) reads as P(ω, r) = χ(ω)E(ω, r).
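The scaling relations above are straightforward to evaluate numerically. The short sketch below computes η, γ, ω* and the Kubo number from standard literature estimates of the percolation indices; the specific values of β, ν and µ used here are assumptions, since the paper's table 1 is not reproduced in the text.

```python
# Evaluate eta = mu / (2*nu + mu - beta) and the scalings of omega* and Q*
# near the percolation point, using common literature estimates of the
# percolation indices (assumed values, not the paper's tabulated ones).
indices = {2: dict(beta=5 / 36, nu=4 / 3, mu=1.30),
           3: dict(beta=0.41, nu=0.88, mu=2.0)}

for d, x in indices.items():
    eta = x["mu"] / (2 * x["nu"] + x["mu"] - x["beta"])
    gamma = 1 - eta
    for dp in (1e-2, 1e-3, 1e-4):                      # dp = |p - p_c|
        omega_star = dp ** (2 * x["nu"] / (1 - eta))   # slow evolution frequency
        Q_star = 1 / omega_star                        # Kubo number (hop time = 1)
        print(f"d={d} eta={eta:.2f} gamma={gamma:.2f} "
              f"|p-pc|={dp:g} omega*={omega_star:.3g} Q*={Q_star:.3g}")
```

For d = 2 this gives η ≈ 0.34 and γ ≈ 0.66, in line with the values quoted later in the text, and it exhibits ω* → 0 and Q* → ∞ as p → p_c.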
One can see that the power spectral density, S(ω), of the system response to a white-noise perturbation, E(ω, r) = 1, will be proportional to |χ(ω)|². Hence, the power spectral density in the DPRW model is given by an inverse power-law distribution, S(ω) ∝ 1/ω^α with α = 2γ, the α value depending on the scaling properties of the ac conductivity response.
Exponent of stretched-exponential relaxation and distribution of relaxation times
Next we obtain the distribution of relaxation times self-consistently. For this, assume that the system is slightly supercritical, then consider a charge density perturbation, δρ(t, r), caused by the presence of either free charges or holes on the conducting clusters. 'Slightly supercritical' means that the dependence of the ac conductivity response on frequency can, with good accuracy, be taken in the power-law form σ_ac(ω) ∝ ω^{1−γ} discussed above. The implication is that on adding δρ(t, r) to the conducting system at percolation we neglect the departure of the system's geometric properties from pure self-similarity. Without loss of generality, we assume that the perturbation δρ(t, r) is created instantaneously at time t = 0. This means that the function δρ(t, r) ≡ 0 for t < 0 for all r. The perturbation δρ(t, r) generates an electric field inhomogeneity δE(t, r) in accordance with Maxwell's equation ∇ · δE(t, r) = 4πδρ(t, r). Consistently with the above discussion, we consider that for t > 0 the decay of δρ(t, r) is due to the spreading of charge-carrying particles (electrons and/or holes) via random walks on the underlying fractal distribution. The polarization response to δE(t, r) is given by

δP(t, r) = ∫_{−∞}^{t} χ(t − t′) δE(t′, r) dt′,   (7)

where, as usual, χ(t − t′) ≡ 0 for t < t′. The density of relaxation currents is defined as the time derivative of δP(t, r), i.e.

j(t, r) = ∂δP(t, r)/∂t.   (8)
The continuity implies that

∂δρ(t, r)/∂t = −∇ · j(t, r).   (9)

Taking ∇· under the integral sign, and then eliminating δE(t, r) by means of Maxwell's equation ∇ · δE(t, r) = 4π δρ(t, r), we find, with the self-consistent charge density,

∂δρ(t, r)/∂t = −4π (∂/∂t) ∫_{−∞}^{t} χ(t − t′) δρ(t′, r) dt′.   (10)

In writing equations (9) and (10), we have also assumed that t > 0. Upon integrating over t, equation (10) becomes

δρ(t, r) + 4π ∫_{−∞}^{t} χ(t − t′) δρ(t′, r) dt′ = φ(r).   (11)

Here, the function φ(r) is an arbitrary function of the position vector r, which appears in the derivation as the constant of integration over time. Under the conditions χ(t − t′) ≡ 0 for t < t′ and δρ(t, r) ≡ 0 for t < 0 for all r, equation (11) reduces to

δρ(t, r) + 4π ∫_0^t χ(t − t′) δρ(t′, r) dt′ = φ(r).   (12)

If we allow t → +0, we find that for γ > 0 the integral term on the left-hand side goes to zero (as ∝ t^γ),

lim_{t→+0} δρ(t, r) = φ(r),   (13)

from which it is clear that φ(r) = lim_{t→+0} δρ(t, r). We consider this last condition as the initial condition for the relaxation problem. Essentially the same condition holds in the limit γ → 0, provided that lim_{t→+0} is taken first. A Fourier transformed equation (12) reads as

δρ(ω, k) [1 + 4πχ(ω)] = φ(k)/iω,   (14)

where k is the position vector in reciprocal space and φ(k) is the Fourier image of φ(r). Writing the susceptibility as 4πχ(ω) = (iωT_λ)^{−γ}, we note that the quantity T_λ has the sense of the lifetime of a perturbation with wavelength λ. We expect that T_λ ∝ λ^z at criticality, where z is a scaling exponent. A derivation of this scaling relation will be given shortly. Separating the variables and inverting the Fourier transform, we write

δρ(t, k) = φ(k) E_γ[−(t/T_λ)^γ],   (16)

where E_γ[·] is the Mittag-Leffler function [27]. The latter is the solution [28,29] of the fractional relaxation equation

dδρ(t, k)/dt = −T_λ^{−γ} ₀D_t^{1−γ} δρ(t, k),   (17)

where

₀D_t^{1−γ} f(t) = (1/Γ(γ)) (∂/∂t) ∫_0^t f(t′)/(t − t′)^{1−γ} dt′   (18)

is the so-called Riemann-Liouville derivative [30]. Partial cases of this derivative are the unity operator for γ → 1, and ∂/∂t for γ → 0. One concludes that the relaxation to SOC of a slightly supercritical state is described by the Mittag-Leffler function E_γ[−(t/T_λ)^γ], and not by a simple exponential function as for standard relaxation. It is noticed, following [29], that the Mittag-Leffler function E_γ[−(t/T_λ)^γ] describes the relaxation toward equilibrium of particles governed by the fractional diffusion equation

∂Ψ(t, r)/∂t = ₀D_t^{1−γ} K_γ ΔΨ(t, r),   (19)

where Ψ(t, r) is the probability density of finding a particle (random walker) at time t at point r, and the Laplacian operator stands for the local (nearest-neighbor) character of the lattice interactions. Being intrinsically nonergodic [31], the fractional diffusion equation, equation (19), aims to incorporate the trapping effect [32] caused by cycles and dead ends of the fractal structure on which the transport processes concentrate (i.e. the 'nodes-links-blobs' model) ([14] and figure 1 therein). This trapping effect appears in the fact that for γ < 1, the mean-squared displacement of charge carriers grows slower than linearly with time, ⟨r²(t)⟩ ∝ t^γ. If we recall that γ = 1 − η, then, with the aid of the expression η = µ/(2ν + µ − β), we find ⟨r²(t)⟩ ∝ t^{(2ν−β)/(2ν+µ−β)}. This scaling behavior reiterates one known result [21] for random walks on percolation systems. A subtle point here is that ⟨· · ·⟩ incorporates the system average over all clusters at percolation, including the finite clusters, conformally with the implication of the average ac conductivity response, σ_ac(ω) ∝ ω^η. Note that it is in this 'system-average' sense that the above fractional diffusion and relaxation equations apply to percolation.
For times t ≪ T_λ, the Mittag-Leffler function, equation (16), can be approximated [28] by a stretched exponential, the so-called Kohlrausch-Williams-Watts (KWW) relaxation function [33]

E_γ[−(t/T_λ)^γ] ≈ exp[−(t/T_λ)^γ / Γ(1 + γ)].   (20)

The KWW function, in turn, can conveniently be considered [9] as a weighted average of ordinary exponential functions, each corresponding to a single relaxation event in the system:

exp[−(t/T_λ)^γ / Γ(1 + γ)] = ∫_0^∞ w_γ(Δt) exp(−t/Δt) dΔt.   (21)

The weighting function w_γ(Δt) is given by equations (51d) and (55) of [9], where one replaces the exponent α with γ, the time constant T with T_λ and the variable µ with T_λ/Δt. In our notation, the weighting function is expressed through L_{γ,−1}, the Lévy distribution function with skewness −1 (e.g. [34]). Assuming a long-wavelength perturbation (i.e. the parameter λ being much longer than the microscopic lattice distance: λ ≫ 1), and setting T_λ/Δt ≫ 1, we can further approximate the Lévy distribution L_{γ,−1} by the Pareto inverse-power distribution. This turns the weighting function into a pure power law and leads to a power-law distribution of relaxation times, equation (23), consistently with the wisdom of SOC. Our conclusion so far is that the relaxations are multi-scale, in accordance with equation (21), and their durations are power-law distributed. The distribution is heavy-tailed in the sense that its upper cut-off is set by the lifetime T_λ; the limit T_λ → ∞ can only be satisfied for p → p_c, implying that the system must be sufficiently close to the ideal SOC state. Distributions of the form of equation (23) have been earlier postulated [4] for SOC.
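A quick numerical check of the approximation in equation (20) can be made by summing the Mittag-Leffler series directly and comparing it with the KWW form. The series evaluator below is only a sketch, adequate for moderate arguments; the choice γ = 0.66 follows the d = 2 value quoted later in the text, and T_λ = 1 is an arbitrary normalization.

```python
import math

def mittag_leffler(g, z, terms=80):
    """Series E_g(z) = sum_k z**k / Gamma(g*k + 1); a sketch that is
    adequate for moderate |z|, not a production-grade evaluator."""
    return sum(z ** k / math.gamma(g * k + 1) for k in range(terms))

g, T_lam = 0.66, 1.0
for t in (0.01, 0.1, 0.3, 1.0):
    ml = mittag_leffler(g, -(t / T_lam) ** g)
    kww = math.exp(-((t / T_lam) ** g) / math.gamma(1 + g))
    print(f"t={t:4}: Mittag-Leffler={ml:.4f}  KWW approx={kww:.4f}")
```

The two columns agree closely for t ≪ T_λ and start to deviate as t approaches T_λ, which is the regime where the Mittag-Leffler function crosses over to its power-law tail.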
Consistency check
In a basic theory of dielectric relaxation, one writes the frequency-dependent complex dielectric parameter as [9,35]

ε(ω) = 1 + ∫_0^∞ [−dψ(t)/dt] e^{−iωt} dt,   (24)

where ψ(t) is the relaxation function that describes the decay of polarization after the polarizing electric field has been stepped down or removed instantaneously. In the DPRW model, a step-down-type electric field occurs as a consequence of re-injection of the free charges into the infinite cluster. The ensuing relaxation dynamics are mimicked by the chain reactions of hole hopping that act to suitably redistribute the polarization charges across the lattice. Based on the above analysis, we identify the relaxation function in equation (24) with the KWW function, equation (20). Substituting into equation (24), after simple algebra one obtains ε(ω) in terms of the dimensionless frequency s = ωT_λ and the Lévy definite integrals Q(s) and V(s). In the parameter range of multi-scale relaxation response, T_λ/Δt ≫ 1, ωT_λ ≫ 1, the series expansions of the Lévy integrals given in [9] apply (equations (28) and (29)). From equations (28) and (29), one can see that the expansion of Q(s) starts from a term that is proportional to s^{−(1+γ)}, and so does the expansion of V(s) − 1/s. Hence, up to higher-order terms, ε(ω) − 1 ∝ s^{−γ}. Given this, one applies the Kramers-Kronig relations sQ(s) ∝ σ_ac(ω)/ω and equation (6) to find the frequency scaling of the ac conduction coefficient to be σ_ac(ω) ∝ ω^{1−γ}. By comparing this with the above expression σ_ac(ω) ∝ ω^η, one reiterates that γ = 1 − η, consistently with the distribution of durations of relaxation events, equation (23).
Dispersion-relation exponent, Hurst exponent and the τ-exponent
In sandpile SOC models, one is interested in how the lifetime of an activation cluster scales with its size [5]. In the DPRW model, by activation cluster one means a connected cluster of activated sites. An occupied site is said to be 'activated' if it has become a hole or if it contains a free charge. Clearly, activation clusters can only exist above the percolation threshold. Note that activation clusters are subsets of the underlying conducting cluster of polarization charges. The notion of activation cluster is but a visualization of the charge density inhomogeneity δρ(t, r) in terms of a connected distribution of activated sites. Activation clusters decay because the constituent charged particles (holes and/or free charges) diffuse away via the random walks.
Consider an isotropic activation cluster composed of free particles. (The nature of the particles does not matter here; the hole case is similar.) It is assumed for convenience, without loss of generality, that each site of the activation cluster contains only one particle. Thus, the number density of the free particles inside the activation area is equal to one. It steps down to zero just outside. If the microscopic lattice distance is a (a = 1), then there is a unit density gradient across the boundary of the activation cluster looking inside. Because of this gradient, the activation cluster will be losing particles on average. A particle that has crossed the boundary against the direction of the gradient is considered lost from the cluster. As the particles dissipate, the location of the density pedestal shifts inward with speed u. The local flux density of those particles leaving the activation area per second is just the gradient times the local diffusion coefficient. The latter depends on the frequency of the relaxation process as D(ω) ∝ ω^η, in accordance with equation (2). If l is the current size of the cluster, then the corresponding relaxation frequency is ω ∼ u/l. Using this, the frequency dependence of the diffusion coefficient can be translated into the corresponding l-dependence, the result being D(l) ∝ l^{−η}. Balancing the rate of decay of the cluster with the outward flux of the particles, we write dl/dt ∝ −l^{−η}. Integrating this simple equation over time from t = 0 to t = T_λ and over l from l = λ to l = 0, one finds the dispersion relation T_λ ∝ λ^z with z = 1 + η = 2 − γ. The persistency [36] of relaxation is measured by the Hurst exponent H, which is related to our z via H = 1/z. Lastly, the τ-exponent, which defines the distribution of particle flows caused by a single chain reaction, is obtained from equation (5) of [4], where one replaces φ with α and the fractal dimension D with the fractal dimension of the infinite percolation cluster, d_f = d − β/ν. The end result is τ = 3 − αz/d_f. Note that the τ values in [4] and [5] differ by 1. Using known estimates [12]-[14] of the percolation indices β, ν and µ, we could evaluate the critical exponents of the DPRW model in all ambient dimensions d ≥ 1. The results of this evaluation, summarized in table 1, are in good agreement with the reported numerical values from the traditional sandpiles (for d = 2, z ≈ 1.29, τ ≈ 2.0; for d = 3, z ≈ 1.7, τ ≈ 2.33) [4] and earlier theoretical predictions (for d = 2, z = 4/3, τ = 2; for d = 3, z = 5/3, τ = 7/3) [5]. We consider this conformity as a manifestation of the universality class of the model. For d = ∞, the model reproduces the exponents of mean-field SOC [3].
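The exponent relations of this section can be collected into a few lines of code. In the sketch below, the percolation indices are standard literature estimates, and the identification α = 2γ follows from S(ω) ∝ |χ(ω)|²; both are assumptions in the sense that the paper's table 1 is not reproduced here, so the resulting numbers may differ slightly from the tabulated ones.

```python
# Assemble the DPRW critical exponents from the relations quoted in the text:
# eta = mu/(2nu + mu - beta), gamma = 1 - eta, z = 2 - gamma, H = 1/z,
# d_f = d - beta/nu and tau = 3 - alpha*z/d_f, with alpha taken as 2*gamma
# (an assumption following from S ~ |chi|^2).
indices = {2: dict(beta=5 / 36, nu=4 / 3, mu=1.30),
           3: dict(beta=0.41, nu=0.88, mu=2.0)}

for d, x in indices.items():
    eta = x["mu"] / (2 * x["nu"] + x["mu"] - x["beta"])
    gamma = 1 - eta
    z = 2 - gamma                      # dispersion-relation exponent
    H = 1 / z                          # Hurst exponent
    alpha = 2 * gamma                  # power-spectrum exponent (assumed)
    d_f = d - x["beta"] / x["nu"]      # fractal dimension of the infinite cluster
    tau = 3 - alpha * z / d_f          # avalanche-flow exponent
    print(f"d={d}: gamma={gamma:.2f} z={z:.2f} H={H:.2f} "
          f"alpha={alpha:.2f} tau={tau:.2f}")
```

For d = 2 this yields γ ≈ 0.66, z ≈ 1.34, H ≈ 0.75, α ≈ 1.3 and τ ≈ 2.1, close to the sandpile and theoretical values quoted above.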
Equation (5), when account is taken of the η value at criticality, table 1, yields a low-frequency percolation scaling for particle diffusion on a time-varying fractal structure for all d ≥ 1. As a particular case, d = 2, it confirms the behavior D(ω*) ∝ ω* Q*^{2/3} predicted in [37,38] (up to the small deviation between 2/3 and ≈ 0.66). Moreover, the DPRW model gives a Hurst exponent (for d = 2, H ≈ 0.75; for d = 3, H ≈ 0.6) consistent with the reported narrow range of variation of H as observed in different magnetic confinement systems (Hurst exponent varying between H ≈ 0.62 and 0.75) [39]-[41]. In this respect, we also note that some connection between anomalous transport by plasma turbulence, fractional kinetics and SOC has been addressed on a phenomenological level by Carreras et al [42]. After all, the DPRW model gives a value of the exponent γ (for d = 2, γ ≈ 0.66; for d = 3, γ ≈ 0.4) in good agreement with the typically found values (between 0.3 and 0.8) [9,10,43,44] from dielectric-relaxation phenomena in disordered amorphous materials. This last observation supports the hypothesis [32] that dielectrics exhibiting stretched-exponential relaxations are in a self-organized critical state.
Discussion
Apart from the details of the mathematical formalism, the DPRW model is actually quite simple. The main points are as follows. A lattice site can be either empty or occupied. An occupied site is interpreted as a polarization charge. The equilibrium concentration of the polarization charges depends on the potential difference between the plates. When the potential difference changes, the lattice occupancy parameter adjusts. The dynamical mechanism for this adjustment uses holes. The holes are just missing polarization charges. They are key elements of the model, as they provide a mechanism for the polarization current in the system. Besides holes, free charges are introduced. The free charges, too, carry electric current, whose very specific role in the model is just to control the potential difference between the plates. The changing amount of free charges in the system has an effect on the lattice occupancy parameter and, nonlinearly, on the conductivity of the lattice. This nonlinear twist provides a dynamical feedback by which the system is stabilized at the state of critical percolation. The existence of such feedback proves to be an essential ingredient of the SOC phenomenon [45,46]. In many ways the proposed model is but a simple lattice model for dielectric relaxation in a self-adjusting disordered medium. It is perhaps the simplest model that accounts for the whole set of relaxation processes including hole conduction.
It is worth assessing the advantages and disadvantages of the DPRW approach to SOC. In terms of advantages, the electric nature of the model greatly facilitates analytical theory: not only does it permit quantification of the microscopic lattice rules in terms of the frequency-dependent complex ac conductivity, but the use of the Kramers-Kronig relation in equation (6) also makes it possible to directly obtain the susceptibility function by integrating the conductivity response. As a result, the exponents z, γ, α and H are expressible in terms of only one parameter, the exponent of ac conduction η. The latter is obtained as a simple function of the percolation indices β, ν and µ.
With respect to disadvantages, the model is seemingly different from traditional approaches to SOC based on cellular automata (CA), and its integration into the existing family of SOC models might be a matter of debate. Even so, the idea of random walks on a self-organized percolation system as a simplified yet relevant model for SOC has significant appeal. Firstly, it relies on the established mathematical formalism of random walks [19,20,28,47], whose advance on the SOC problem is theoretically very beneficial. Secondly, it offers a clear connection to studies outside the conventional SOC paradigm, such as, for instance, the transport of mass and charge in disordered media [15,22,23,48]. By contrast, the traditional CA-type models are complicated by a poor analytical description of the microscopic transport mechanisms, and their basic physics is at times difficult to appreciate.
It is theoretically important to note that the dielectric context of the considered model, apart from offering a convenient platform for analytical theory, is not however essential for the SOC phenomenon. Indeed the DPRW model could be defined in terms of diffusion processes for neutral particles of different kinds. A formal reason for this is the equivalence [22,23] of the frequency-dependent electrical conductivity problem and the frequency-dependent diffusion problem, specific to hopping conduction. The crucial element of the model is the assumption of random walks, not the nature of the particles.
In the DPRW model, the critical percolation state is made self-organized via the mechanisms of hole hopping by which the system responds to the fluctuating potential difference on the capacitor. We should stress that the holes redistribute the polarization charges so as to preserve the properties of random percolation. They change the shape and the folding of the percolation clusters in the ambient configuration space, but not the random character of the distribution of the conducting sites. This random character contrasts with those models of SOC based on invasion percolation (see [3] for a brief review). Note that invasion percolation [49] has different implications, as it connects to phenomena with an extremum property, such as self-organized growth phenomena and diffusion-limited aggregation [20]. We do not consider those models here.
Possible generalizations of the DPRW model correspond to biased random walks of free charges in the direction of the potential drop and/or the inclusion of a second critical threshold p_cc > p_c above which the random walk dynamics might change to a biased motion. We consider those generalizations obvious, as they mainly intend to modify the value of the exponent η in a certain parameter range, while the basic physical picture of SOC will remain essentially the same.
The observed connection to the fractional diffusion equation, equation (19), may be interpreted, with the aid of the proposed SOC model, in favor of considering SOC as one important case of fractional kinetics [29,50]. It is understood that, in the DPRW model, correlations due to the fractional time derivative ₀D_t^{1−γ} coexist with the essentially random nature of the microscopic particle motions. In many ways, the correlations build up because the random walks of particles are squeezed onto a low-dimensional structure with fractal geometry. Some discussion of these properties can be found in [38], where an approach based on pseudochaotic Hamiltonian dynamics has been suggested.
The final point to be addressed here concerns the issue of universality class. We take notice of the fact that the DPRW SOC model uses the charitable redistribution rule [51] to propagate the activities, similar to the traditional BTW sandpile [2] or others [4,5]. This means that an active site always loses its content to the neighbors. The charitable rule is to be distinguished from the neutral rule, when each of 2d + 1 sites involved in redistribution gets an unbiased random share of the transported quantity. Models using the neutral rule often fall in the universality class of directed percolation (DP) and are characterized by appreciably larger values of the dynamic exponent z (for d = 2, z ≈ 1.73 ± 0.05) [51]. Based on this evidence, we suggest that the DPRW model belongs to the same universality class as the BTW sandpile, and not to the DP universality class, consistent with the values of the critical exponents collected in table 1.
A model for the instability cycle
As the probability of site occupancy p approaches the percolation threshold p_c, the pair connectedness length (i.e. the size of the largest cluster) diverges as ξ ∝ |p − p_c|^{−ν}. For p > p_c, the longest relaxation time in the system is T_ξ ∝ (p − p_c)^{−zν} and the dynamic susceptibility goes to infinity as χ ∝ (p − p_c)^{−zνγ}. In the DPRW model, p as a function of time is determined dynamically from the competing charge deposition and loss processes. That is, dp/dt = Z_+ − Z_−, where Z_+ is the net deposition rate of free charges on the capacitor's left plate and Z_− is the particle loss rate. The net deposition rate, or the driving rate, is the control parameter of the model: it takes a given value. The particle loss rate is obtained as the electric current in the ground circuit, i.e. Z_− = I θ(p − p_min), where p_min is the lower limit of variation of p. Note that Z_− is due to the free particles leaving the system through the earthed plate. The Heaviside θ function indicates that the lattice can release charges only when/if p ≥ p_min. We expect that p_min lies close to, although somewhat lower than, the percolation threshold p_c. This is because the conducting cluster can still lose its charge content to the ground circuit even in the absence of a connecting path to the charged plate. The dynamics of I can be estimated from dI/dt = ±I/T_ξ, where the upper sign corresponds to the relaxation process in the lattice. Putting all the various pieces together, we write

dp/dt = Z_+ − I θ(p − p_min),   (32)

dI/dt = sign(p − p_c) W |p − p_c|^{zν} I,   (33)

where sign(p − p_c) = +1 (−1) for p > p_c (p < p_c) is the sign-function, and W is a numerical coefficient. Equations (32) and (33) define a simple system of equations for two cross-talking variables, the lattice occupancy per site and the particle loss current. These model equations are perhaps the simplest nonlinear equations describing the generic fueling-storage-release cycle in driven, dissipative, thresholded dynamical systems. An examination of these equations shows that the dynamics are periodic (auto-oscillatory), with the peak value of the electric current scaling as I_max ∝ (p_max − p_c)^{zν+1}. Here, p_max (p_max > p_c) is the upper limit of the p variation. Note that I_max → 0 for p_max → p_c as expected. When p_max → 1, the periodic dynamics acquires a sharp, bursting character. The bursts' half-duration (or half-width) equals (1/W)(p_max − p_c)^{−zν}. Eliminating the distance to the critical state, one obtains the scaling relation I_max ∝ Δ^{−ζ}, where Δ is the burst half-width and ζ = (zν + 1)/zν. The period between the bursts is found to be τ_b ∼ (p_c − p_min)/Z_+. A pure SOC state with no superimposed periodic bursts arises when τ_b → ∞. This implies that Z_+ → 0 for p → p_c. Thus, criticality requires the vanishing of Z_+, in agreement with the result of [3]. The critical state is stable when τ_b ≫ T_ξ. We have

T_ξ ≪ (p_c − p_min)/Z_+.   (34)

This is satisfied when Z_+ → 0 faster than |p − p_c|^{zν} as the percolation point is approached. This limit exceeded, the system starts to auto-oscillate around the percolation point with a period dictated by the net deposition rate of the polarization charges. The physical origin of this auto-oscillatory motion lies in the fact that the changing number of free particles provides feedback to the lattice occupancy parameter. It is because of this feedback relation that the DPRW system operates as a self-adjusting, intrinsically nonlinear dynamical system.
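The bursting behavior of equations (32) and (33) is easy to reproduce with a crude forward-Euler integration. In the sketch below, all parameter values (W, Z_+, p_min, the time step) are illustrative assumptions chosen only to exhibit the fueling-storage-release cycle, not values taken from the analysis above; z and ν are set to the d = 2 estimates.

```python
import math

p_c, p_min = 0.5927, 0.55          # square-lattice threshold; p_min slightly below
z, nu, W = 1.34, 4 / 3, 50.0       # d = 2 exponents; W is an arbitrary coefficient
Z_plus = 2e-3                      # driving rate (the control parameter)
p, I, dt, t = p_min + 1e-3, 1e-6, 1e-2, 0.0

peaks, rising = [], False
for _ in range(1_500_000):
    rate = math.copysign(W * abs(p - p_c) ** (z * nu), p - p_c)   # eq. (33)
    I_new = max(I * (1.0 + rate * dt), 1e-12)                     # floor keeps I > 0
    p += (Z_plus - (I if p >= p_min else 0.0)) * dt               # eq. (32)
    if rising and I_new < I and I > 1e-3:                         # crude peak detector
        peaks.append((round(t, 1), round(I, 4)))
    rising = I_new > I
    I, t = I_new, t + dt

print(f"{len(peaks)} bursts; first peaks (t, I_max):", peaks[:3])
```

With these settings the loss current stays negligible while p climbs past p_c, then erupts in a sharp burst that discharges the lattice back below threshold, in qualitative agreement with the auto-oscillatory cycle described above.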
Whether or not this feedback will excite the instability depends on how the characteristic driving time compares to the characteristic relaxation time. Indeed, focus on the stability condition in equation (34). For the system to be stable at the percolation point, the relaxation time due to the random walks, T_ξ, must be short compared to the characteristic driving time, 1/Z_+. In this parameter range, any occasional charge density perturbations will dissipate via random walks before new conducting sites are introduced. When the percolation point is approached, because the time scale T_ξ ∝ |p − p_c|^{−zν} diverges, it is essential that the system be driven infinitesimally slowly to remain in a pure SOC state. Instability occurs when the relaxation processes operate on a longer time scale than the driving processes. In this regime, the system accumulates the polarization charges, whereas to remain at criticality it should get rid of them. The accumulation of the polarization charges has a direct effect on the conductivity between the plates, which steps up as the lattice overshoots the percolation threshold. When p_max → 1, the system can be thought of as facing the typical conditions of electrostatic discharge in the regime of short circuit. It should be emphasized that the feedback mechanism does a twofold job: (i) it stabilizes the system at the state of critical percolation in a regime when the driving rate is infinitesimal and (ii) it excites cross-talk between the conductivity and the lattice occupancy parameters when the driving rate is faster than the relaxation rate.
In the parameter range in which the strength of the driving vanishes, the multi-scale geometry of the critical percolation is dominant in providing the major transport characteristics for the DPRW lattice. The situation changes drastically when the strength of the driving increases above some level. With the system's departure away from the percolation point, the multi-scale features will soon be lost, substituted by the bulk-average nonlinearities. The fact that equations (32) and (33) above are formulated in terms of the system-average parameters, p and I, merely reflects that the system is allowed to appreciably depart from the state of marginal stability, or the SOC state (this means that p_max can be rather closer to 1 than to p_c), and that the effect of overdriving readily calls for global features to come into play. It is noted that, in general, the multi-scale properties due to SOC can coexist along with the global or coherent features. We illustrate this type of mixed SOC-coherent behavior with an example below: the substorm behavior of the dynamic magnetosphere.
The end result of the discussion above is that the strength of the driving plays a crucial role in dictating both linear and nonlinear behavior in the DPRW model. To obtain a pure SOC state, the driving rate should go to zero fast enough as the critical point is approached. The main effect that overdriving has on the DPRW dynamics is to excite unstable modes associated with periodic bursts in the particle loss current. Accordingly, the system auto-oscillates between a subcritical ( p min < p c ) and a supercritical ( p max > p c ) state in response to external forcing. These dynamical properties are summarized in figure 3.
The transition to auto-oscillatory dynamics signifies the increased role of global and nonlinear behavior in the strongly driven DPRW system as compared to a pure SOC system. The borderline between the two regimes corresponds to T_ξ ∼ (p_c − p_min)/Z_+. The stability condition T_ξ ≪ (p_c − p_min)/Z_+ has serious implications for the achievable SOC regimes. It imposes one important restriction on the net deposition rate of free particles against the longest relaxation time on the incipient percolation cluster.
Why fishbone-like instability
To help judge the result obtained, let the critical exponents take their mean-field values: z = 2, ν = 1/2. In this limit, equations (32) and (33) above reproduce, up to a change of variables, equations (13) and (14) of [8]. The latter set of equations appears in the basic theory of Alfvén instabilities as a simple model for the coupled kink-mode and trapped-particle system in a magnetically confined toroidal plasma where beams of energetic particles are injected at high power. The mode dubbed 'fishbone' is characterized by large-amplitude, periodic bursts of magnetohydrodynamic (MHD) fluctuations, which are found to correlate with significant losses of energetic beam ions [8]. By comparing the two sets of equations, one can see that the lattice occupancy per site p corresponds to the effective resonant beam-particle normalized pressure within the q = 1 surface (q is the familiar safety factor used in tokamak research), p_c corresponds to the mode excitation threshold, and the particle loss current I corresponds to the amplitude of the fishbone. This direct correspondence between the two models suggests considering the instability in equations (32) and (33) as an analog 'fishbone' instability for SOC dynamics.
This correspondence is not really surprising. Mathematically, it stems from the resonant character of fishbone excitation [52], implying that the energetic particle scattering process is directly proportional to the amplitude of fishbone [8]. This resonant property dictates a specific nonlinear twist to the fishbone cycle, differentiating it from other bursting instabilities in magnetically confined plasmas. It is this 'resonant' twist observed in the DPRW model system that identifies the analog 'fishbone' mode for SOC. The instability is global in that it involves oscillations of system-average parameters as a result of the lattice conductivity properties nonlinearly changing with the varying strength of the driving and is accompanied by a transition from weak, subdiffusive to strong, ballistic transport on the lattice, similar to the plasma confinement case [53,54].
We reiterate that the DPRW instability cycle exhibits characteristic aspects of strongly driven MHD instabilities in magnetic confinement devices [52,54], and its occurrence is a reminder of the basic physics of resonant fishbone excitation. Conversely, the nonlinear science of fishbone [55] might be thought of as being connected with those properties generic to strongly driven, interaction-dominated, thresholded dynamical systems such as the DPRW supercritical system, rather than unique to toroidally confined fusion plasmas.
Coexistence of self-organized criticality and coherent features: substorm behavior of the dynamic magnetosphere
The idea of fishbone-like instability in self-organized critical dynamics is very appealing as it addresses a type of behavior in which multi-scale features due to SOC coexist along with global or coherent features. One example of this coexistence can be found in the solar wind−magnetosphere interaction. It has been discussed by a few authors [56]- [58] that the coupled solar wind-magnetosphere-ionosphere system operates as an avalanching system and that there is a significant SOC component in the dynamics of magnetospheric storms and substorms, along with a coherent component [59] that evolves predictably through a sequence of clearly recognizable phases [60]. Here, we advocate a way of thinking [61] in which the SOC component is associated with the properties of self-organization of electric currents and magnetic field fluctuations in the plasma sheet of the Earth's magnetotail [62], whereas the coherent component is subordinate to the global instability of the SOC current system and the phenomenon of tail current disruption [63]. This means that the dynamic magnetosphere survives through a mixed SOC-coherent behavior.
In this spirit, we expect the input power due to magnetic reconnection at the Earth's dayside magnetopause to self-consistently control the system's departure from the state of marginal stability, or the SOC state, with stronger departures favoring coherent features. The magnetotail current system being close to SOC implies that the power spectral density of the magnetic fluctuation field is, in the parameter range of sufficiently low frequencies, given by a power-law distribution S(ω) ∝ ω −α , with α ≈ 1.3 (see table 1). The latter value falls within the typical range of variation of α as found from in situ satellite measurements [63]- [65]. With the input power going above a certain level, the dynamical system at SOC fails to accommodate the increased potential difference between the two flanks of the magnetotail. At this point, a portion of the cross-tail electric current will be redirected to the ionosphere (here considered as the analog 'ground circuit'), thus triggering a magnetospheric disturbance or a substorm. These processes are schematically illustrated in figure 4. The phenomenon of substorm bears signatures [62] enabling it to be considered as a second-order phase transition in the presence of a coexisting nonlocal symmetry [61], consistent with the description in terms of the fractional Ginzburg-Landau equation [66].
We should stress that we associate magnetospheric disturbances with instabilities on the top of the underlying SOC state, and not with the SOC behavior itself, contrary to the implication of [58]. In the above coupled system of equations, equations (32) and (33), we identify the driving rate, Z + , with the magnetic reconnection rate at the Earth's dayside magnetopause; the lattice occupancy parameter, p, with the average normalized energy density of magnetic fluctuation field in the magnetotail current sheet; and the particle loss current, I , with electric current in the ionosphere. Note that the period between the fishbone events (major magnetospheric disturbances) may vary with varying Z + . This is going to be the case in the realistic magnetosphere owing to the solar wind variability.
Summary and final remarks
In this paper, we have proposed a simple model of self-organized criticality, the DPRW model, which addresses the SOC problem as a transport problem for electric charges. The novel concepts of our study are (i) a theory of self-organized criticality based on the analogy with dielectric-relaxation phenomena in self-adjusting random media, and (ii) the prediction of a 'resonant' instability of SOC due to the nonlinearities present. The system adjusts itself to remain at the critical point via the mechanisms of hole hopping associated with the random walk-like motion of lattice defects on a self-consistently evolving percolation cluster. The relaxation to SOC of a slightly supercritical state is described by the Mittag-Leffler function E_γ[−(t/T_λ)^γ], this being the solution [28] to the fractional relaxation equation, and not by a simple exponential function as for standard relaxation. The durations of relaxation events are power-law distributed, with a diverging upper cut-off relaxation scale. The ideal SOC state requires that the driving rate go to zero faster than a certain scaling law as the percolation point is approached. The model belongs to the same universality class as the BTW sandpile, and should be distinguished from the DP-like SOC models.
In a narrower sense, the DPRW approach to SOC offers a simple yet relevant lattice model for dielectric-relaxation phenomena in systems with spatial disorder. One by-product of this approach is the case for the stretched-exponential, so-called KWW relaxation function, equation (20), which is often found empirically in various amorphous materials, such as in many polymers and glass-like materials near the glass transition temperature (for reviews, see [10] and [67], and references therein). In this connection, we reiterate that the DPRW SOC model gives an exponent of the stretched-exponential relaxation (for d = 2, γ ≈ 0.66; for d = 3, γ ≈ 0.4) in good agreement with the experimentally observed values [9,10,44]. Overdriving the DPRW system near self-organized criticality has a destabilizing effect on the SOC state. The fundamental physics of this instability consists in the following. Because of rapid, in the sense that Z_+ ≫ Z_+max ∝ |p − p_c|^{zν}, accumulation of the conducting sites, the system departs from the percolation point, and its geometry changes from fractal-like to crystalline-like. This means that the average conductivity inside the capacitor has greatly increased. As the lattice conducts more electricity, losses increase in the ground circuit. However, because the particle loss current has feedback on the lattice occupancy parameter, cross-talk is excited between the system's average conductivity response and the distance to the critical state. We have observed that the instability cycle is qualitatively similar to the excitation of the internal kink ('fishbone') mode in tokamaks with high-power beam injection (the lattice occupancy per site p corresponds to the effective resonant beam-particle normalized pressure within the q = 1 surface, p_c corresponds to the mode excitation threshold and the particle loss current I corresponds to the amplitude of the fishbone). The instability is 'resonant' in that the particle loss process is directly proportional to I. This resonant property dictates a specific nonlinear twist to the fishbone cycle, differentiating it from other bursting instabilities in magnetically confined plasmas.
We have discussed that the resonant instability, the fishbone, may have serious implications for the functioning of complex systems. The existence of an instability on top of SOC dynamics conforms with the results of [68], in which the traditional (sandpile) SOC model has been modified by adding diffusivity, giving rise to periodic relaxation-type events as a function of the system drive, while a pure SOC state requires a vanishing drive.
The excitation of fishbone-like instability in SOC systems leads to a type of behavior in which the multi-scale features due to SOC coexist along with the global or coherent features (i.e. mixed SOC-coherent behavior). One example of this coexistence is found in the solar wind-magnetosphere interaction. We expect the concept of mixed SOC-coherent behavior to be a plausible statistical picture for thresholded, dissipative, nonlinear dynamical systems in the parameter range of nonvanishing external forcing. In this respect, we speculate that some of the 'extreme' events, or system-scale responses, observed in complex natural and social systems [7,69] may, in fact, be the fishbone-like instabilities of SOC predicted by the present theory. It will be interesting to investigate whether such phenomena as, for instance, the El Niño Southern Oscillation or the Atlantic Multi-decadal Oscillation might be envisaged as fishbone-like events in the Earth's SOC climate system as a consequence of ocean-atmosphere coupling [70,71]. In the same spirit, the periodic occurrences of glacial periods on Earth might perhaps be considered as globally induced unstable climate modes stemming from cross-talk between air temperature and dust concentration. Support for this suggestion can be found in Antarctic ice records, as discussed in [72].
All in all, the fishbone phenomenon warns that repeating severe events are virtually unavoidable in driven systems.
"year": 2011,
"sha1": "0cb3a2c70cf2fcdaca6a3e0cdace5f3200d98fbb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/13/4/043034",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "dbbe79da500aa3b1d5f638c83bdea5b57a5de5e7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Matrix metalloproteinase-9 gene polymorphisms in nasal polyposis
Background Matrix metalloproteinase (MMP) is involved in the upper airway remodeling process. We hypothesized that genetic variants of the MMP-9 gene are associated with cases of chronic rhinosinusitis with nasal polyposis. Methods We conducted a case-control study in which 203 cases of chronic rhinosinusitis with nasal polyposis and 730 controls were enrolled. Three tagging single nucleotide polymorphisms (SNPs) and one promoter functional SNP, rs3918242, were selected. Hardy-Weinberg equilibrium (HWE) was tested for each SNP, and genetic effects were evaluated according to three inheritance modes. Haplotype analysis was also performed. Permutation was used to adjust for multiple testing. Results All four SNPs were in HWE. The T allele of promoter SNP rs3918242 was associated with chronic rhinosinusitis with nasal polyposis under the dominant (nominal p = 0.023, empirical p = 0.022, OR = 1.62) and additive models (nominal p = 0.012, empirical p = 0.011, OR = 1.60). The A allele of rs2274756 had a nominal p value of 0.034 under the dominant model and 0.020 under the additive model. Haplotype analysis including the four SNPs showed a global p value of 0.015, and the most significant haplotype had a p value of 0.0045. We did not see any SNP that was more significant in the recurrent cases. Conclusions We concluded that MMP-9 gene polymorphisms may influence susceptibility to the development of chronic rhinosinusitis with nasal polyposis in the Chinese population.
Background
Tissue remodeling has gained increasing interest in the pathogenesis of chronic upper and lower airway disease. It is a dynamic process that involves extracellular matrix (ECM) production and degradation leading to either a normal reconstruction process or a pathological structure [1]. Nasal polyposis is a chronic inflammatory disease in the upper airway. The histological appearance of nasal polyposis is characterized by inflammatory cell infiltration, modifications of epithelial cell differentiation, and tissue remodeling that includes basement membrane thickening, gland modifications, ECM accumulation and edema. Previous studies suggested that polyposis formation involved ECM protrusion through an initial localized epithelial defect [2]. Interactions between epithelial, stromal, and inflammatory cells could then perpetuate further polyposis growth. Although inflammatory cells, especially eosinophils, are thought to play an important role in nasal polyposis, the sequence of events for the development of polyposis is still controversial.
Matrix metalloproteinases (MMPs) are a family of zinc- and calcium-dependent endopeptidases that are known to be important in remodeling the ECM. MMP-9 (92 kDa type IV collagenase, gelatinase B) cleaves type IV collagen, which is the major structural component of the basement membrane. Studies also suggested that MMP-9 may play a crucial role in airway remodeling in asthma [3]. In transgenic animal studies, an elevated level of MMP-9 was associated with defects in bronchial architecture [4]. Upregulation of MMP-9 in nasal polyposis may damage the collagen of the basement membrane of the epithelia and blood vessels, causing increases in permeability and edema in the stroma. Elevated levels of MMP-9 protein [5][6][7][8] and MMP-9 mRNA [6,7] were found in cases of nasal polyposis. An elevated plasma MMP-9 level was also reported recently in patients with allergic nasal polyps [9]. Patients with a poor healing reaction after sinus surgery had more severe edematous and fibrotic changes [10], and higher amounts of MMP-9 in both nasal secretions [11] and connective tissue [12], when compared with good healers. Therefore MMP-9 may play a role in the pathogenesis or recurrence of nasal polyposis. To our knowledge, there are no published data regarding the relationship between MMP-9 genetic polymorphisms and the risk of chronic rhinosinusitis with nasal polyposis. We conducted a case-control study to systematically investigate the role of MMP-9 tagging single nucleotide polymorphisms (tSNPs) and a promoter functional polymorphism in the development of chronic rhinosinusitis with nasal polyposis in a Chinese population residing in Taiwan.
Subjects
We recruited 203 patients with bilateral chronic rhinosinusitis with nasal polyposis at the Kaohsiung Medical University Hospital between October 2005 and June 2007. The diagnosis of chronic rhinosinusitis with nasal polyposis was based on history, physical examination, nasal endoscopy, and sinus CT scan. Patients with malignancies or asthma were excluded from the study. All the patients were followed up at least three months after surgery. Recurrence was defined as a patient with newly developed pedunculated nasal polyps (instead of cobblestone or polypoid mucosa) fully occupying the middle meatus three months after surgery, using either an anterior rhinoscope or a nasal endoscope. Recurrence was identified in twenty-six patients for whom revision surgery was performed at our institution. A total of 730 control subjects were used in the present study; they were recruited from the general population among volunteers receiving a health screening examination at our hospital. None of the controls reported major disabling diseases upon enrollment. Information on demographic characteristics was collected. The study was approved by the Institutional Review Board (IRB) of Kaohsiung Medical University Hospital, and written informed consent was given by each subject.
SNP selection and Genotyping
Genomic DNA was extracted from peripheral blood by a standard method. Three tSNPs were selected from the HapMap Project (International HapMap Consortium) and all of them have a minor allele frequency ≥ 10% in the Han Chinese population. The three tSNPs are: the non-synonymous SNP rs2664538 (Gln/Arg) at exon 6 (this SNP was merged into rs17576), the intronic SNP rs3787268 at intron 8, and the non-synonymous SNP rs2274756 (Gln/Arg) at exon 12 (this SNP was merged into rs17577). In addition, we also chose one commonly studied promoter functional SNP (rs3918242, i.e. -1562 C/T). Genotyping for the three tSNPs was carried out by using the TaqMan 5' nuclease assay (Applied Biosystems, Foster City, USA). Briefly, PCR primers and TaqMan Minor Groove Binder (MGB) probes were designed and reactions were performed in 96-well microplates with ABI 9700 thermal cyclers (Applied Biosystems, Foster City, USA). Fluorescence was measured with an ABI 7500 Real Time PCR System (Applied Biosystems, Foster City, USA) and analyzed with its System SDS software version 1.2.3. Genotyping for the promoter SNP rs3918242 (-1562 C/T) was determined by the polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) method. The forward and reverse primers were 5' GCCTGGCACATAGTAGGCCC 3' and 5' CTTCCTAGCCAGCCGGCATC 3', respectively [13]. The restriction site was detected by SphI [13].
Statistical analysis
Continuous variables were analyzed by the independent t test and are presented as mean ± SD. The allele frequency was obtained by direct gene counting. Hardy-Weinberg equilibrium (HWE) was checked in controls by using the χ² test. Multiple logistic regression analysis was performed to assess the genetic effect with adjustment for age and sex. We examined the effect of the minor allele of each SNP in three genetic models (dominant, additive, and recessive). We also performed a subset analysis by stratifying the cases according to recurrence of the disease. SPSS version 13.0 for Windows (Chicago, IL, USA) was used for statistical analysis. Linkage disequilibrium (LD) was assessed for every pair of SNPs and haplotype blocks were defined using the default setting of the Haploview software [14]. We used the Hap-Clustering program [15] to evaluate haplotype-phenotype association. To adjust for multiple testing, we also present empirical p values based on 10,000 permutations. Table 1 shows the baseline characteristics of the subjects. The mean age (years) was 43.8 ± 16.5 in cases and 54.4 ± 11.9 in controls. The sex distribution in cases was 2.1:1 (male to female), which is similar to the ratio reported previously [16,17], and was 1:1.4 in controls. Sixty-four patients (31.5%) had recurrent nasal polyps.
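As an illustration of the HWE check and of the genotype codings behind the three inheritance modes described above, consider the following sketch; the genotype counts are invented for illustration, since the study's raw counts are not reproduced in the text.

```python
# Hardy-Weinberg chi-square for one SNP plus the genotype -> dose codings
# used for dominant, recessive and additive logistic-regression models.
n_AA, n_Aa, n_aa = 520, 185, 25            # hypothetical control counts
n = n_AA + n_Aa + n_aa
q = (2 * n_aa + n_Aa) / (2 * n)            # minor allele frequency
expected = [n * (1 - q) ** 2, 2 * n * q * (1 - q), n * q ** 2]
chi2 = sum((o - e) ** 2 / e
           for o, e in zip((n_AA, n_Aa, n_aa), expected))
print(f"MAF = {q:.3f}, HWE chi2 = {chi2:.3f} (1 d.f.)")

coding = {
    "dominant":  {"AA": 0, "Aa": 1, "aa": 1},   # carrier of the minor allele
    "recessive": {"AA": 0, "Aa": 0, "aa": 1},
    "additive":  {"AA": 0, "Aa": 1, "aa": 2},   # minor-allele dose
}
print(coding["additive"]["Aa"])   # e.g. a heterozygote contributes dose 1
```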
Single SNP Results
The distribution of MMP-9 genotypes was in HWE among controls (all p values > 0.05) for all SNPs. The genotyping call rates ranged from 92% to 98%. In the analysis of the overall data (Table 2), the multivariate logistic regression model showed that the T allele of the promoter SNP rs3918242 was associated with chronic rhinosinusitis with nasal polyposis under the dominant (nominal p = 0.023, empirical p = 0.022, OR = 1.62) and additive models (nominal p = 0.012, empirical p = 0.011, OR = 1.60). The A allele of rs2274756 showed a nominal p value of 0.034 under the dominant model and 0.020 under the additive model (Table 2).
LD and Haplotype Analysis
The four SNPs formed one haplotype block (Figure 1). Haplotype analysis demonstrated a global p value of 0.015 (Table 3). The most significant haplotype, TGGA (rs3918242/rs2664538/rs3787268/rs2274756), had a frequency of 14.7% in cases and 8.5% in controls (haplotype-specific p value of 0.0045).
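From the reported haplotype frequencies one can sketch an approximate odds ratio for the TGGA haplotype, treating each subject as contributing two chromosomes. This ignores phase uncertainty in haplotype estimation, so the counts below are rough reconstructions rather than the study's actual haplotype counts.

```python
import math

# Approximate TGGA odds ratio from the quoted frequencies (14.7% in cases,
# 8.5% in controls) with 2N chromosomes per group.
chrom_cases, chrom_controls = 2 * 203, 2 * 730
a = round(0.147 * chrom_cases)       # TGGA chromosomes among cases
b = chrom_cases - a                  # non-TGGA chromosomes among cases
c = round(0.085 * chrom_controls)    # TGGA chromosomes among controls
d = chrom_controls - c
OR = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)       # Woolf's log-OR standard error
lo = math.exp(math.log(OR) - 1.96 * se)
hi = math.exp(math.log(OR) + 1.96 * se)
print(f"OR = {OR:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```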
Discussion
We systematically investigated four SNPs at MMP-9 in relation to chronic rhinosinusitis with nasal polyposis in a Chinese population residing in Taiwan. The results showed that three SNPs were associated with the development of chronic rhinosinusitis with nasal polyposis. The most significant result was for the promoter polymorphism of the MMP-9 gene (rs3918242, i.e. -1562 C/T), which indicates that the rare T allele would significantly increase the risk for nasal polyposis. We did not find more significant results in the recurrent subjects. As a matter of fact, the frequencies of the risk T allele at rs3918242 and the risk A allele at rs2274756 are similar between the non-recurrent cases and the recurrent cases.
Using the haplotype analysis, the result gave us stronger statistical support for MMP-9 as a genetic marker. Therefore, the current data showed that MMP-9 polymorphisms might influence susceptibility to the development of chronic rhinosinusitis with nasal polyposis but might not increase the risk for recurrence. To our knowledge, this is the first report to demonstrate the potential contribution of MMP-9 genetic variants to the development of chronic rhinosinusitis with nasal polyposis. The regulation of MMPs is complex. It is considered that regulation of MMP activity occurs at three levels: gene transcription, activation of the secreted proenzyme, and inhibition by specific and non-specific inhibitors [18]. The regression analysis suggested that the two significant SNPs were not independent, and the promoter SNP remained significant after including rs2274756 at exon 12 in the model. Therefore, it is likely that the genetic effect of MMP-9 was mainly from the promoter SNP. Furthermore, MMP-9 protein [5][6][7][8] and mRNA [6,7] levels were reported to be different between patients with chronic rhinosinusitis with nasal polyposis and controls. Zhang et al. [19] also reported that the T allele of the promoter SNP rs3918242 at MMP-9 had a higher promoter activity, which would lead to a higher production of MMP-9, although a recent study did not find any differential expression between the T and C alleles [20]. Taken together, the studies using different approaches consistently imply that the high-activity T allele is likely to increase the risk for chronic rhinosinusitis with nasal polyposis.
Our study design has limitations. The sample size in the present study was moderate, and type I error is always possible. However, based on the result for the functional SNP rs3918242, the study population still provided a power of 85% to detect the risk allelic effect. Further studies to replicate our results are necessary. The follow-up period is limited in our study. Because nasal polyps may recur several years after surgery, extended follow-up is required prior to drawing further conclusions. Our controls did not receive a nasal examination to exclude the possibility of nasal polyposis. Because the prevalence of nasal polyps is usually less than 4% [21,22], we did not expect a significant number of our controls to have chronic rhinosinusitis with nasal polyposis. Furthermore, if any controls had chronic rhinosinusitis with nasal polyposis, the effect of such misclassification of disease status would only decrease the statistical power and reduce the significance. Therefore, our conclusion is valid even though we did not examine the controls. Although the sex distribution was different between cases and controls, the genotype distributions between male controls and female controls are very similar. Accordingly, our results were not confounded by the sex effect.
Conclusions
In conclusion, our study provides novel evidence that genetic polymorphisms in the MMP-9 gene can influence the risk for chronic rhinosinusitis with nasal polyposis. Furthermore, the T allele of the functional promoter SNP rs3918242, which has been shown to increase MMP-9 expression, is also a risk allele for chronic rhinosinusitis with nasal polyposis. However, the role of MMP-9 polymorphisms in the recurrence of the disease requires further investigation.
"year": 2010,
"sha1": "099b7bf2696e80f1b475460143e1c7fe639b2b2b",
"oa_license": "CCBY",
"oa_url": "https://bmcmedgenet.biomedcentral.com/track/pdf/10.1186/1471-2350-11-85",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db1c3f4ee1b4246820f7af55be085a2cc0fc7029",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Evolutionary Trends in the Mitochondrial Genome of Archaeplastida: How Does the GC Bias Affect the Transition from Water to Land?
Among the most intriguing mysteries in the evolutionary biology of photosynthetic organisms are the genesis and consequences of the dramatic increase in mitochondrial and nuclear genome sizes, together with the concomitant evolution of the three genetic compartments, particularly during the transition from water to land. To clarify the evolutionary trends in the mitochondrial genome of Archaeplastida, we analyzed the sequences of 37 complete genomes. In addition, we analyzed mitochondrial, plastidial and nuclear ribosomal DNA molecular markers from 100 species of Streptophyta for each subunit. Hierarchical models of sequence evolution were fitted to test for heterogeneity in base composition. The best resulting phylogenies were used for reconstructing the ancestral Guanine-Cytosine (GC) content and equilibrium GC frequency (GC*) using non-homogeneous and non-stationary models fitted with a maximum likelihood approach. The mitochondrial genome length was strongly related to repetitive sequences across Archaeplastida evolution; however, the length seemed not to be linked to the other studied variables, as different lineages showed diverse evolutionary patterns. In contrast, Streptophyta exhibited a powerful positive relationship between the GC content, non-coding DNA, and repetitive sequences, while the evolution of Chlorophyta reflected a strong positive linear relationship between the genome length and the number of genes.
Introduction
Mitochondrial, plastidial and nuclear genome lengths (GL) increased dramatically due to the addition of non-coding DNA (%NC) during the evolution of green plants, particularly during the transition from water to terrestrial life. This phenomenon occurred in parallel with an increase in Guanine-Cytosine (GC) content (%GC) and organism complexity [1]. The interactions among GL, %NC, the number of repeated sequences (NRS), their total length (RSL), and the %GC are not yet well understood for any of the three genetic compartments; however, plastids appear to be the most evolutionarily stable, with few changes in GL or %GC [2]. Evolutionary changes in the aforementioned variables may have occurred concurrently at two or three genetic compartments, and the factors determining this concurrence or the lack thereof are among the most intriguing puzzles in the evolutionary biology of primary photosynthetic eukaryotes.
Archaeplastida include Glaucophyta, Rhodophyta (red algae), and Viridiplantae (green plants), although the monophyly of this group is not exempt from controversy [3,4]. Green plants are further divided into two main clades: Chlorophyta, including most unicellular and marine algae, and Streptophyta, including most freshwater algae and land plants [5]. All these lineages have three genetic compartments with well-coordinated biochemical machinery, and they differ significantly in their architecture and evolution [6]. Lateral transfer among the three compartments occurred, especially from the organelles to the nucleus and from the plastid to the mitochondrion, whereas transfer from the mitochondrion to the plastid was almost non-existent [7]. Meaningful differences are reflected in some mitochondrial genome (mtDNA) characteristics of the Archaeplastida lineages, such as the GLs, genetic code, codon usage, gene content, and the degree of ribosomal gene fragmentation [8]. Throughout the transition from water to land, terrestrial plants acquired some peculiar features in their mtDNA, including large genomes with a high %NC, editing at the transcriptional level, genomic recombination, trans-splicing introns, foreign DNA insertions, lateral gene transfer, and gene duplications. These features are not yet widely studied in streptophyte green algae and early land plants [9]. Selection was tested in both Viridiplantae lineages as the driving force increasing mtDNA size and %GC, and two very different patterns arose: strong selection likely shaped codon usage in Chlorophyta, whereas mutation and genetic drift appeared to be the major evolutionary driving forces in Streptophyta [10,11].
These patterns in Streptophyta mtDNA, produced essentially by non-adaptive forces, depended on the effective population size, generation times and the differences between unicellularity and multicellularity, which was consistent with previous findings in plastids [12]. If strong selection was excluded, the challenge was to determine which other evolutionary force could explain the %GC increases throughout Streptophyta. Other possible explanations proposed for the GC bias, apart from selection, were mutational bias and GC-biased gene conversion (gBGC) [13-15]. The ribosomal DNA (rDNA) genes are among the most conserved sequences in the three genetic compartments and have very specific evolutionary dynamics, including a long-term high recombination rate due to concerted evolution.
They are also some of the most widely available sequences from eukaryotes in genomic databases. Therefore, they were considered useful genetic markers for analyzing %GC variation throughout the entire genome. Based on rDNA polymorphism data, variation in nuclear rDNA %GC across the phylogenetic trees of angiosperms and vertebrates was observed [15], together with a strong excess of SNPs (single nucleotide polymorphisms) for which either G or C was the majority allele. This was inconsistent with the mutational bias hypothesis and supported the GC-biased gene conversion (gBGC)/selection-driven evolution hypothesis.
The most outstanding aspect of this biased gene conversion was its impact on %GC, which affects the functional components of the genome and impedes natural selection (the Achilles' heel hypothesis) [14]. The gBGC appeared to play a significant role in the evolution of genetic systems (e.g., sexual reproduction and recombination, inbreeding avoidance mechanisms, and ploidy cycles) and in the senescence and degeneration of non-recombining regions [16]. Despite the importance of this evolutionary force, the phylogenetic distribution of gBGC in Streptophyta had not yet been studied in association with the transition from water to land across the three genetic compartments.
Is the Archaeplastida mitochondrial %GC related to the GL, %NC, NRS, gene number (GN) or coding sequences? If so, how are they linked, and what role does this play in different lineages? Is there heterogeneity in base composition throughout the streptophyte phylogeny? How is the %GC distributed in the three genetic compartments throughout the Streptophyta tree? Is the %GC increase concomitant in the three genome compartments? To answer these questions, our aims were the following: (i) To analyze the genomic variables with phylogenetically independent linear models in order to depict the evolutionary patterns of the mitochondrial genome in Archaeplastida. (ii) To examine the heterogeneity in base composition among the branches of the Streptophyta tree, using non-homogeneous models of sequence evolution, taking into account the phylogenetic relationships. (iii) To implement a reconstruction of the ancestral GC content and GC* (equilibrium GC frequency) [17] in the three genetic compartments throughout the Streptophyta phylogenetic tree to compare their evolution.
Genome Features
Across Archaeplastida evolution (Figure 1), taking all clades and studied variables into account (Supplementary Table S1), the only significant relationship found was between the NRS and the GL, with both log-transformed for linearity (Supplementary Materials Figure S1A). These two variables were also linearly related to the %NC; however, these relationships became unclear when the Streptophyta lineage was removed from the analyses and more significant when the Chlorophyta lineage was the one excluded (Supplementary Materials Figure S1B,C).
When the Chlorophyta were excluded, the %GC was related to the log-transformed NRS and to the %NC (Supplementary Materials Figure S1D,E). The effect of the NRS was mediated by the %NC (Figure 2c) and therefore lost its significance when the %NC effect was accounted for. In other words, the NRS affected the %NC, and this explained 46% of the variance in the %GC. The GL was not directly related to the %GC; however, it was clearly affected by the %NC, which explains 82% of its variance (Figure 2d). The effect of the NRS on the GL was also mediated by the %NC. On the other hand, when the Streptophyta clade was excluded from the analyses, the GL appeared to be directly related to the number of protein-coding genes (NPG) (Supplementary Materials Figure S1F).
Figure 2.
Phylogenetically independent contrasts (crunch) for the following relationships: Guanine-Cytosine (GC) content (%GC) with the number of repeated sequences (NRS), log-transformed (a), and with the non-coding sequences (%NC) (b); between the %NC and NRS (c); and genome length (GL) with the %NC (d). All models were fitted excluding the species of Chlorophyta. For the complete set of contrasts, see the supplementary material (Supplementary Materials Figure S1).
Meanwhile, across Archaeplastida, the number of genes varied greatly, both within lineages and between them. The main pattern was the gene number reduction in the Chlorophyceae, along with an extreme reduction in the mitochondrial genome. The genes maintained in most species were those for tRNA, rRNA and ribosomal proteins and those involved in respiration and oxidative phosphorylation (Supplementary Materials Figure S2 and Table S2). Of these, only six genes were preserved in the mtDNA of all the studied species (cob, cox1, nad1, nad4, nad5 and nad6). Therefore, mitochondria lost most of the original bacterial genes and conserved only those associated with their principal cellular functions, respiration and oxidative phosphorylation, and those involved in the genetic machinery (rRNA and tRNA), but not in all cases. Two hornworts (Nothoceros aenigmaticus and Phaeoceros laevis) lost most of the ribosomal protein-coding genes [18,19], as did Selaginella, which also lost all tRNA genes [20].
In addition, Chlorophyceae did not maintain any genes for ribosomal proteins, and the tRNAs were reduced considerably (Scenedesmus obliquus retained more tRNA genes but lost all ribosomal genes: rRNA and ribosomal protein-coding genes).
Heterogeneity in the Base Composition of Ribosomal Subunits in Streptophyta and the Reconstruction of the Ancestral %GC and GC* Content
The likelihood ratio test of non-homogeneous models showed the heterogeneity in the base composition among the clades studied. The non-homogeneous "terminal clades" hierarchical model fit the sequence heterogeneity better for the mtLSU, the cpLSU and the cpSSU (Table 1). Through the evolution of the streptophytes, from unicellular algae to angiosperms, through charophytes, mosses, ferns, and other lineages, the mtSSU increased its %GC. However, this was not in a progressive manner, but was variable according to the clades (Figure 3). This process exhibited a clear pattern for the mtSSU, with differences among the seed plants, ferns, and club mosses, as well as the other lineages. The ribosomal nSSU resembled this augmentation pattern, but it was much less clear for the cpSSU. In all cases, the mtSSU contained noticeably less %GC than the cpSSU and the nSSU. The nSSU %GC ranged from 57.16% in the Coleochaetophyceae to 62.17% in the Angiosperms and, regarding the mtSSU, from 45.24% in the Coleochaetophyceae to 51.36% in the Angiosperms (a 6% increase). The cpSSU also demonstrated a %GC augmentation pattern, ranging from the Klebsormidiophyta at 57.28% to 65.46% in the Monilophyta (an 8.3% increase), with a high %GC for the Zygnematophyceae (63.37%) and a low %GC for the Charophyceae (58.49%) and Coleochaetophyceae (59.64%). Although not concurrently, these patterns are similar for the LSU, for both organelles studied (Supplementary Materials Figure S3).
A notable result was the low GC* in the mitochondrial Anthocerotophyta ribosomal subunits (19.04%). This very low value seemed to be unrelated to any of the biological or genomic characteristics studied; thus, we had no explanation for this phenomenon, despite the peculiarly high amount of editing in this phylum. The large %GC and GC* in the Charophyceae nSSU were also surprising. This pattern was maintained when the analyses were repeated without invariant sites.
Furthermore, we thoroughly looked for concomitance in the patterns of %GC change between the genetic compartments, across species and clades. We did not find any clear relationship apart from the one between the large and small plastidial subunits, and only when coincident species were used. Therefore, although present in all three genetic compartments, the increase in %GC followed a distinct pattern in each one and did not occur concurrently across lineages.
Archaeplastida Mitochondrial Trends
The Archaeplastida mitochondrial genomes provided evidence that their evolution followed differentiated paths in the major lineages, as the evolutionary process is shaped by many chance events. In the framework of population genetics theory, the mutational-hazard hypothesis predicted a favorable environment for the proliferation of non-coding mtDNA when the product of the effective gene number per locus in the population (Ng) and the mutation rate (µ) is low. The small Ngµ value in the mtDNA of green plants (Ngµ << 1) compared with animals (Ngµ >> 1) intensified genetic drift, making it easier for alleles with high mutation rates to behave neutrally and thereby encouraging their fixation in the population [21]. This phenomenon could explain the difference between the small size and high mutation rate of animal mitochondrial genomes and the dramatic accumulation of non-coding mtDNA and the low mutation rate through the evolution of green plants.
In Streptophyta, an increase in the NRS increased the %NC, which in turn facilitated recombination and consequently raised the %GC through biased gene conversion. The best explanations for this observed gain in the GL and %GC appeared to be the mutational-hazard and the GC-biased gene conversion (gBGC) hypotheses, respectively. The promoted recombination was followed by an increase in %GC through biased gene conversion [13,21].
These non-adaptive forces acting along the transition from water to land represented an exaptive genomic platform that explains some of the observed directional trends in the evolution of land plant genomes: the extension of recombination and the expansion of genomic regulatory regions drove a progressive accumulation of certain protein families correlated with cell-type number, caused essentially by gene duplication, concomitant with the increase in organism complexity [22].
The chlorophytes are generally unicellular and sometimes parasitic. They exhibit a GL reduction when the gene number decreases. By contrast, Streptophyta evolution led to multicellular land plants with a combination of striking features: organismal complexity together with a dramatic increase of the GL and %GC in the mitochondria [23].
The models obtained from our results were very consistent despite the relatively small taxon sampling. These observations supported the hypothesis that the changes in these variables were caused by the same factors. Thus, genome enlargement could provide a favorable environment for an increase in the recombination rate; a greater number of mismatches (which must be repaired by specific enzymes) would then be produced, raising the probability that incorrect nucleotides are replaced by G or C via gene conversion bias. Consequently, the %GC could be used as a fingerprint for the amount of recombination in these genomes. These mtDNA recombination mechanisms (surveillance and repair of recombination-induced DNA damage, including mismatch repair) probably required the selective pressure for efficient repair to be relaxed, raising the mutation rate [13]. The increase in mitochondrial complexity throughout evolution did not occur in the mtDNA of multicellular animal lineages, although it did in their nuclear genome [24].
In Archaeplastida, the three genetic compartments must interact in coordination; consequently, a strong interactive adaptation between the organelles and the nucleus was necessary during the evolution of multicellularity [25]. Hence, the three genomes increased their biological complexity in land plants, which permitted them to perform different functions in different tissues. Generally, species with a high %GC in the cpDNA also have a high %GC in the mtDNA, but there are many exceptions [26,27]. In fact, the evolutionary factors acting on the organism are the same for all three genetic compartments, and the differences therefore reflect their distinct idiosyncrasies and origins.
The Distribution of GC Content and GC* in the Three Genetic Compartments for Streptophyta
The basal Streptophyta algae showed a different pattern of %GC in their genetic compartments than the Charophyceae and the terrestrial lineages. The %GC increased, especially from early land plants to angiosperms, yet there were intriguing exceptions to this pattern, such as the high %GC and GC* of the Charophyceae in the nSSU and mtSSU and their low values in the cpSSU.
In the most basal streptophyte algae, the disparity in %GC between the ribosomal subunits of the three genetic compartments resulted from strong selection rather than from gBGC, following a pattern similar to that observed in Chlorophyta [28]. Some aspects of the population biology of these lineages may also have been indirectly responsible for the %GC because, as mentioned above, the population size and mutation rate determined the GL. Chlamydomonas is a chlorophycean alga living in freshwater habitats, with an ecology similar to that of Mesostigma; therefore, these genera are expected to have similarly very large populations.
The Streptophyta algae are multicellular (except for the Desmidiales), though very small (except for the Charophyceae), and consequently they may maintain large populations. In contrast, the Charophyceae have a large body size, with few individuals living in ponds or lagoons, and hence a very small effective population size, although they are multinucleated. Terrestrial plants also have small populations compared with those of the basal Streptophyta algae. In fact, Harholt et al. [29] hypothesized that streptophyte algae lived on land before the emergence of embryophytes, which was recently verified by Wang et al. [3] in the common ancestor of Mesostigma viride and Chlorokybus atmophyticus, where the development of traits reflected adaptations to a subaerial/terrestrial habitat.
The reasons for the differences in recombination rates and gBGC among genomes, species and lineages remain unknown. Nevertheless, in the Charophyceae, the high %GC and GC* in the nucleus can be explained by the large C value (2C nuclear DNA content) (10-50 pg), which is higher than in the rest of the streptophyte algae (0.7-5 pg) and all the mosses studied (0.9-5 pg) [30].
Archaeplastida Mitochondrial Genome-Wide Characteristics
All 35 Archaeplastida mitochondrial genomes present in the NCBI database were selected. Two outgroups were also chosen for their unique characteristics. The protozoan Reclinomonas americana (Excavata) has the most gene-rich, least-derived mitochondrial genome among eukaryotes and is probably the one that best reflects the ancestral state [31]. Additionally, the parasite Rickettsia prowazekii (α-proteobacteria) is one of the closest living relatives of the eubacterial lineage from which the mitochondrial organelles originated [32]. The samples represented the major Archaeplastida clades, including Glaucophyta (n = 2), Rhodophyta (n = 4), Prasinophyta (n = 4), basal Chlorophyta (n = 5), Chlorophyceae (n = 6), basal Streptophyta (n = 3), Charophyceae (n = 2), including Nitella hyalina (GenBank database code JF810595), basal Embryophyta (n = 5), and Spermatophyta (n = 4). Certain features of the genome, including the %GC, were extracted from the databases at NCBI using Artemis software [33]. The following variables were also obtained for all species: GL, %GC, %NC, NPG, NRS and RSL. The complete list of species, accession numbers and genome characteristics is available in the supplementary materials (Supplementary Materials Table S1).
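Several of these variables can in principle be derived directly from a GenBank record. The following is a minimal sketch, not the curation pipeline used here: it assumes Biopython (gc_fraction requires Biopython 1.80 or later; older versions expose GC instead) and a hypothetical local file "mito.gb", and it approximates %NC as the fraction of positions not covered by CDS, rRNA or tRNA features:

```python
from Bio import SeqIO
from Bio.SeqUtils import gc_fraction  # Bio.SeqUtils.GC on Biopython < 1.80

record = SeqIO.read("mito.gb", "genbank")  # hypothetical local file

gl = len(record.seq)                    # genome length (GL)
gc = 100.0 * gc_fraction(record.seq)    # GC content (%GC)

# Mark every position covered by a coding or structural-RNA feature,
# then treat the remainder as non-coding DNA (%NC).
coding = bytearray(gl)
npg = 0
for feat in record.features:
    if feat.type == "CDS":
        npg += 1                        # number of protein-coding genes (NPG)
    if feat.type in ("CDS", "rRNA", "tRNA"):
        for part in feat.location.parts:
            s, e = int(part.start), int(part.end)
            coding[s:e] = b"\x01" * (e - s)

nc = 100.0 * (gl - sum(coding)) / gl
print(f"GL={gl} bp  %GC={gc:.1f}  %NC={nc:.1f}  NPG={npg}")
```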
To find the repeated sequences, the complete mtDNA sequences (forward, reverse, complement and reverse complement) from selected species were analyzed with REPuter [34]. The minimal repeat size was limited to 20 nucleotides (nt) for general analyses, 50 nt for less repetitive mitochondrial sequences and 100 nt in more repetitive sequences, when using the largest repeats in each genome. The %GC within repetitions was determined using Emboss [35].
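REPuter computes maximal repeats, including reversed and complemented copies; the toy sketch below only counts exact forward k-mers occurring more than once, which conveys the idea behind the NRS metric without reproducing REPuter's algorithm. The sequence and the %GC-within-repeats step are illustrative:

```python
from collections import defaultdict

def repeated_kmers(seq, k=20):
    """Return {kmer: [positions]} for exact forward k-mers seen 2+ times."""
    positions = defaultdict(list)
    for i in range(len(seq) - k + 1):
        positions[seq[i:i + k]].append(i)
    return {kmer: pos for kmer, pos in positions.items() if len(pos) > 1}

seq = "ATGCGT" * 10 + "TTTTAACCGG" + "ATGCGT" * 5  # toy sequence
reps = repeated_kmers(seq, k=20)
print(f"{len(reps)} distinct repeated 20-mers")

# %GC within repeats, analogous to the Emboss step
gc_in_reps = [
    100.0 * (kmer.count("G") + kmer.count("C")) / len(kmer) for kmer in reps
]
```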
Linear model analyses were used to determine the shape and significance of the relationship between each pair of variables. These linear models were implemented in three different ways to test for consistency: classic linear regression and two different phylogenetically independent contrasts, the Phylogenetic Generalized Least Squares (PGLS) method and the "crunch" method, both from the "caper" package in R [36]. PGLS is a powerful method for estimating adaptive optima using continuous data [37]. This method assumes that the analyzed trait evolved by Brownian motion, so that the trait covariance between any pair of taxa decreases linearly with the time (branch length) since their divergence. The methods were originally provided in the programs CAIC [38] and MacroCAIC [39]. Both programs calculate phylogenetically independent contrasts for a set of variables and then use linear models of those contrasts to test for evolutionary relationships. All contrast model functions enforce regression through the origin. Mediation tests were also performed to determine whether the relationships between pairs of variables were mediated by a third variable.
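For readers unfamiliar with phylogenetically independent contrasts, the self-contained sketch below implements Felsenstein's (1985) contrast algorithm on a toy four-taxon tree and fits the through-the-origin regression that the contrast methods rely on. It is an illustration of the technique with made-up trait values, not the "caper" code used in the analyses:

```python
import math

# A node is either a tip ("name", trait_value) or an internal node
# (left_child, right_child, branch_len_left, branch_len_right).
def contrasts(node, out):
    """Post-order pass; returns (trait estimate, extra branch length)."""
    if isinstance(node[0], str):                    # tip
        return node[1], 0.0
    left, right, bl, br = node
    x1, add1 = contrasts(left, out)
    x2, add2 = contrasts(right, out)
    v1, v2 = bl + add1, br + add2
    out.append((x1 - x2) / math.sqrt(v1 + v2))      # standardized contrast
    x = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)     # weighted ancestral value
    return x, v1 * v2 / (v1 + v2)                   # extra length for parent

# Toy tree ((A,B),(C,D)); the same topology carries both traits.
def tree(trait):
    a, b, c, d = trait
    return ((("A", a), ("B", b), 1.0, 1.0),
            (("C", c), ("D", d), 1.0, 1.0), 0.5, 0.5)

cx, cy = [], []
contrasts(tree([2.1, 2.9, 5.0, 6.2]), cx)       # e.g. log(NRS) at the tips
contrasts(tree([38.0, 41.0, 46.0, 49.5]), cy)   # e.g. %GC at the tips

# Regression through the origin, as enforced for contrast models.
slope = sum(x * y for x, y in zip(cx, cy)) / sum(x * x for x in cx)
print(f"{len(cx)} contrasts, slope through origin = {slope:.3f}")
```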
All these analyses were carried out on three different data sets: (i) all the studied species without the outgroups, (ii) eliminating the Chlorophyta, and (iii) eliminating the Streptophyta. These two lineages evolved by different evolutionary paths, and therefore represented different non-comparable patterns.
The assumptions of normality and homoscedasticity of the residuals were evaluated to verify the appropriateness of the linear modeling. A Bonferroni correction was applied to the results of all these multiple tests; therefore, linear relationships were considered significant only when p < 0.005.
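As a minimal illustration of the correction step (the p-values below are placeholders):

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.004, 0.012, 0.030, 0.20]  # hypothetical raw p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
# With 10 pairwise tests, the per-test threshold becomes 0.05 / 10 = 0.005,
# matching the cut-off used above.
```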
The phylogenetic tree used for the independent contrasts was built from the only six protein-coding genes shared by the 37 species studied (cob, cox1, nad1, nad4, nad5 and nad6). These gene sequences were translated into amino acids with TranslatorX [40], aligned with Muscle [41] and trimmed with GBLOCKs [42] with default parameters, resulting in a concatenated matrix of 1725 amino acids. The sequence matrix for each gene was subjected to ProtTest to find the best-fit evolutionary model [43]. In order to test the phylogenetic signal, TREE-PUZZLE was used [44]. For the maximum-likelihood (ML) analyses, the concatenated protein matrix was analyzed with RAxML v. 7.2.8 [45] using the WAG model [46] and a bootstrap analysis with 1000 replicates.
Bayesian analyses were implemented with MrBayes v.2.1.0 [47]. The concatenated protein matrix was analyzed using three model partitions: two fixed cpREV models [48] of amino acid substitution, with inv-gamma and gamma distributions of rates, and a third with a fixed Jones model with a gamma distribution of rates. The analyses in all cases consisted of three million generations, four independent runs and four Markov chains. The trees were sampled every 1000 generations; stationarity was assessed by examining the standard deviation of the split frequencies and by plotting the -ln L per generation using Tracer v1.4 [49], and the trees generated before stationarity were discarded. A further Bayesian analysis with a fixed cpREV model for the six coincident mitochondrial protein-coding genes was consistent with the tree obtained with ML and proved to be the best fit to the most accepted phylogenies. Therefore, this tree was utilized for the statistical phylogenetically independent analyses.
Analyses of Heterogeneity in Base Composition
Mitochondrial ribosomal small and large subunits (mtSSU and mtLSU), chloroplast ribosomal subunits (cpSSU and cpLSU), and nuclear ribosomal small subunit (nSSU) sequences were downloaded from the SILVA database [50]. A hundred species for each ribosomal subunit were selected in order to cover all available lineages across the Streptophyta tree, discarding short, incomplete or highly gapped sequences. The species names for each subunit and genetic compartment are available in the supplementary material (Supplementary Materials Table S3).
Zygnematophyceae were not represented in the mitochondrial subunits, nor were Klebsormidiophyceae in the mtSSU, as there were no available sequences in the databases. Homologous rRNAs for each subunit and genetic compartment were aligned using ClustalW [51]. Escobar et al. [15] eliminated the hypermutable CpG sites to prevent bias in the angiosperm %GC calculation. However, as the Archaeplastida lineages diverged very early, hypermutable CpG site detection cannot be performed reliably, so this step was omitted from the analyses [52]. The sequences used had approximate lengths of 1289 (mtSSU), 1384 (mtLSU), 1272 (nSSU), 1162 (cpSSU) and 2155 (cpLSU) base pairs.
Phylogenetic trees were inferred with PhyML 3.0 [53] for the five RNA markers used. The models of sequence evolution were obtained with the program jModelTest2 [54] using the general time reversible model (GTR) + I + G, with four categories for the gamma distribution, parsimony starting trees and SPR (subtree pruning and regrafting) branch swapping. In general, terminal clades were grouped consistently with the most current Streptophyta phylogeny, although deep clades were less resolved. Therefore, some branches of the trees resulting from the above analyses were relocated with the Baobab software [55] in order to adjust them to the accepted phylogenies [56]. We subsequently re-optimized the branch lengths with PhyML. The ML analyses of non-homogeneous models presented here were implemented with the modified trees; however, analyses with the unmodified trees were performed to test for robustness.
The heterogeneity in %GC was tested with four non-homogeneous models of sequence evolution. These models were fitted with the software BppML [57] and NHML [58], which utilize an ML approach that is not stationary (ancestral and current %GC may differ) and with no homogeneity (the branches may have different %GC) in the base composition across the phylogeny. Thus, the %GC at the nodes and the GC* at branches of the phylogenetic tree were estimated. These hierarchical models were fitted to test whether the branches underwent similar evolution in their base composition or not, using the fixed trees from PhyML. Nucleotide substitution models based on Galtier and Gouy [58] were implemented for all BppML analyses, using four gamma categories and two parameters: theta (GC*) and kappa (Ts/Tv).
The most parameter-rich model used one parameter set per branch, where each branch had its own %GC and GC*. The likelihood ratio test (LRT) was used to assess whether more complex nested models provided a significantly improved fit compared with simpler models.
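The LRT compares twice the log-likelihood difference of nested models against a chi-squared distribution with degrees of freedom equal to the number of extra parameters. A minimal sketch with placeholder log-likelihoods and degrees of freedom:

```python
from scipy.stats import chi2

def lrt(loglik_simple, loglik_complex, extra_params):
    """Likelihood ratio test for nested models; returns (statistic, p)."""
    stat = 2.0 * (loglik_complex - loglik_simple)
    return stat, chi2.sf(stat, df=extra_params)

# Hypothetical values: homogeneous model vs. "terminal clades" model
stat, p = lrt(-12510.4, -12466.9, extra_params=12)
print(f"LRT = {stat:.1f}, p = {p:.3g}")
```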
Streptophyta Ribosomal GC Content and GC*
The %GC and GC* were estimated for all species and nodes across the Streptophyta phylogenetic tree, using the same method as Escobar et al. [15]. The GC* was therefore defined as

GC* = (AT→GC) / (AT→GC + GC→AT),

where AT→GC refers to the substitution rate from A or T to G or C bases and GC→AT denotes the inverse. The GC* was considered a more appropriate estimator of evolutionary dynamics than the %GC, because GC* reflects the relative contribution of changes from AT to GC independently of the total number of mutations [59]. NHML software [58] was used to implement a ML approach for the non-stationary and non-homogeneous probabilistic model. The objective was to reconstruct the ancestral %GC and GC* distributions, optimizing the parameters of the "terminal clades" model over the trees obtained from BppML for each of the five ribosomal subunits. The analyses were performed with and without invariable sites to check consistency. Sequence gaps were removed in all cases. The results of these analyses were two phylogenetic trees for each ribosomal subunit analyzed, with pseudo-bootstrap values for %GC and GC*. The GC* is the theta parameter estimated in this ML framework [58].
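In code, the equilibrium frequency is simply the forward rate over the total rate. A one-function sketch with made-up rates:

```python
def gc_star(rate_at_to_gc, rate_gc_to_at):
    """Equilibrium GC frequency: GC* = u / (u + v)."""
    return rate_at_to_gc / (rate_at_to_gc + rate_gc_to_at)

# Hypothetical branch-specific substitution rates
print(gc_star(0.8, 1.2))  # 0.4 -> this branch drifts toward 40% GC
```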
Concomitant Evolution between Genetic Compartments and Clades
With the aforementioned trees, phylogenetically independent contrasts (PGLS and "crunch") of %GC were carried out to check for concomitant evolution. The following contrasts were made: between the small and the large subunits of each genetic compartment; between the large subunits of the mitochondrion and plastid; and between the small subunits of the three genetic compartments. These tests were performed in two different ways: first, using the %GC of the species shared between subunit pairs (Supplementary Materials, Table S3), and second, using the node values (NHML) of the Streptophyta clades, despite the species not being the same. In both cases, each analysis was implemented with three different tree topologies: the phylogenetic trees of the two subunits studied and a tree with fixed topology (all branch lengths equal to one). These analyses were also conducted with and without invariant sites and with Bonferroni correction for multiple (six) comparisons, considering differences significant only when p < 0.008.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2223-7747/9/3/358/s1. Table S1: Genomic variables studied: species, GenBank accession number, clade, genome length (GL), GC content (%GC), non-coding DNA (%NC), gene number (GN), number of protein-coding genes (NPG), number of repeated sequences (NRS) and repeated sequences total length (RSL). Table S2: Presence of mitochondrial genes; the number of genes per species and the number of studied species per gene are given. Table S3: GC content in the ribosomal subunits (RSU) from the 100 species of Streptophyta used in the analysis of each subunit; list of species used to estimate the phylogeny of each ribosomal subunit. Figure S1: Linear regression (1, dashed lines), Phylogenetic Generalized Least Squares (PGLS) method (1, continuous lines) and "crunch" method (2) for the relationships between genomic variables, with all species (black), excluding the species of Chlorophyta (red) and excluding the species of Streptophyta (green): A) effect of the log-transformed number of repeated sequences (NRS) on the non-coding genome (%NC); B) effect of %NC on log-transformed genome length (GL); C) effect of log(NRS) on GC content (%GC); D) effect of %NC on %GC; E) effect of the number of protein-coding genes (NPG) on log(GL). Figure S2: Mitochondrial gene classes across Archaeplastida; average gene number for each function in each clade. Figure S3: Evolution of GC content and GC* across the phylogeny of Streptophyta: A) mitochondrial ribosomal LSU; B) plastidial ribosomal LSU. Values correspond to the ancestral GC content at nodes, or GC* (equilibrium GC content) in parentheses. Colors on terminal branches represent average GC content (blue: lowest; red: highest). The color scale is relative to the data set in each tree and is not directly comparable between trees. The list of species and GC content at the terminal branches of each ribosomal subunit is available in Supplementary Table S3.
"year": 2020,
"sha1": "fe4aa571285406d9399f582c395d3f9c3bb1c5bf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/9/3/358/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "862c8f9dea4c7dfbd64b0ac1c8c226f7df34c879",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Giant cell tumour of the bone treated with denosumab: How has the blood supply and oncological prognosis of the tumour changed?
Background: Denosumab is gradually being applied to refractory or unresectable giant cell tumour of the bone. Whether denosumab can effectively reduce the blood supply of the tumour and bring clinical benefit is worthy of study. The aim of this study was to evaluate the related changes after treatment: blood supply, surgical plan downstaging, surgical difficulty and oncological prognosis. Methods: A self-case-control study was performed from June 2014 to November 2016, and 18 patients were enrolled. Patients received subcutaneous denosumab 120 mg every 4 weeks preoperatively, with additional doses administered on Days 8 and 15 during the first month of therapy. The initial treatment duration was 12 weeks. After 12 weeks of treatment, enhanced CT examination was performed to evaluate whether surgical treatment was practicable. The patients received preoperative denosumab treatment for 5 (median 3, range 3-12) months on average. The microvessel density of tumour samples was calculated to evaluate tumour blood supply. The computed tomography (CT) enhancement rate was compared before and after treatment. The following parameters were recorded: clinical benefits, serious side effects, CT enhancement rate, surgical plans, intraoperative blood loss, operative time, surgical difficulty, histological changes and local recurrence. The patients were followed up every 3 months postoperatively. Results: The average CT enhancement rate of the lesions was 2.08 before and 1.40 after treatment (p = 0.000). The unenhanced CT value was significantly increased after treatment (p = 0.038). The CT enhancement rate changed more significantly in pelvic or sacral lesions than in limb lesions (p = 0.024). Sixteen cases underwent final surgery, and the surgical plan was downstaged. Histological examination showed that tumour cells were significantly reduced or even absent after treatment. The microvessel density decreased significantly after treatment. The mean postoperative follow-up was 18.8 (10-31) months, and five patients had local recurrence. The high local recurrence rate (4/6) in sacral tumours may be related to the increased difficulty of curettage. Conclusion: Denosumab treatment can reduce the blood supply of giant cell tumour. Sacral or pelvic lesions changed more significantly than limb lesions. Surgical plan downstaging can also be achieved. The clear margin after denosumab treatment facilitated tumour resection but increased the difficulty of curettage surgery, and the high recurrence rate of sacral tumours is a concern. The Translational Impact of this Article: Denosumab is a new type of humanized monoclonal antibody that has shown some effect in the treatment of giant cell tumour of the bone. Preoperative treatment with denosumab can reduce intraoperative blood loss and downstage the surgical plan in suitable cases.
Introduction

Giant cell tumour of the bone (GCTB) is a primary, locally aggressive bone tumour which accounts for approximately 20% of primary benign bone tumours and 5% of all primary bone tumours [1]. Histologically, GCTB is composed of sheets of neoplastic mononuclear cells interspersed with uniformly distributed large osteoclast-like multinucleated giant cells [2]. The multinucleated giant cells are similar to osteoclasts in morphology, ultrastructure, histochemistry and bone-resorbing function [3].
Denosumab is a new type of humanized monoclonal antibody against the receptor activator of nuclear factor-κB ligand (RANKL). It binds RANKL specifically and blocks the RANKL-receptor activator of nuclear factor-κB (RANK) pathway. Thus, it can interfere with the survival and differentiation of osteoclasts and inhibit osteoclast-mediated bone destruction [4]. Denosumab has gradually been applied in the treatment of refractory giant cell tumour of the bone in recent years [5-7]. These clinical studies showed that neoadjuvant treatment with denosumab may make the tumour margin clear and reduce tumour size. Therefore, it can transform unresectable lesions into resectable lesions and provide an opportunity to cure patients. The application of denosumab may bring a new effective treatment for cases with excessive surgical morbidity because of a complex anatomical location [8,9].
After treatment with denosumab, the reported changes in lesions include cortical and subchondral bone thickening, marginal sclerosis, new bone formation and pathological fracture healing [10,11]. Denosumab treatment may downstage surgery so that the tumour becomes operable, or it may avoid resection altogether. The surgical plan can be changed to less morbid procedures or joint preservation after denosumab treatment [6,10]. Whether the difficulty of intralesional curettage is increased by tumour ossification with septa formation is noteworthy: if the ossified lesion cannot be completely removed, whether the local recurrence rate increases rather than decreases should be studied and analysed. Moreover, when a GCTB is large or located in the pelvis or sacrum, the intraoperative blood loss and related risks are substantial. Therefore, we wanted to evaluate whether denosumab can reduce the tumour blood supply. The microvessel density (MVD) of the tumour samples was calculated to evaluate tumour blood supply. Enhanced computed tomography (CT) examination can reflect the blood supply through the CT value; in theory, if the blood vessels and blood supply decrease, the enhanced CT value should also decrease [12-14]. The aim of the study was to answer the above questions and evaluate the related changes after treatment: blood supply, surgical plan downstaging, surgical difficulty and oncological prognosis.
Patients
This study was a prospective self-controlled case study of patients with GCTB in our hospital. The ethics committee of our institution reviewed and approved this study. The inclusion criteria were adults or skeletally mature adolescents (≥12 years of age), histologically confirmed GCTB, active and measurable bone lesions that could be evaluated, Stage 3 tumour (as per the Campanacci staging system [2]), and an unresectable lesion or a joint/important structure that could not be retained. Exclusion criteria were lesions that had undergone arterial embolization, radiotherapy or any other treatment that may affect the blood supply, a history of osteonecrosis or osteomyelitis, and pregnancy. From June 2014 to November 2016, a total of 18 patients were enrolled (Table 1). Two experienced pathologists in our hospital reviewed the pathologic examinations independently. There was no disagreement on the pathological diagnosis of GCTB (Fig. 1). Eighteen patients received denosumab therapy and underwent enhanced CT examinations. The mean age was 31.3 (18-48) years. There were 9 men and 9 women. Nine lesions were located in the pelvis or sacrum (7 sacrum, 1 ilium and 1 ischium), and nine lesions were located in the limbs (3 tibia, 2 radius, 1 humerus, 1 femur, 1 ulna and 1 fibula).
Treatment regime and CT evaluation procedures
Enhanced CT examination of the tumour was performed before denosumab therapy. Patients received subcutaneous denosumab 120 mg every 4 weeks preoperatively, with additional doses administered on Days 8 and 15 during the first month of therapy only. The initial treatment duration was 12 weeks. After the 12-week treatment, enhanced CT examination was performed again. If the lesion had become resectable or the joint/important structures could be retained, denosumab treatment was stopped and the patient received the operation. If this standard was not achieved, the patient continued to receive denosumab treatment. The patients received preoperative denosumab treatment for 5 (median 3, range 3-12) months on average.
Based on the size of the bone lesion, four to eight levels were selected for measurement. The CT values of the same lesion areas were measured before and after treatment on the same CT machine. The injection volume and speed of contrast and the CT scan times were the same before and after treatment. The main vessel at the same level was also measured (Fig. 2). The CT enhancement rate was calculated as the ratio of the enhanced CT value to the unenhanced CT value. With the vascular enhancement rate as a reference, the lesion was comparable before and after treatment.
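The enhancement rate itself is a simple ratio of mean Hounsfield-unit values within matched regions of interest. A minimal sketch, assuming per-level ROI means have already been extracted from the scans (all numbers below are hypothetical):

```python
import numpy as np

def enhancement_rate(unenhanced_hu, enhanced_hu):
    """Ratio of mean enhanced to mean unenhanced CT values over the levels."""
    return float(np.mean(enhanced_hu) / np.mean(unenhanced_hu))

# Hypothetical ROI means for one lesion at six measured levels
pre  = enhancement_rate([46, 44, 47, 45, 43, 48], [95, 90, 99, 92, 88, 101])
post = enhancement_rate([82, 85, 90, 88, 80, 86], [110, 112, 118, 115, 104, 113])
print(f"before: {pre:.2f}, after: {post:.2f}")  # the enhancement rate drops
```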
Immunohistochemistry
Before and after treatment, immunohistochemical staining and MVD calculation of the tumour samples were performed. To identify MVD, immunohistochemical staining was performed using an anti-human CD34 antibody. [Table 1, listing each patient's sex, age, tumour site, Campanacci stage, surgery type, months of denosumab treatment and recurrence status, appears here in the original; only a fragment survived extraction.] MVD was evaluated by two independent pathologists in a blinded manner, as described previously [15,16]. The slides were scanned at 40× and 100× magnifications. Positive staining of vascular endothelial cells separated from adjacent vessels, tumour cells or connective tissue was counted as a single vessel; even if such vessels may be different sections of the same adjacent vessel, they were counted separately. The presence or absence of red blood cells was not used to identify blood vessels. Finally, the average blood vessel density over three hot spots was calculated. In histology, mononuclear cells are identified as short spindle-shaped and oval cells with a single nucleus; osteoclast-like giant cells are identified as large multinucleated cells with up to hundreds of nuclei (Fig. 1).
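Manual hot-spot counting can be roughly approximated computationally by labelling connected components in a binary mask of CD34-positive pixels. The sketch below is only an illustration of that idea, not the pathologists' procedure; it assumes such masks are available as NumPy arrays (here randomly generated) and ignores the vessel-separation rules applied during manual counting:

```python
import numpy as np
from scipy import ndimage

def count_vessels(mask):
    """Count connected CD34-positive regions; each region = one vessel."""
    _, n_vessels = ndimage.label(mask)
    return n_vessels

# Hypothetical boolean masks (True = CD34-positive pixel) for three hot spots
rng = np.random.default_rng(0)
hotspots = [rng.random((200, 200)) > 0.995 for _ in range(3)]
mvd = np.mean([count_vessels(m) for m in hotspots])
print(f"mean microvessel count over 3 hot spots: {mvd:.1f}")
```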
Data record and follow-up
The related parameters were recorded before and/or after denosumab treatment as follows: clinical benefits, serious side effects, CT enhancement rate, surgical plans, intraoperative blood loss, operative time, surgical difficulty, histological changes and local recurrence. The patients were followed up every 3 months postoperatively. Clinical examination, X-ray and CT scans of the primary site were performed every 3 months. Bone scans and chest CT scans were performed every 6 months. Postoperative local recurrence was defined as the lesion reappearing at the site of the primary tumour on postoperative imaging examinations. The local recurrence time was defined as the interval from the operation to the time at which the lesion reappeared on imaging examinations.
Statistical analysis
The data analysis was performed with SPSS software (version 19.0; IBM Corp., Armonk, NY, USA). Continuous variables were compared by the t-test, and categorical variables were compared by the chi-square test or Fisher's exact test. A p value of less than 0.05 was considered statistically significant.
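A minimal sketch of the two test families with SciPy, using placeholder data rather than the study's values (a paired t-test for before/after measurements on the same patients; Fisher's exact test for a 2x2 recurrence table):

```python
from scipy.stats import ttest_rel, fisher_exact

# Hypothetical paired enhancement rates before/after denosumab
before = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8]
after  = [1.4, 1.3, 1.6, 1.4, 1.5, 1.2]
t, p = ttest_rel(before, after)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Hypothetical recurrence counts: [[sacral recurred, sacral not],
#                                  [limb recurred,   limb not]]
odds, p_fisher = fisher_exact([[4, 2], [1, 9]])
print(f"Fisher's exact p = {p_fisher:.3f}")
```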
Results
After treatment, all patients had clinical benefits such as pain reduction and increased mobility and function. There were no serious side effects, and all patients tolerated the treatment well. Before treatment, the mean unenhanced and enhanced CT values of the main vessel were 44.0 (38-51) and 136.1 (94-170), respectively. The mean enhancement rate was 3.13 (1.92-4.56). After treatment, the mean unenhanced and enhanced CT values of the main vessel were 43.4 (35-55) and 138.6 (93-158), respectively. The mean enhancement rate was 3.28 (1.90-4.77). There was no significant difference before and after treatment (p = 0.669, t = 0.187). The CT examinations were therefore comparable before and after treatment.
Before treatment, the mean unenhanced and enhanced CT values of the lesion were 45.7 (33-65) and 92.5 (50-150), respectively. The mean enhancement rate was 2.08 (1.22-4.05). After treatment, the mean unenhanced and enhanced CT values of the lesion were 83.9 (32-357) and 105.8 (37-380), respectively. The mean enhancement rate was 1.40 (1.02-2.31). The enhancement rate changed significantly after treatment (p = 0.000, t = 17.664). The unenhanced CT value was significantly elevated after treatment (p = 0.038, t = 4.650), suggesting increased sclerosis of the lesion and new bone formation (Figs. 4 and 5).
The mean enhancement rate of sacral or pelvic lesions before and after treatment was 2.51 and 1.48, respectively (p = 0.001, F = 18.650). The mean enhancement rate of limb lesions before and after treatment was 1.66 and 1.25, respectively (p = 0.042, F = 4.909). The enhancement rate of sacral or pelvic lesions decreased more significantly than that of limb lesions (p = 0.024, F = 6.268) (Fig. 6).
The average MVD was 224.4 (68-324) before treatment and 106.8 (21-197) after treatment (p = 0.000, F = 42.437). After treatment, histopathology showed the disappearance of osteoclast-like giant cells and a significant decrease in mononuclear cells (Fig. 3). Gross specimens showed that the tumour had become obviously stiff and firm, totally distinct from typical GCTB.
The primary surgical plans before treatment in the nine patients with limb tumours were as follows: two patients could not receive an operation (unclear tumour range); five patients were planned for joint/prosthesis replacement; and two patients were planned for amputation. After denosumab treatment, all nine patients were managed with a changed surgical plan: the 2 patients with unresectable tumours received tumour resection; the 5 patients planned for joint/prosthesis replacement received curettage with joint sparing; and the 2 patients planned for amputation received limb-salvage tumour resection. The sacral/pelvic tumours with large bone destruction and a very thin or absent bone shell received denosumab treatment for the following reasons: decreasing intraoperative blood loss; marginal sclerosis; and bone shell formation to avoid collapse of the pelvic ring and severe incapacity. After treatment, seven sacral/pelvic tumours received curettage, and all patients avoided high-morbidity procedures. After denosumab treatment, the tumours showed significant sclerosis and intralesional new bone formation. These changes may increase the difficulty of intralesional curettage because of tumour ossification with septa formation, especially in sacral tumours. At the same time, the sclerotic tumour with a clear margin can facilitate tumour resection because the tumour is consolidated, especially in limb tumours with diffuse destruction and an unclear margin (Fig. 7).
The mean intraoperative blood loss and operative time in patients with limb tumours were 300 (30-800) ml and 184 (150-240) minutes, respectively. The mean intraoperative blood loss and operative time in patients with sacral/pelvic tumours were 1943 (600-3000) ml and 256 (180-360) minutes, respectively.
The mean postoperative follow-up time was 18.8 (10-31) months. Five patients had local recurrence, at a mean of 7.8 (5-12) months postoperatively. The recurrent tumours comprised four primary sacral tumours and one recurrent limb tumour with multiple soft tissue recurrences. The four sacral cases received operations again, and no further recurrence was found. The limb recurrence was in a patient with multiple, poorly demarcated soft tissue recurrences after distal ulna resection. After 6 months of denosumab treatment, the clinical symptoms were relieved and nine soft tissue lesions were clearly identified by CT, so we performed resection of the multiple lesions (Fig. 7). However, multiple extensive soft tissue recurrences were found soon after surgery. The patient received amputation, and the histological result showed no malignant transformation. The multiple lung metastases also showed significant progression after denosumab was discontinued (Fig. 8).
Discussion
With the discovery of RANKL and RANK and the related research [17,18], the formation and regulatory mechanism of the osteoclast-like multinucleated giant cells in GCTB have become clear. Denosumab binds RANKL specifically and blocks the RANKL-RANK pathway, thereby inhibiting osteoclast-mediated bone destruction [4,19]. The aim of this study was to evaluate the related changes after treatment: blood supply, surgical plan downstaging, surgical difficulty and oncological prognosis.
All patients in our study showed clinical benefits such as pain reduction and increased mobility and function. In a Phase 2 clinical trial [10], denosumab was used in 35 cases of recurrent or unresectable GCTB. After 25 weeks of treatment, 86% of the cases achieved an effective response and clinical benefit, including pain relief and functional improvement. That clinical study confirmed the inhibitory effect of denosumab on osteoclast formation and activation in GCTB. In 2016, Dubory et al. [20] reported eight spinal GCTB with a good response to denosumab treatment for more than 6 months; the pain and neurological symptoms were relieved.
When the bone destruction and soft tissue mass are large, especially in the sacrum and pelvis, surgical treatment is very difficult. The intraoperative blood loss is usually substantial, which may seriously interfere with surgical management and perioperative safety. If the blood supply of the tumour and the blood loss can be reduced, the operation will be calmer and safer. The CT enhancement rate can reflect the blood supply of a tumour and even suggest the characteristics and prognosis of some tumours [21,22].

Figure 4. The change in the enhancement rate of the lesion before and after treatment (each point corresponds to the enhancement rate in one patient). CT = computed tomography.
Figure 5. Comparison of the change in the enhancement rate of the lesion and the vessel before and after treatment. CT = computed tomography.
Figure 6. Comparison of the enhancement rate of sacral or pelvic lesions and limb lesions before and after treatment. CT = computed tomography.
The CT values of the same lesion areas were measured before and after treatment on the same CT machine. The injection volume and speed of contrast and the CT scan times were also the same before and after treatment. The vascular enhancement rate before and after treatment was similar, which indicated that the conditions of the CT examinations were comparable. Therefore, the method of this study is reliable.
Our results showed that the enhancement rate of the tumour was significantly decreased after treatment, suggesting a decreased blood supply. This result was supported by the MVD calculation of the tumour. After treatment, histopathology also showed the disappearance of osteoclast-like giant cells and a significant decrease in mononuclear cells. Further analysis showed that the enhancement rate of pelvic and sacral tumours changed more significantly. The mean intraoperative blood loss for sacral/pelvic tumours was 1943 (600-3000) ml, and subjectively the intraoperative bleeding was not as brisk as in previous cases treated without denosumab. Therefore, preoperative denosumab treatment is more useful for decreasing blood loss in the sacrum/pelvis. Although the CT enhancement rate and the MVD calculation showed a decreasing blood supply, the mechanism by which denosumab inhibits the blood supply is still unclear. We think it may be related to the disappearance of osteoclast-like giant cells and the significant decrease in mononuclear cells. Further study of the mechanism needs to be carried out.
After treatment, histopathology showed the disappearance of osteoclast-like giant cells and a significant decrease in mononuclear cells. Another Phase 2 clinical trial [7] in recurrent or unresectable GCTB also obtained good imaging and histology results: the osteoclast-like giant cells decreased by more than 90%, while the tumour stromal cells also decreased. Isabella et al. [11] reported significant imaging and histological changes after 6 months of denosumab treatment: the giant cells and the expression of RANKL almost completely disappeared, post-treatment images showed that the cortical bone was significantly thickened, and the lesions could be treated by curettage without resection.
A Phase 2 clinical trial [6] which included 222 patients showed that, after a median of 19.5 months of treatment, 96% of the cases originally planned for joint replacement and 86% of those originally planned for joint arthrodesis were changed to operations that could retain the joint. Denosumab treatment may also downstage the surgical plan in pelvic tumours [9]. After denosumab treatment, the patients in our study were managed with downstaged surgery: two patients with unresectable tumours received tumour resection; five patients planned for joint/prosthesis replacement received curettage with joint sparing; and two patients planned for amputation received limb-salvage tumour resection. In limb tumours, surgical treatment was downstaged, and the local recurrence rate was low. The clear tumour range facilitated resection, and the thickened cortical bone facilitated curettage. The sacral/pelvic tumours received curettage, and high-morbidity procedures were avoided. Although small blood loss and short operative times were shown, we found that such curettage actually reduced the tumour range of the original surgical plan. This kind of downstaged surgery did not remove the sclerotic parts of the lesion, which may lay a hidden danger for postoperative recurrence.

Figure 7. The patient with tumour recurrence of the distal ulna received preoperative denosumab treatment and tumour resection. The recurrent tumours were not clearly shown before treatment (A), and the tumour ranges were clearly shown after treatment (B). The multiple lesions were marked before the operation (C), and then all visible lesions were excised (D).
The significantly elevated CT value of the tumour suggested increasing sclerosis of the lesion. These changes may facilitate tumour resection because of the clear margin, but they increase the difficulty of intralesional curettage because of tumour ossification with septa formation, especially in sacral tumours. Although the new bone-shell formation facilitates tumour curettage, this sclerotic bone, which contains the stromal cells of GCTB, may be the source of tumour recurrence after discontinuation of denosumab. A recent study [23] examined the viability and osteoclastogenic capabilities of the neoplastic stromal cells of GCTB and showed that the stromal cells are quiescent during denosumab treatment, but the neoplastic cells become proliferative again once the microenvironment is free of denosumab.
Sometimes it was difficult to distinguish tumour ossification from normal bone, and separating the sclerotic lesion from nerves or vessels was also difficult and challenging. Previous reports gave the recurrence rate of sacral GCTB after intralesional curettage as about 20–40% [24–27], but the local recurrence rate in our study was 66.7% (4/6). The high recurrence rate of sacral tumours after denosumab treatment is probably related to these surgical difficulties. A prospective study of GCTB [14] showed a local recurrence rate of 17% with preoperative denosumab treatment; the authors considered that the thickened cortical bone and osseous tumour matrix increased the difficulty of determining the tumour range [14]. In sacral tumours, it is impossible to achieve the kind of extended curettage we performed in limb tumours, so we found more recurrences in sacral tumours and few recurrences in limb tumours. The difficulty of resection was decreased after medication, because the tumour border became clearer and we could obtain the originally planned surgical margin.
Analysing the only limb recurrence in our study, we found that the recurrence was very rapid and severe after discontinuation, and the patient had to undergo amputation. Although uncontrolled local recurrence and rapid progression of lung metastasis were found, the histological result still showed no malignant transformation. As nine cases of malignant transformation of GCT after denosumab therapy have been reported [6,10,28–30], we should be alert to the safety of denosumab; however, new bone formation after denosumab therapy should not be misinterpreted as malignant transformation.
There were some limitations to our study. First, it was a retrospective study. Second, the sample size was relatively small, because the incidence of GCTB is low and only eligible cases were included. Third, we tended to include Stage 3 or more aggressive tumours, which may have introduced selection bias; this bias may also be related to the high recurrence rate of sacral tumours, as we tend to use denosumab in higher-grade or more aggressive tumours in which intralesional curettage is more difficult. The role of intraoperative navigation in increasing accuracy can be investigated in future studies.
Our study showed significant effects of denosumab in reducing tumour blood supply and intraoperative blood loss, and downstaging of the surgical plan could also be achieved. The clear margin after denosumab treatment facilitated tumour resection, but the increased difficulty of curettage surgery and the high recurrence rate of sacral tumours remain concerns. A surgical plan designed on the basis of the tumour range before treatment may be useful in decreasing local recurrence. The optimal preoperative treatment duration and the concern about a withdrawal rebound phenomenon still need to be resolved. | 2019-03-18T14:04:09.859Z | 2018-11-07T00:00:00.000 | {
"year": 2018,
"sha1": "d3ad44fe64071fdc80d409bcd1b773f6271f7310",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jot.2018.10.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b55fc2b00c7bb637f0ce764d5e5e88c37a3e69ee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210040989 | pes2o/s2orc | v3-fos-license | Improved kinetic behaviour of Mg(NH2)2-2LiH doped with nanostructured K-modified-LixTiyOz for hydrogen storage
The Mg(NH2)2 + 2LiH system is considered an interesting solid-state hydrogen storage material owing to its low thermodynamic stability of ca. 40 kJ/mol H2 and its high gravimetric hydrogen capacity of 5.6 wt.%. However, high kinetic barriers lead to slow absorption/desorption rates even at relatively high temperatures (>180 °C). In this work, we investigate the effects of the addition of K-modified LixTiyOz on the absorption/desorption behaviour of the Mg(NH2)2 + 2LiH system. In comparison with pristine Mg(NH2)2 + 2LiH, the system containing a tiny amount of nanostructured K-modified LixTiyOz shows enhanced absorption/desorption behaviour. The doped material presents a noticeably reduced (∼30 °C) desorption onset temperature, markedly shorter hydrogen absorption/desorption times and a reversible hydrogen capacity of about 3 wt.% H2 upon cycling. Studies of the absorption/desorption processes and micro/nanostructural characterization of the Mg(NH2)2 + 2LiH + K-modified LixTiyOz system indicate that in situ formed nanostructured K2TiO3 is mainly responsible for the observed improved kinetic behaviour.
One of the limiting factors for the implementation of hydrogen in stationary and mobile applications is the lack of an efficient and safe storage system. For mobile applications, a fuel-cell electric car requires about 5 kg of hydrogen in order to achieve a driving range of ca. 500 km 1. However, storing 5 kg of hydrogen in a high-pressure (700 bar) tank requires an internal volume of 122 litres 2. In order to improve the volumetric hydrogen storage capacity, solid-state storage in metal hydrides is considered an effective approach [3][4][5][6][7][8]. As an example, excluding the volume of the tank material, 5 kg of hydrogen can be stored in magnesium hydride (MgH2) occupying a volume of only 46 litres 7. However, due to its high desorption enthalpy (ΔHdes = 74 kJ mol−1), MgH2 requires high dehydrogenation temperatures (>300 °C) 9. Metal amides (i.e. NaNH2 and KNH2), formed by the reaction of alkali metals (i.e. Na, K) with gaseous ammonia, were first discovered at the beginning of the 1800s 10,11. During the last 50 years, metal amides were not considered potential hydrogen storage materials, since the main gaseous product detected from their thermal decomposition was ammonia 12,13. In 2002, Chen et al. reported for the first time that a material composed of LiNH2 and LiH was able to reversibly store 6.5 wt.% of H2 at 255 °C 14. Thereafter, the focus turned to synthesizing and understanding the mechanisms of amide-imide systems for hydrogen storage 15,16. Replacing LiNH2 with Mg(NH2)2, a reversible H2 storage capacity of 5.5 wt.% at operating temperatures of 200 °C is obtained according to reactions (1) and (2) (ΔHdes = 40 kJ mol−1) 17.

Additive synthesis. The reagents used in this work were Mg(NH2)2 (described in the following subsection), LiH (Alfa Aesar, 97% purity), anatase TiO2 (Sigma Aldrich, >99% purity, -325 mesh) and KH (Sigma Aldrich, 35% suspension in mineral oil). The investigated additives were obtained by milling LiH, TiO2 and KH in different stoichiometric ratios under argon atmosphere for two hours and then annealing them under Ar atmosphere at 600 °C for 8 hours. The stoichiometries of the reagents used to synthesize the additives were: (1) 0.5LiH + TiO2 and (2) 0.5LiH + TiO2 + 0.25KH. In addition to the prepared additives, KH alone was also used as an additive. In order to separate the mineral oil from the KH, three washing cycles in hexane were carried out; the hexane was then removed under dynamic vacuum.
Material synthesis. Mg(NH2)2 (95% purity) was synthesized in-house by ball milling MgH2 under NH3 atmosphere, followed by annealing at 300 °C under NH3 atmosphere. The details of the synthesis were described in our previous study 50. The Mg(NH2)2 was mixed with LiH (Alfa Aesar, 97% purity) and 1.0, 2.5 or 5 mol.% of additives (Section 2.2). All materials were milled in a Fritsch P6 planetary ball mill for 5 hours with a ball-to-powder ratio of 60:1 under 50 bar of H2 pressure. The sample names used to identify the prepared specimens are listed in Table 1.
Characterization techniques.
Ex situ powder X-ray diffraction (PXD) was applied for the identification of crystalline phases, using a Bruker D8 Discover diffractometer equipped with a Cu X-ray source (λ = 1.54 Å) operating at 50 kV and 1000 mA and a 2D VANTEC detector. Diffraction patterns were collected in the 2θ range from 20° to 80°. A sample holder sealed with a polymethylmethacrylate (PMMA) dome was used to prevent oxidation of the material during the PXD measurements.
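As a quick check of what this angular window corresponds to in real space, Bragg's law d = λ/(2 sin θ) can be evaluated at the limits of the scan; the short Python snippet below is an illustrative calculation, not part of the original analysis.

```python
import numpy as np

wavelength = 1.54  # Å, Cu radiation as stated in the text

def d_spacing(two_theta_deg, wl=wavelength):
    """Bragg's law: d = lambda / (2 sin(theta))."""
    theta = np.radians(two_theta_deg / 2.0)
    return wl / (2.0 * np.sin(theta))

# Limits of the reported scan range (2theta = 20..80 degrees)
print(f"d at 2theta=20°: {d_spacing(20):.2f} Å")  # ~4.43 Å
print(f"d at 2theta=80°: {d_spacing(80):.2f} Å")  # ~1.20 Å
```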
In situ synchrotron radiation powder X-ray diffraction (SR-PXD) was performed using a specially designed cell 51. This cell, with a sapphire capillary, allows measurements under controlled gas atmosphere in a pressure range from 0.01 to 200 bar. Measurements were performed at the Deutsches Elektronen-Synchrotron (DESY) at the P02.1 beamline, which is equipped with a Perkin Elmer XRD1621 area detector and a 60 keV X-ray source (λ = 0.207 Å). The Mg-Li-5LTOK sample was heated from room temperature (RT) to 300 °C at a heating rate of 5 °C/min under 1 bar of H2 pressure. A two-dimensional SR-PXD pattern was collected every 10 seconds. The collected data were integrated to one-dimensional diffraction patterns using the Fit2D software 52,53.
Differential scanning calorimetry (DSC) measurements were performed in a Netzsch DSC 204 HP calorimeter located inside an argon-filled glovebox (H2O and O2 levels below 1 ppm). Before starting the DSC measurements, the residual argon gas inside the chamber was removed by first evacuating and then flushing the chamber with hydrogen. A mass flow meter was used to limit the deviation of the hydrogen pressure to ±0.2 bar of H2 during heating and cooling. In order to measure the apparent activation energies of the 1st and 2nd desorption, about 10-15 mg of each sample were placed in an Al2O3 crucible and heated from room temperature (RT) up to 300 °C under 1 bar of H2 pressure at heating rates of 1, 3, 5 and 10 °C/min. For the 2nd desorption, as-milled samples were first desorbed by heating from RT to 220 °C under 1 bar of H2 pressure; the materials were then reabsorbed by heating from RT to 180 °C under 100 bar of H2 pressure. In order to evaluate the effectiveness of the additives on the kinetic behaviour of the material, the apparent activation energies (Ea) of the 1st and 2nd desorption reactions were calculated via the Kissinger method 54. This method is suitable for samples that exhibit multi-step reactions, and it allows Ea of a reaction process to be determined without assuming a specific kinetic model, i.e. without determining the rate-limiting step of the reaction. The equation used for the Ea calculation is shown in Eq. 3,

ln(β/Tm²) = −Ea/(R·Tm) + ln(A·R/Ea)    (3)

where A is the pre-exponential (frequency) factor and R is the gas constant. The temperature of the maximum reaction rate (Tm) was obtained from DSC curves measured at heating rates (β) of 1, 3, 5 and 10 °C/min. Then ln(β/Tm²) was plotted against 1/Tm, and Ea (kJ/mol H2) and A (1/s) were calculated from the linear fit. The goodness of fit was determined by examining the correlation between the experimental and predicted values; for a good fit, the R-square value should be near 1 50.
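As an illustrative sketch of how Ea and A fall out of Eq. 3, the snippet below performs the linear Kissinger fit for a set of hypothetical (β, Tm) pairs; the numerical values are made up for demonstration and are not the paper's data.

```python
import numpy as np

R = 8.314  # J/(mol·K), gas constant

# Hypothetical DSC peak temperatures Tm (K) at heating rates beta (K/min)
beta = np.array([1.0, 3.0, 5.0, 10.0]) / 60.0     # convert to K/s
tm = np.array([465.0, 478.0, 484.0, 493.0])       # K, made-up values

# Kissinger plot: ln(beta/Tm^2) vs 1/Tm is linear with slope -Ea/R
y = np.log(beta / tm**2)
x = 1.0 / tm
slope, intercept = np.polyfit(x, y, 1)

ea = -slope * R                    # J/mol
a = np.exp(intercept) * ea / R     # 1/s, since intercept = ln(A*R/Ea)

print(f"Ea ≈ {ea / 1000:.0f} kJ/mol, A ≈ {a:.2e} 1/s")
```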
In order to assess the rate-limiting steps of the absorption/desorption processes in the studied system, the Sharp and Jones method was applied 55,56. In this method, the experimental data are expressed as

g(α) = A · (t/t0.5)

where A is a rate constant and t0.5 is the time at which the reacted fraction α = 0.5. The fraction α is taken as the hydrogen capacity over the maximum capacity reached by each sample. By implementing different rate equations g(α), several plots of the theoretical against the experimental reduced time (t/t0.5) are obtained. In this study, we applied this method to the 1st, 2nd and 5th absorption/desorption curves between 0.1 and 0.8 of the overall hydrogen capacity. The best-fitting reaction rate model must obey the following rules: the slope of the fitted line should be ∼1, the intercept ∼0 and R² ∼ 1. Details of the implemented rate equations are given in our previous work 50. IR spectroscopy was performed with an Agilent Technologies Cary 630 FT-IR located in an argon-filled glovebox (H2O and O2 levels below 1 ppm). Each measurement was acquired in transmission mode in the spectral range 650 cm−1-4000 cm−1 with a resolution of 4 cm−1 50.
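Returning to the Sharp and Jones procedure above, a minimal sketch of this model-discrimination step is shown below: it computes theoretical reduced-time curves for two of the models named later in the text (F1: JMA with n = 1, and D3: three-dimensional diffusion for spherical particles) and linearly fits them against a hypothetical experimental curve. The model forms are standard in the solid-state kinetics literature, but the data and usage here are illustrative assumptions.

```python
import numpy as np

# Candidate rate-equation models g(alpha)
MODELS = {
    "F1 (JMA, n=1)": lambda a: -np.log(1.0 - a),
    "D3 (3D diffusion, spheres)": lambda a: (1.0 - (1.0 - a) ** (1.0 / 3.0)) ** 2,
}

def sharp_jones(t, alpha, g, lo=0.1, hi=0.8):
    """Fit theoretical vs experimental reduced time (t/t0.5) for one model."""
    mask = (alpha >= lo) & (alpha <= hi)
    t_half = np.interp(0.5, alpha, t)        # time at which alpha = 0.5
    x = t[mask] / t_half                     # experimental reduced time
    y = g(alpha[mask]) / g(0.5)              # theoretical reduced time
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, intercept, r2

# Hypothetical desorption curve following first-order (F1) kinetics, k = 0.05 1/min
t = np.linspace(0.1, 60.0, 200)              # min
alpha = 1.0 - np.exp(-0.05 * t)

for name, g in MODELS.items():
    s, i, r2 = sharp_jones(t, alpha, g)
    print(f"{name}: slope={s:.2f}, intercept={i:.2f}, R2={r2:.3f}")
# The best-fitting model should give slope ~1, intercept ~0 and R2 ~1.
```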
Gases evolved during the desorption reactions were analysed using a Hiden Analytical HAL 201 mass spectrometer coupled with a Netzsch STA 409 C differential thermal analyser (DTA-MS). About 2 mg of sample was placed in an Al2O3 crucible and heated from room temperature up to 300 °C in the DTA apparatus at a heating rate of 3 °C/min. Measurements were done under a 50 ml/min Ar flow.
The absorption rates and gravimetric capacities were assessed using a Sieverts apparatus (HERA Hydrogen Storage Systems, Longueuil, QC, Canada) operating on the differential pressure technique. The hydrogen gas used in the experiments had a purity of 99.999% (5.0 H2). The temperature and pressure conditions are given in the figure caption of each experiment in the manuscript. The sample mass for all measurements was approximately 100 mg.
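The differential-pressure principle amounts to converting a measured pressure change in a calibrated volume into an absorbed hydrogen mass. A minimal ideal-gas sketch of this conversion is given below; a real Sieverts analysis would also apply gas non-ideality corrections, and all numerical values here are hypothetical.

```python
R = 8.314  # J/(mol·K)

def wt_percent_absorbed(dp_bar, volume_l, temp_k, sample_mass_g, molar_mass=2.016):
    """Hydrogen uptake (wt.%) from a pressure drop in a calibrated reference volume.

    Ideal-gas approximation: n = dP*V/(R*T); real-gas corrections are
    omitted here for brevity.
    """
    n_h2 = (dp_bar * 1e5) * (volume_l * 1e-3) / (R * temp_k)  # mol H2
    m_h2 = n_h2 * molar_mass                                   # g
    return 100.0 * m_h2 / (sample_mass_g + m_h2)

# Hypothetical reading: 0.17 bar drop in a 100 ml volume at 25 °C, 100 mg sample
print(f"{wt_percent_absorbed(0.17, 0.1, 298.15, 0.1):.2f} wt.%")
```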
High-resolution transmission electron microscopy (HR-TEM) observations, diffraction patterns (DP) and dark-field (DF) imaging were carried out using a Tecnai G2 microscope with an information limit of 0.12 nm and a Schottky emission gun operating at 300 kV. Samples after milling and after absorption/desorption were observed. All samples were prepared inside a glovebox with controlled O2 and H2O atmosphere (<1 ppm) by dispersing the powders onto carbon grids. In order to avoid oxidation/hydrolysis of the material while introducing the grids into the microscope column, the dispersed powder on the grid was covered with a special polymeric film which does not preclude the electron interactions with the sample 57. HR-TEM observations of the identified zones were then done. HR-TEM image processing was done with the following programs: Digital Micrograph (License no. 90294175) and i-TEM (License no. A2382500, EMSIS GmbH, Münster, Germany).
X-ray absorption spectroscopy experiments in the XANES (X-ray absorption near edge structure) region of LiTi2O4, of the K-modified additive (10 wt.% K1.04Ti8O16, 17 wt.% LiTi2O4, 27 wt.% LiTiO2 and 46 wt.% K2Ti8O17), of anatase TiO2, and of the as-milled Mg-Li-5LTO, as-milled Mg-Li-5LTOK, as-milled Mg-Li-2.5LTOK, desorbed Mg-Li-2.5LTOK and reabsorbed Mg-Li-2.5LTOK samples were carried out using an in-house R-XAS Looper spectrometer from Rigaku. The measurements were performed in transmission mode around the Ti K-edge (4966 eV) in the energy range from 4950 eV to 5030 eV at ambient temperature. The optimal amount of material for the measurements was calculated with the program XAFSmass (version 2012/04, ALBA synchrotron, Barcelona, Spain) 58. The samples were prepared inside a glovebox by mixing them with anhydrous boron nitride (powder, purity 98%; Sigma-Aldrich, St. Louis, MO, USA) in a mortar and then pressing the mixture into pellets of 10 mm diameter. The pellets were sealed with Kapton tape (50 μm thick) to prevent oxidation/hydrolysis of the samples. XAS data processing and fitting were performed using the IFEFFIT software package (version 1.2.11, University of Chicago, Chicago, IL, USA) 59.
Results
Results obtained from the thermal behaviour, mass spectroscopy, desorption activation energy, volumetric measurements, ex situ PXD, in situ SR-PXD and infrared spectroscopy for all compositions are presented in this section. In Table 1, the starting stoichiometric compositions of all additives are shown; details regarding the additive synthesis were discussed in the experimental section. The Rietveld refinement of the PXD data for the 0.5LiH + TiO2 stoichiometric composition indicates that this additive is composed solely of LiTi2O4 after milling and annealing (ESI Fig. S1). In the case of the 0.5LiH + TiO2 + 0.25KH stoichiometric composition, after the synthesis the additive is composed of 10 wt.% K1.04Ti8O16, 17 wt.% LiTi2O4, 27 wt.% LiTiO2 and 46 wt.% K2Ti8O17 (ESI Fig. S2).

First desorption/absorption performance and apparent activation energies. The thermal behaviour of the as-prepared samples is presented in Fig. 1A. The DTA curve of the additive-free sample Mg-Li exhibits two endothermic events between 170 °C and 230 °C. These two events are due to the desorption reactions of Eqs. 1 and 2; the temperature of the desorption peak maximum is 205 °C. The Mg-Li-5LTO sample shows a desorption trend similar to that of Mg-Li: the presence of the additive improves neither the onset nor the peak-maximum temperature. MS analyses of the gases (H2 and NH3) evolving from the two samples upon heat treatment are almost identical (Fig. 1B,C). However, the sample containing the K-modified additive, Mg-Li-5LTOK, shows a reduction of 30 °C in the desorption onset temperature and in the peak maximum of the main thermal event. Moreover, the release of NH3 is suppressed until 220 °C. Similar positive effects of K-based additives on amide-hydride systems were reported previously in the literature 40. In order to understand the processes taking place upon desorption, the evolution of the crystalline phases was studied by in situ SR-PXD (Fig. 1D). The PXD pattern acquired at RT reveals that the reflections are ascribable to the presence of the additive (LiTi2O4). However, owing to the broadness of the observed diffraction peaks, we cannot exclude the presence of several phases having the general formula LixTiyO4 (0.75 ≤ x ≤ 1, 1.9 ≤ y ≤ 2). This suggests that the additive's composition changes upon milling. The presence of reflections belonging to Li2Mg(NH)2 (orthorhombic phase) at around 170 °C indicates that the desorption reaction has already started, in good agreement with the DTA analysis (Fig. 1A). The formation of cubic Li2Mg(NH)2 takes place at temperatures above 220 °C. This transition is expected, since the phase transformation of Li2Mg(NH)2 from the orthorhombic to the cubic structure occurs above 200 °C 60. Unfortunately, in this analysis it was not possible to identify any crystalline potassium compounds, which implies that the potassium-containing phases are either in an amorphous or a nanocrystalline state.
First desorption kinetics of the as-milled samples are presented in Fig. 2A. Desorption of Mg-Li starts at 180 °C, and 4.5 wt.% of gas is released within 120 minutes. Mg-Li-5LTO displays a behaviour similar to Mg-Li, though with a capacity reduced to 2.3 wt.% due to the presence of a significant amount of additive (26 wt.%). Modifying the additive with potassium (Mg-Li-5LTOK, Mg-Li-2.5LTOK and Mg-Li-1LTOK) leads to a notable reduction of the desorption onset temperature from 180 °C to 150 °C. This temperature reduction changes to some extent with the amount of LTOK additive: the inset plot of Fig. 2A shows that higher additive amounts lead to slightly lower onset temperatures. Additionally, DSC analyses show that the onset temperature of Mg-Li-2.5LTOK is about 15 °C lower than that of Mg-Li-1LTOK (ESI Fig. S3). Clearly, decreasing the amount of LTOK leads to an increase in the desorbed gas amount: as seen in Fig. 2A, Mg-Li-5LTOK desorbs 3 wt.%, whereas Mg-Li-2.5LTOK and Mg-Li-1LTOK desorb 3.8 wt.% and 4.3 wt.%, respectively. K-containing additives, especially KH, are known to improve the reaction kinetics of the Mg(NH2)2 + LiH system [40][41][42]. In order to compare our findings with the pure-KH-added system, the Mg-Li-5K sample (Mg(NH2)2 + 2LiH + 0.05KH) was prepared. Although the lowest onset temperature (135 °C) is obtained with this sample, its reaction rate is slower in comparison with the samples containing the K-modified additive.
Reabsorption kinetics of Mg-Li and Mg-Li-5LTO are sluggish and require more than 10 hours to reach full capacity (Fig. 2B). On the contrary, the K-modified samples absorb H2 notably faster (Mg-Li-5LTOK within 1 hour, Mg-Li-2.5LTOK within 2.5 hours and Mg-Li-1LTOK within 2 hours). Therefore, the effect of the K-modified additive on Mg-Li is clearly seen in both the absorption and the desorption kinetic properties. Although Mg-Li-1LTOK has fast H2 absorption kinetics, its H2 capacity is reduced from 4.3 to 3 wt.% after a single cycle. After cycling, the sample combining good H2 absorption kinetics and cycling stability is Mg-Li-2.5LTOK; for this reason, we chose this sample for further investigation of its cycling stability in comparison with the Mg(NH2)2 + 2LiH system. Figure 3A,B show the Kissinger plots and the values of Ea. The 1st desorption reaction of Mg-Li has an activation energy of 183 ± 7 kJ/mol H2 (Fig. 3A). The presence of the additives 5LTO and 5LTOK lowers Ea to 170 ± 3 kJ/mol H2 and 173 ± 2 kJ/mol H2, respectively, as well as the frequency factor (A). On the contrary, the Ea value rises to 211 ± 1 kJ/mol H2 for the sample Mg-Li-2.5LTOK. It is worth noting that the frequency factor of this sample is considerably higher (A = 1.2 × 10^19 s−1) than those of the Mg-Li, Mg-Li-5LTO and Mg-Li-5LTOK samples.
The Ea values calculated for the 2nd desorption reactions (Fig. 3B) increase in comparison with the 1st desorption, except for Mg-Li-2.5LTOK, which decreases by nearly 15 kJ/mol. It is worth noting that the experiments were repeated in order to confirm this trend. Taking into account the error bands (ESI Fig. S7), the Ea values for Mg-Li, Mg-Li-5LTO and Mg-Li-2.5LTOK overlap. However, the frequency factor for Mg-Li-2.5LTOK is higher than those for Mg-Li and Mg-Li-5LTO. The highest values of Ea and A were measured for Mg-Li-5LTOK. It is also noticed that the K-containing additives reduce the desorption peak temperature in both the 1st and the 2nd desorption.
Cycling stability. In Figs. 1 and 2, it was shown that the LTOK additive improves the hydrogen storage properties of the Mg(NH2)2 + 2LiH hydride system, i.e. reduced desorption temperature and fast reabsorption kinetics. The Mg-Li-2.5LTOK sample exhibited the highest reversible H2 storage capacity, about 3.5 wt.% (Fig. 2). Hence, this subsection presents its cycling stability/reversibility in comparison with the additive-free sample, Mg-Li. Figure 4 shows the cycling stability over 5 absorption/desorption cycles for the Mg-Li and Mg-Li-2.5LTOK samples. During cycling, the desorption and absorption kinetics of Mg-Li-2.5LTOK are 2 and 5 times faster, respectively, than those of Mg-Li. In addition, the hydrogen storage capacity of Mg-Li is halved after 5 cycles, from 3.4 to 1.7 wt.%, whereas cycling reduces the hydrogen capacity of Mg-Li-2.5LTOK by only 10%, i.e. from 3.1 to 2.75 wt.%. From Fig. 4B, it is observed that a measurement time of 12 hours is not enough for complete absorption in the Mg-Li sample, whereas Mg-Li-2.5LTOK almost reaches equilibrium within this time.

Initial structural analysis. A first overview of the structural analysis was obtained with the PXD and FT-IR techniques (Fig. 5A-D and E-H, respectively). The PXD patterns of the samples after ball milling (Fig. 5A) exhibit broad peaks of low intensity, which can be attributed to the harsh milling conditions. As-milled Mg-Li contains the cubic LiH structure with space group Fm-3m (225) and a broad peak at 2θ = 30°, which corresponds to the tetragonal Mg(NH2)2 structure with space group I41/acd (142). Since Mg(NH2)2 is amorphous after intense ball milling, it can hardly be observed by PXD 61. In contrast, it is more visible in the FT-IR spectrum (Fig. 5E), where the N-H stretching vibrations of Mg(NH2)2 are positioned at 3268 and 3324 cm−1. When Mg-Li is half desorbed, LiNH2 can be detected at 3257 and 3310 cm−1 (Fig. 5F). The fully desorbed sample shows small bumps at 3240 and 3197 cm−1, which correspond to IR signals from MgNH (Fig. 5G) 62. The LiNH2 and MgNH products from the desorption of the sample should undergo a solid-solid reaction to form the ternary imide Li2Mg(NH)2 63. PXD reflections from the cubic Li2Mg(NH)2 phase with space group Iba2 (45) are found in the half and fully desorbed samples (Fig. 5B,C). This imide is also observed by FT-IR at 3170 cm−1 (Fig. 5F,G). Absorption of the desorbed Mg-Li at 180 °C leads to the recrystallization of Mg(NH2)2 (Fig. 5D).
Regarding the additives, the PXD analyses (Fig. 5A-D) reveal that in all cases LixTiyO4 compounds with 0.75 ≤ x ≤ 1 and 1.9 ≤ y ≤ 2 are present; several such crystal structures can be found in the ICSD database 64. These phases are stable, and their peak positions do not change during the desorption/absorption processes. The compositions of the as-synthesized additives were already presented at the beginning of the Results section (ESI Figs. S1 and S2). However, further mechanical milling of these additives with Mg(NH2)2 and LiH leads to some changes in the additives' composition, which will be discussed in the following section.
Discussion
In this work, the microstructural and kinetic effects of LixTiyOz and K-modified LixTiyOz additives on the Mg(NH2)2 + 2LiH system were studied. The K-modified additive not only improves the reaction kinetic behaviour (Fig. 2) and the cycling stability (Fig. 4), but also lowers the desorption onset and peak temperatures (Fig. 1A) in comparison with the pristine sample (Mg-Li). Mg-Li releases NH3 at the cycling temperature of 180 °C, which is comparably low with respect to the release of H2; the suppression of NH3 release at this temperature was nonetheless achieved by the addition of LTOK (Fig. 1C). The H2 storage capacity was then optimized by tuning the amount of additive: a reversible H2 capacity of about 3 wt.% at 180 °C was achieved for Mg-Li-2.5LTOK upon cycling (Fig. 4). FT-IR analyses carried out on the Mg-Li and Mg-Li-2.5LTOK samples after milling, after desorption and after absorption (Fig. 5E-H) confirmed that the reaction pathway described by reactions (1) and (2), Section 1, is not altered.
As reported in Fig. 5, the composition of the additive changes after milling with Mg(NH2)2 and LiH. XRD analyses of the as-milled materials (Fig. 5A,D) provided a hint of the presence of stable LixTiyO4 compounds (0.75 ≤ x ≤ 1 and 1.9 ≤ y ≤ 2). Nevertheless, the composition of the additives in the LTOK samples after milling is not yet clear. Therefore, X-ray absorption near edge structure (XANES) spectroscopy was applied to the Mg-Li-5LTO, Mg-Li-5LTOK and Mg-Li-2.5LTOK samples in order to investigate the oxidation state of Ti. Changes in the oxidation state of Ti were determined from the shift of the absorption edges of the samples, and the results were compared with the measured XANES spectra of the TiO2 and LiTi2O4 reference materials. In Fig. 6A, the spectra of Mg-Li-2.5LTOK after milling, desorption and reabsorption are compared; all spectra are similar, thus the nature of the LTOK additive does not change upon hydrogen interaction. Comparing Mg-Li-5LTO with the LTO additive (ESI Fig. S9A), a shift of the absorption edge towards higher energies is observed for Mg-Li-5LTO with respect to the LTO additive, indicating that the valence state of the Ti atoms in Mg-Li-5LTO is, on average, higher than +3.5 and lower than +4. A similar behaviour is observed for the Mg-Li-5LTOK sample (ESI Fig. S9A), with a slight shift of the absorption edge towards higher energies with respect to Mg-Li-5LTO. This suggests that a different titanium compound could be formed in the potassium-containing samples. Comparing the two samples with different LTOK additive loads (Mg-Li-5LTOK and Mg-Li-2.5LTOK), the absorption edges appear similar, showing that the average Ti valence in these samples is very close (ESI Fig. S9B). The results of Fig. S9 thus show that the effective valence state of the Ti atoms in the samples depends slightly on the presence of the K-based additive. Based on the analysis above, it is possible to reproduce the Mg-Li-5LTOK spectrum with 76% of the Mg-Li-2.5LTOK spectrum and 24% of the LTO additive (LiTi2O4), as shown in Fig. 6B. Thus, the K-modified additive in the Mg-Li-5LTOK sample is composed of 24% LiTi2O4 (Ti+3.5) and 76% of other species, suggesting that its effective Ti valence state is slightly lower than that presented by the Mg-Li-2.5LTOK sample.
TEM observations and analyses were performed to determine the nature of the additives formed upon milling. Figure 7 shows bright-field TEM images (BF), diffraction patterns (DP), tables of possible phases based on the DP, and dark-field images (DF) for the as-milled Mg-Li-5LTO and Mg-Li-2.5LTOK samples. The DP of the as-milled samples were taken in the regions shown in the BF images, and the reflections in the DP are related to the phases listed in the tables. In order to verify the formation of such Li-Ti-O and K-Ti-O nanoparticles, HR-TEM observation, fast Fourier transform (FFT) and crystal-structure simulation analyses were performed. Figure 8 shows the HR-TEM images of the as-milled Mg-Li-5LTO and as-milled Mg-Li-2.5LTOK along with the FFTs calculated in each region, compared with simulated diffraction patterns (DPs). In the as-milled Mg-Li-5LTO (Fig. 8A), the presence of nanoparticles of Li0.07TiO2 (tetragonal) and LiTi2O4 (cubic) is confirmed by the structure analyses of the HR-TEM images. For the as-milled Mg-Li-2.5LTOK (Fig. 8B), nanoparticles of K2TiO3 (orthorhombic) and LiTi2O4 (cubic) are found. Based on the position of the absorption edge of the Mg-Li-2.5LTOK sample compared with those of the TiO2 and LiTi2O4 references (Fig. S9B), we can conclude that the titanium atoms in this sample have an average valence state higher than +3.5 and close to +4.
In terms of the observed improved kinetic behaviour (Fig. 2) and the calculated desorption Ea for the first and second desorption reactions (Fig. 3), some unexpected results can be found. On the one hand, the Mg-Li-2.5LTOK sample clearly shows a reduced onset temperature for the first desorption and faster kinetics during the first (Fig. 2A), second and subsequent absorption/desorption cycles (Fig. 4) in comparison with the Mg-Li sample. Moreover, among the samples with additives, the Mg-Li-2.5LTOK sample exhibits the higher capacity (~3 wt.%) and the faster absorption kinetics (Fig. 4). On the other hand, its activation energy is higher than that of the material without additive, Mg-Li (Fig. 3). In order to shed light on this, the kinetic constant (k) was calculated from the Arrhenius expression k = A · exp[−Ea/RT] (1/s) at 180 °C, which is the cycling temperature (Fig. 4). Then, to take into account the effect of the capacity of each sample, k was multiplied by the capacity after reabsorption taken from Fig. 2B, which can be considered the more realistic value (ESI Table S1). As seen in Fig. 9, the desorption rate in the first and second desorption reactions for the samples with LTOK addition is faster than those for the Mg-Li and Mg-Li-5LTO samples, even though the activation energies of the LTOK-containing samples are similar or higher. This behaviour can be mainly attributed to an increase in the frequency factor, making possible a more efficient contact of the reactants at the interphase.
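As a small illustrative calculation of this comparison, the snippet below evaluates k = A · exp[−Ea/RT] at 180 °C using the parameters quoted in the text for Mg-Li-2.5LTOK (Ea = 211 kJ/mol, A = 1.2 × 10^19 1/s); the A value used for Mg-Li is a made-up placeholder, since it is not stated in the excerpt.

```python
import numpy as np

R = 8.314           # J/(mol·K)
T = 180.0 + 273.15  # K, cycling temperature

def arrhenius(a, ea_kj):
    """Rate constant k = A * exp(-Ea / (R*T)) in 1/s."""
    return a * np.exp(-ea_kj * 1000.0 / (R * T))

k_mg_li = arrhenius(3.0e14, 183.0)   # A for Mg-Li is a hypothetical placeholder
k_ltok = arrhenius(1.2e19, 211.0)    # values quoted for Mg-Li-2.5LTOK

# A larger frequency factor can outweigh a larger Ea at fixed temperature
print(f"k(Mg-Li)         ≈ {k_mg_li:.2e} 1/s")
print(f"k(Mg-Li-2.5LTOK) ≈ {k_ltok:.2e} 1/s")
```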
The faster rate of the sample with 5 mol.% LTOK during the first desorption is in agreement with its lower Ea in comparison with the sample with 2.5 mol.% LTOK, suggesting a better distribution of the additive. However, during the second desorption the beneficial effect of the larger amount of additive is lost, hinting that the additive might have agglomerated, thereby acting as a barrier to the interaction of the reactants. Moreover, adding 5 mol.% LTOK leads to a notable drop in the desorbed gas amount.
In order to further investigate the role of the additive in the system, an analysis of the rate-limiting steps 66 of the Mg-Li and Mg-Li-2.5LTOK samples was carried out for the 1st, 2nd and 5th absorption/desorption kinetic curves of Fig. 4 (ESI Figs. S10-S21; Tables S2-S6). The results are summarized in Table 2. Desorption rates are limited by an interface-controlled mechanism (F1: JMA, n = 1), while absorption rates are limited by a diffusion-controlled mechanism. In the case of the absorption reaction, D3 and D4 both represent diffusion as the rate-limiting step, but for different particle geometries (D3: spheres; D4: free geometry). Therefore, the K-modified additive does not change the rate-limiting step of the desorption/absorption reactions; however, in both the absorption and desorption mechanisms the rate-limiting step is notably accelerated. In general, the results are in good agreement with those of our previous work, where the amide/hydride molar ratio was 6/9 instead of 1/2 50. These outcomes therefore suggest that the presence of the K2TiO3 species accounts for the observed improvements in the kinetic behaviour and cycling stability of Mg-Li-2.5LTOK. Whether the kinetic enhancements from alkali metals and their hydrides/hydroxides/amides arise from a modification of the thermodynamics of the system or from a catalytic effect is still under discussion 32,37,38,41,67-70. The catalytic activity of KH and RbH was explained via destabilization of the N-H bond due to their high electronegativity 32. KH first reacts with Mg(NH2)2 and later metathesizes with LiH to regenerate KH 41. Based on our results from the in situ SR-PXD contour plot (Fig. 1D) together with the XANES spectra (Fig. 6B), we propose that K2TiO3 does not take part in the reactions, but that its presence can positively affect the reversible reactions of the Mg(NH2)2 + 2LiH system due to the electronegativities of the K (0.82), Ti (1.54) and O (3.44) elements. Therefore, it acts as a catalyst rather than changing the thermodynamics of the system.
Conclusions
In this work, the microstructural and kinetic effects of LixTiyOz and K-modified LixTiyOz additives on the Mg(NH2)2 + 2LiH system were studied. The sample containing 5 mol.% of additive, Mg-Li-5LTOK, reduced the desorption peak temperature of the pristine sample by 30 °C and suppressed NH3 release until 220 °C. Although Mg-Li-2.5LTOK has a comparably higher apparent activation energy (211 ± 1 kJ/mol) with respect to Mg-Li (183 ± 7 kJ/mol), its calculated rate constant (k) was larger during the first and second desorption reactions, in agreement with the observed reaction behaviour. Orthorhombic K2TiO3 and cubic LiTi2O4 phases were detected in the HR-TEM observations, where the oxidation state of Ti was in accordance with the XANES analysis. Based on our results from the in situ SR-PXD plot and the XANES analysis, we propose that the K2TiO3 nanoparticles act as a catalyst and positively affect the reversible reactions of the Mg(NH2)2 + 2LiH system due to the electronegativities of the K (0.82), Ti (1.54) and O (3.44) elements.
Data availability
The datasets generated and/or analysed during the current study are available from the corresponding authors on reasonable request.

Table 2. Rate-limiting processes of the samples, determined with the Sharp and Jones method 55,56 from the isothermal cycling kinetic curves of Fig. 4. F1: JMA, n = 1, random nucleation with one-dimensional interface-controlled growth. D3: three-dimensional diffusion, spherical particles. D4: three-dimensional diffusion, free geometry. | 2020-01-08T14:24:21.243Z | 2020-01-07T00:00:00.000 | {
"year": 2020,
"sha1": "acff64ae0925e64ebe94dc8bdaa8f15b0633486d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-55770-y.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "643edd84a7ae89c1232d2c2acd000d1031bcfd58",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
246460744 | pes2o/s2orc | v3-fos-license | Viability of Natural Populations of Hypancistrus zebra (Siluriformes, Loricariidae), Xingu, Brazil
Hypancistrus zebra is a fish species endemic to the middle Xingu River, recently included in the Critically Endangered (CR) category of the Red Book List of Endangered Brazilian Fauna, which follows IUCN criteria. Given these impacts and the lack of information about the species, it is difficult to assess the viability of its natural populations. Thus, this work aims to evaluate the effect of variation in intrinsic parameters on the population viability of H. zebra. For this, an Individual-Based Model (IBM) of the Agent-Based Modeling (ABM) type was created. To build the model, we considered "individuals" as the entity and the following variables of interest: longevity, age of sexual maturity, annual reproduction number, intrinsic birth rate (b0), intrinsic mortality rate (d0), interference of each individual in population growth (b1), interference of each individual in population mortality (d1), time (t), radius, richness (S), tolerance (tol) and population size (N). The model showed that natural populations today, even without taking human impacts into account, are at the limit of viability. Any factor that reduces resource availability, even if not evident, can therefore lead to a steep population decline, affecting the viability of the populations.
Introduction
The construction of hydroelectric power plants (HPP) has become one of the main threats to the maintenance of the global ichthyofauna [1][2][3]. Damming blocks migratory fish activities (e.g. the reproductive migration of Salmo salar), causes physical changes (e.g. the change from a lotic to a lentic environment and the retention of sediment transport), chemical changes (e.g. in the physicochemical characteristics of the water) and affects the community (e.g. by promoting biotic homogenization). The CR assessment of a species considers past, present or projected population reduction, based on habitat quality and/or on the reduction of the area of occupancy (AOO) and extent of occurrence (EOO) 44. Hypancistrus zebra is a fish species of the subfamily Hypostominae (Loricariidae, Siluriformes) known for its pattern of oblique black and white stripes on the body and fins, and a snout with an "E"-shaped stripe pattern [45][46][47]. These characteristics make it highly valued in the ornamental fish market, and the high acquisition value of specimens has generated great demand on the clandestine market 56,48. However, the impact caused by the Belo Monte HPP is the main concern regarding the viability of its populations 44. Furthermore, there are no data, or even predictions, about the current situation of the natural populations of H. zebra. Therefore, we seek to understand how variation in the intrinsic parameters of the species H. zebra can affect its population viability. Additionally, we intend to answer the following questions: (I) What combinations of birth and death rate values are necessary to keep the population viable? (II) What combinations of the levels of influence of habitat specialization and intraspecific competition allow the persistence of this species? (III) Based on the generated scenarios, what is the viability condition of the natural populations?
Results
The algorithm that describes the effect of birth and mortality rates on the population viability of H. zebra (MetaZebra 01) produced 137 (31%) scenarios with 100% persistence of the populations (Table 1). The remaining 304 scenarios (69%) had some chance of extinction, ranging from 0.020 to 1.000; 201 scenarios (45%) had a probability of extinction greater than 50%, and 162 scenarios (36.7%) had a 100% chance of extinction (Table 1). Based on our results, the combinations of birth and death rate values necessary to keep the population viable require a birth rate greater than 0.55 and a mortality rate lower than 0.25. Outside these bounds the population starts to decline, although this does not preclude the chance of the population re-establishing itself.
However, if b is less than 0.55 and d is greater than 0.6, the population goes extinct.
As for the algorithm used to describe the influence of the level of habitat specialization and intraspecific competition on the persistence of the species (MetaZebra 02), 219 scenarios (49.6%) showed 100% persistence (Table 2), and 345 scenarios (78.2%) showed a persistence greater than 50%. Our results showed that the combinations of tolerance level and competition radius necessary to maintain the persistence of the population require a tolerance level greater than 0.5 and a competition radius lower than 0.4. No scenario presented a 100% chance of population extinction (Table 2). As for the levels of influence of habitat specialization and competition, the results showed that tolerance has a greater influence on population persistence than the effect generated by intraspecific competition: competition starts to act conspicuously on the population at radii greater than 0.4, while the tolerance level is more likely to keep populations viable at rates above 0.5, below which the population starts to decline. Based on the scenarios generated by the MetaZebra 01 algorithm (Table 1), and considering a birth rate of 0.6 and an intrinsic mortality rate of 0.3, the natural populations have a 100% probability of survival in nature. Considering this, the three natural populations would be viable, but sensitive to changes in birth and mortality rates, as well as to environmental changes.
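A minimal sketch of how such a scenario grid could be assembled is given below. It assumes, as the scenario counts suggest (441 = 21 × 21), that each rate was swept from 0 to 1 in steps of 0.05; the step size is an inference, not stated in the text, and the persistence criterion used here is only a deterministic placeholder for the full IBM run.

```python
import itertools

# 21 values from 0 to 1 in steps of 0.05 give a 441-scenario (21 x 21) grid
rates = [round(0.05 * i, 2) for i in range(21)]

def persistence_probability(b, d):
    """Placeholder standing in for the full IBM run: here a scenario
    'persists' deterministically when births outpace deaths; the real
    model would return the fraction of surviving replicate populations."""
    return 1.0 if b > d else 0.0

grid = {(b, d): persistence_probability(b, d)
        for b, d in itertools.product(rates, rates)}

print(len(grid))                                    # 441 scenarios
print(sum(1 for p in grid.values() if p == 1.0))    # scenarios with full persistence
```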
Discussion
The 441 scenarios generated by the model showed that an intrinsic birth rate above 0.6 combined with an intrinsic death rate below 0.5 allows populations to remain viable for a long time (Table 1). The model also showed that the number of scenarios with some extinction risk (69% of the scenarios) was higher than the number of scenarios with a 100% chance of persistence. This may be an indication of the vulnerability of H. zebra populations to events that change the demographic stochastic process.
The stochastic birth-and-death process is probably the simplest modelling approach for predicting extinction 33,56. In simple models, birth and death are independent rates, but in real populations they can show high correlation across years, since temporal variation can produce years of high or low resource availability, affecting the reproduction and survival of individuals 68. However, a negative R (birth rate lower than the death rate) does not always preclude short-term population persistence, just as a high birth rate and low mortality do not guarantee long-term persistence 68. The high number of scenarios broadens the view of stochastic variation and can make it easier to predict the actual fate of the population. The persistence of a population depends on stochastic variation 69.
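As an illustrative sketch of such a discrete-time stochastic birth-death process — not the authors' PopZebra implementation, whose internal rules are only partly described — the extinction probability can be estimated by repeated simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

def extinction_probability(b, d, n0=100, years=1000, replicates=200):
    """Fraction of replicate populations that hit zero within the horizon.

    Each year every individual independently dies with probability d and,
    if alive, produces a Poisson(b) number of offspring; the per-capita
    rates mirror the b/d parameters of the text, the rest is assumed.
    """
    extinct = 0
    for _ in range(replicates):
        n = n0
        for _ in range(years):
            survivors = rng.binomial(n, 1.0 - d)
            births = rng.poisson(b * survivors)
            n = survivors + births
            if n == 0:
                extinct += 1
                break
            n = min(n, 100_000)  # crude ceiling standing in for resource limits
    return extinct / replicates

print(extinction_probability(b=0.6, d=0.3))   # growth regime: extinction ~0
print(extinction_probability(b=0.2, d=0.6))   # decline regime: extinction ~1
```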
Generally, the persistence of a population is ensured by large, connected, suitable habitats close to one another, a high reproductive rate and environmental conditions with balanced variation in carrying capacity 70. Hypancistrus zebra is an endemic species restricted to about 170 km of the middle Xingu River 44; its populations are possibly closed, and the current environmental conditions and available resources are unfavourable owing to the dam.
Variation in habitat quality across the landscape (spatial variation) also affects population persistence 68; therefore, the feasibility indicated by the model does not guarantee the real maintenance of the H. zebra populations. Stochastic events are generally of concern especially for the viability of small populations 33,71,72, as these have greater chances of extinction due to demographic fluctuation, whether from demographic stochasticity (internal mechanisms) or environmental stochasticity (external mechanisms) 30,72,73. These events are linked to birth and death rates 74. We considered the three natural populations of H. zebra to be represented by large population sizes; the viability outcome could differ if these populations went into decline, whether through mortality or through the removal of individuals by overfishing.
In the model, tolerance was observed to have a greater influence on population persistence than the effect generated by intraspecific competition (competition radius; Table 2). As long as the radius is below 0.5, the force of tolerance is greater. However, in scenarios with a radius above 0.55, competition starts to have a greater effect on the populations, regardless of a high tolerance rate. Therefore, even in scenarios with a high tolerance rate, the probability of population extinction will be greater when the competition radius reaches high values; if both rates are low, however, the population may be maintained. This indicates that intraspecific competition acts visibly on the population at radii greater than 0.4. In turn, the tolerance level is more likely to maintain viable populations at tolerance rates above 0.5; below these rates the population presents a greater risk of extinction.
We did not include the effects of anthropogenic actions in the study. We emphasize that models addressing variation between individuals are essential for developing and studying population dynamics and for associating them with different life-history tactics 75. Modelling with intrinsic parameters makes it possible to test uncertain impacts on the life history and to evaluate the demography of species 76, in addition to identifying the predominant parameters of a well-delimited system 77.
Life-history data or population growth rates are used as inputs to a PVA; this information serves as parameters in the model for projecting the population dynamics 78. As these rates vary, they influence intrinsic processes (e.g. stochasticity, genetic drift, demography, social structure) 79, and given the fluctuation in population size over time, random (stochastic) variation occurs. The greater the amount of information about the population, the more detailed an IBM can be 30, which benefits the results but requires a more advanced computer system 80,81. IBMs are widely used to assess population dynamics through intrinsic rates 82. Individual variation is the evolutionary basis present in all populations and organisms, and this individual heterogeneity occurs in practically all characteristics, including reproduction, physical condition and survival 75,[83][84][85].
Our results show a 100% probability of population viability for the three natural populations of H. zebra, given the combination of a birth rate of b = 0.6 and an intrinsic mortality rate of d = 0.3. However, the models that generated the scenarios did not include the fishing pressure and habitat change caused by the Belo Monte HPP, which covers the entire area of occurrence of the species 56,57. Because these impacts already exist in the species' area of occurrence and already promote changes in conditions and resources, we can suggest that the populations are threatened, since a decrease in the birth rate or an increase in mortality would move the species from the green area (100% probability) to areas with a lower probability of viability.
The model investigating the relationship between birth and mortality rates showed that birth rates below 0.55 and mortality rates above 0.25 can lead to extinction. Our model populations showed little margin from these thresholds, with 0.6 for birth and 0.3 for mortality, reinforcing the idea that natural populations nowadays would not be in conditions of 100% viability. Females of Hypancistrus zebra present batch spawning and low fecundity 43, which may indicate that the natural behaviour of the species prioritizes spending more energy on individual maintenance than on reproduction, such as searching for mates and investing in offspring. When the amount of available resources is low, individuals tend either to spend more energy on reproduction (increasing the birth rate of newborns while reducing adult survival) or to avoid the risk of mortality (lower mortality of mature individuals, but a low rate of newborns) 86, influencing the dynamics of the population's birth and mortality rates. Poor reproduction and high mortality (positive and/or negative covariance) also reflect resource availability in the face of temporal variation.
Loricariids show low tolerance in stretches of reservoir formation 59, and species adapted to fast-flowing water habitats are more vulnerable to dams 2. The model showed that the natural populations of H. zebra are viable, but it is possible to observe that it is an intrinsically sensitive species that may be vulnerable to human disturbance; the same holds for its level of habitat specialization. In Table 2, no scenario showed 100% extinction, yet tolerance has a greater influence than the intraspecific competition radius, demonstrating the high specialization of the species. Although the populations are not small, it is an endemic species, with high removal of specimens from nature for illegal sale 48,49, and practically all of its habitat is affected by the implementation of a hydroelectric plant 44. In addition, the paucity of data is likely a major limitation in assessing population viability 87.
A reduction in the birth rate can be a response to increased mortality, to fewer individuals reproducing, or to the removal of individuals from the population, whether through migration or, in the case of fish, fishing. Another worrying factor in the population decline is the overexploitation of the species. Such a decline had already been reported as a consequence of the history of exploitation and of habitat loss caused by mining activities 88, and even after the fishing prohibition, specimens are still being collected because of their high value and the difficulty of inspection 89. This promotes a reduction in the population's birth rate, but habitat change does not affect recruitment alone. Considering the paucity of studies on the niche and reproductive biology of H. zebra, it is possibly a K-strategist, given its low fecundity, late sexual maturity, parental care, sedentary habit and long life cycle 55-57. Given the parameters that fed the algorithms, the effect of the tolerance level is greater than that of the competition radius.
Our study presents a theoretical intrinsic ecological model for predicting how the populations of H. zebra will behave under parameter variation, showing that a high rate of removal of individuals, or the impacts caused by the alteration of water flow in the natural environment, can reduce population viability and lead to the extinction of its populations. In addition, some authors state that H. zebra is sensitive to changes in water quality 88 and to climate change 90. The species is also considered endemic 42,47, which corroborates that it has a more specialized niche. Its narrow tolerance range indicates that its populations lie below the median tolerance level of Table 2, which would not be a concern if the current ecological conditions did not contribute to greater competition between individuals for resources. Under natural conditions, without human alteration and capture pressure, the natural populations of H. zebra are viable in a delicate balance; according to the model, however, small disturbances can promote a decline in population growth, generating high probabilities of the species' extinction. Impacts such as the change in the hydrological cycle caused by the Belo Monte HPP dam, together with the high rate of specimen removal by illegal fishing, will synergistically cause irreversible damage to the population viability of this species.
Material and Methods
In this research, an Individual-Based Model (IBM) of the Agent-Based Modeling (ABM) type was used. The algorithms employed (Supplementary Material I, II and III) were developed on the Matlab® platform, version R2015a 50. The model description is structured according to the updated ODD (Overview, Design concepts, Details) protocol suggested by Grimm et al. (2010).
Purpose
The purpose of the models is to assess the effect of varying intrinsic parameters on the population viability of an endemic fish species, although endemism and the use of a single species are not limiting features for replicating the models. The developed algorithms indicate how (i) the ratio between the birth and mortality rates (Supplementary II - MetaZebra01) and (ii) the level of specialization and competition among the specimens (Supplementary III - MetaZebra02) affect population viability.
Entities and state variables
The model has only one entity, the specimens. The specimens represent a single fish species distributed across three populations of different sizes. Each specimen was characterized by a set of specific parameters based on information from the literature and from breeders of the species. Basically, we used characteristics that influence population dynamics, including the following state variables: longevity, age of sexual maturity, annual reproduction number, intrinsic birth rate (b0), intrinsic mortality rate (d0), interference of each individual in population growth (b1), interference of each individual in population mortality (d1), time (t), radius, richness (S), tolerance (tol) and population size (N). We considered the following assumptions for determining the values of our variables. Longevity. Species of the same family (e.g. Ancistrus spp.) can reach more than 15 years of age [51][52][53]. According to aquarists, longevity in captivity is at least 15 years for the species under study. However, in the natural environment specimens probably have a shorter life span than healthy individuals in captivity 54. Therefore, in the model we estimate that specimens can reach 12 years of life in nature.
Age of sexual maturity. According to breeders who produce the species in captivity, individuals reach sexual maturity at three years of age.
Annual reproduction rate. To determine b0, we need to know the number of individuals entering the cohort each year by birth. Because the male copulates with more than one female 55, we considered only females in the calculation. We used 50% of the specimens of each hypothetical population, since the parameter depends mostly on the fecundity of females; the sex ratio considered was therefore 1:1. Each female spawns an average of 14 eggs per spawning 48,55. In captivity, multiple spawnings are observed throughout the year. In the natural environment, reproduction can occur at any time of the year, but two annual reproductive peaks have been observed 56,57. We estimated that 95% of the females of each cohort are able to reproduce. Most females spawn twice a year; larger females in better nutritional condition are possibly apt for a greater number of annual reproductions 58. We considered the following rates: 35% of the females reproduce only once a year, 45% reproduce twice a year and only 15% of the females reproduce three times a year (the expected egg production implied by these assumptions is sketched below).
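As a quick arithmetic check of these assumptions — a sketch only, since the paper derives b0 through difference equations described in its Section 2.6 rather than through this direct expectation — the implied expected annual egg output per female is:

```python
# Spawning-frequency assumptions stated in the text
# (the three fractions sum to 0.95, matching '95% of females reproduce')
frequency = {1: 0.35, 2: 0.45, 3: 0.15}
eggs_per_spawn = 14

expected_spawns = sum(n * p for n, p in frequency.items())   # 1.7 spawns/female/year
expected_eggs = expected_spawns * eggs_per_spawn             # 23.8 eggs/female/year

print(f"expected spawns per female: {expected_spawns:.2f}")
print(f"expected eggs per female:   {expected_eggs:.1f}")
```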
Mortality rate. Regarding mortality, we arbitrarily define that, in a natural environment and despite parental care, the mortality of individuals under one year of age (juveniles and eggs) is high, around 50%, due to predation and competition for hiding places. Additionally, adult females are assigned a 25% mortality rate, while adult males have 20%. We consider that the male behavior of staying hidden and protecting its crevice (Ramon, 2011; Gonçalves, 2011) makes males less vulnerable and consequently gives them a lower mortality rate than females in the population.
Intrinsic birth rate (b0). Under the assumptions on the hypothetical natural population size and annual reproduction rate, we obtained b0 = 0.6. We used only the females of each population (50% of the individuals), and the rate value was obtained through a difference equation presented in more detail in section 2.6 (sub-models).
Intrinsic mortality rate (d0). Following the assumptions on the hypothetical natural population size and annual mortality rate, it was estimated as d0 = 0.3. We used only the females of each population (50% of the individuals), and the rate value was obtained through a difference equation presented in more detail in section 2.6 (sub-models).
Interference of each individual in the growth (b1) and mortality (d1) of the population.We did not consider the effect of b1 (interference of each individual on population growth) and d1 (interference of each individual on population mortality) in the models.Time (t).Time was measured in years in models with t = 0 as a starting point, ending in 1,000 years.
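A minimal MATLAB sketch of the birth-death difference equation implied by these state variables follows. The exact sub-model equation is given in section 2.6 and is not reproduced in this excerpt, so the density-dependent form below is an assumption; with b1 = d1 = 0, as in the models, it reduces to exponential growth at rate b0 - d0.

b0 = 0.6;  d0 = 0.3;          % intrinsic birth and mortality rates
b1 = 0;    d1 = 0;            % per-individual interference (not considered)
T  = 50;                      % years shown (the models themselves run 1,000)
N  = zeros(1, T + 1);
N(1) = 100000;                % initial population size
for t = 1:T
    b = b0 - b1 * N(t);       % realized per-capita birth rate
    d = d0 + d1 * N(t);       % realized per-capita mortality rate
    N(t + 1) = N(t) + (b - d) * N(t);
end
plot(0:T, N); xlabel('t (years)'); ylabel('N');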
Radius. The radius rate varies from 0 to 1, being 0 when there is no competition and 1 when all individuals compete with one another. Since no studies indicate how competitive the species is, we adopted an intermediate value (0.5).
Richness (S).As the study deals with the population of just one species, we considered S = 1.
Tolerance (tol). There is also no information on the species' tolerance; therefore, we adopted an intermediate value (0.5).
Initial population size (N). In both modules of model execution, we considered an initial population size (N) of 100,000 individuals.
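A minimal MATLAB sketch of how such specimens might be initialized with the state variables above is given below; the field names and the initial age structure are illustrative assumptions, the actual data structures being those of Supplementary Material I.

N0        = 100000;                       % initial population size (N)
longevity = 12;                           % maximum age in nature (years)
matAge    = 3;                            % age of sexual maturity
age       = randi([0 longevity], N0, 1);  % assumed initial age structure
sex       = rand(N0, 1) < 0.5;            % true = female (sex ratio 1:1)
pop = struct('age', age, 'sex', sex, ...
             'tol', 0.5, 'radius', 0.5, 'S', 1);   % species-level variables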
Process overview and scheduling
The processes of the models simulate the dynamics of individuals within the population in an environment without anthropic effects or interspecific interaction (Figure 1; Supplementary I - PopZebra). The process begins with the entry of a cohort with an initial number of individuals (n = 10,000) into the population. Gradually, individuals are assigned to the randomly determined optimal level range (OLR) or to the maximum or minimum tolerance level range (LRMaxMin), ranging from zero to one, according to the model. Specimens included in the OLR undergo aging and the update of age; young individuals then pass through the maturation process until they reach the age of sexual maturity (adult individuals), after which they enter the reproduction process, giving origin to a new cohort by birth.
As for the individuals that enter the LRMaxMin, they are destined to the competition processes for resources (territorialism and/or food). Randomly, some individuals are classified as survivors and enter the aging process and the subsequent processes mentioned above, while the rest are removed from the model by the mortality process. Individuals who reach the age of longevity are also removed from the model through the process of natural mortality (by age).
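The yearly scheduling can be sketched as follows in MATLAB; the OLR membership rule, the 50% survival odds in the LRMaxMin, and the crude carrying-capacity cap are placeholders standing in for the PopZebra rules, not the published values.

longevity = 12; matAge = 3; tol = 0.5; K = 100000;
age = randi([0 longevity], 10000, 1);      % entering cohort (n = 10,000)
sex = rand(size(age)) < 0.5;               % true = female (1:1 sex ratio)
for t = 1:100                              % shortened horizon for illustration
    level   = rand(size(age));             % position on the tolerance axis [0, 1]
    inOLR   = abs(level - 0.5) <= tol/2;   % placeholder rule for OLR membership
    survive = (inOLR | rand(size(age)) < 0.5) & age < longevity;
    age = age(survive) + 1;                % aging / update of age
    sex = sex(survive);
    mothers = round(0.95 * sum(sex & age >= matAge));   % reproducing females
    births  = round(mothers * 1.7 * 14 * 0.5);          % spawns x eggs x 1st-year survival
    age = [age; zeros(births, 1)];         % new cohort enters by birth at age 0
    sex = [sex; rand(births, 1) < 0.5];
    if numel(age) > K                      % crude stand-in for density control
        keep = randperm(numel(age), K);
        age = age(keep); sex = sex(keep);
    end
end
fprintf('Population after %d years: %d\n', t, numel(age));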
This dynamic is generated through the PopZebra program (Supplementary Material I). However, to meet our objectives, two metaprograms were developed: MetaZebra 01 (Supplementary Material II) for objective one and MetaZebra 02 (Supplementary Material III) for objective two. The values of the variables used in the algorithms (shown in Table 3) are based on knowledge gathered from specialized hobbyist magazines, from personal communications with hobbyists and fishermen, and from experiments carried out in the captive breeding program in the laboratory. The MetaZebra 01 algorithm was built to create combinations of birth and mortality rates, varying b0 and d0 from zero to one in 0.05 intervals and thereby treating H. zebra as an r- or K-strategist (depending on the mortality rate). This procedure builds an interface of values in which the x-axis spans the entire variation of the mortality rate, the y-axis spans the entire variation of the birth rate, and the 441 cells (21 birth values multiplied by 21 mortality values) represent all possible combinations between these two rates.
The sweep thus indicates the effect of the ratio between birth and mortality rates on population viability. The MetaZebra 02 algorithm, in turn, was created to vary the tolerance and the competition radius, with tol and radius varying from zero to one in 0.05 intervals, treating the species as generalist or specialist (depending on the tolerance rate) and as little or very competitive (depending on the competition radius rate) in the face of changes. This procedure generates an interface of values in which the x-axis spans the entire variation of the competition radius rate, the y-axis spans the entire variation of the tolerance rate, and the 441 cells (21 tolerance values multiplied by 21 competition radius values) represent all possible combinations between these two rates. In this way, it is possible to compare the effect of the species' level of specialization (tolerance) and relate it to the influence of intraspecific competition on the persistence of H. zebra. Each algorithm generated 441 combinations of values (scenarios) representing the population's probability of survival over an interval of 1,000 years. This probability was calculated from five replicates of each combination.
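The 21 × 21 sweep itself is straightforward to sketch in MATLAB; here runPopZebra is a trivial, hypothetical stand-in for the PopZebra program of Supplementary Material I, not the real model.

vals = 0:0.05:1;                        % 21 values per axis
runPopZebra = @(b0, d0) b0 > d0;        % trivial stand-in, NOT the real model
P = zeros(numel(vals));                 % persistence probability per cell
for i = 1:numel(vals)                   % rows: birth rate b0
    for j = 1:numel(vals)               % columns: mortality rate d0
        alive = false(1, 5);
        for r = 1:5                     % five replicates per combination
            alive(r) = runPopZebra(vals(i), vals(j));
        end
        P(i, j) = mean(alive);          % fraction of replicates persisting
    end
end
imagesc(vals, vals, P); xlabel('d_0'); ylabel('b_0'); colorbar;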
Design concepts
Basic principles. Population dynamics are maintained under the constant influence of several factors. Growth is determined by the number of entities (individuals) that enter (birth and immigration) and leave (mortality and emigration) the population 59. The population grows exponentially until it is controlled by the amount of resources available in the environment, called the carrying capacity 60. In addition, ecological factors within the habitat influence the interaction between individuals (intraspecific interaction) and with the environment (Law of Tolerance), which, depending on the level of specialization of the species, will determine the growth and survival of this set of organisms [61][62][63]. In the model under study, we assumed the population to be closed (entry of individuals by birth and exit by mortality), since it is an endemic species with no migratory behavior, restricted to about 170 km of the river section.
Emergence. Population dynamics emerge as a result of the behavior of, and interactions among, individuals and their habitat; these processes drive the dynamics of specimens in the model over an interval of 1,000 years.
Table 1
Values of population survival probability for different combinations of the birth rate (b) and the mortality rate (d) estimated by the model. Values closer to green represent scenarios with greater chances of persistence of the population; values in yellow and close to red are scenarios with mid to low chances of persistence, while values in red represent 100% of the population remaining over a long period of time.
Table 2
Values of the probability of population persistence for different combinations of the tolerance rate and the intraspecific competition radius rate estimated by the model. Values in green are scenarios with greater chances of persistence of the population; values in yellow and close to red are scenarios with mid to low chances of persistence, while values in red represent a probability of 100% of the population remaining over a long period of time.
Table 3
Condition variables used in the algorithms to generate the models.Randomized values were determined by "Minimum Value: Interval: Maximum value". | 2022-02-02T16:11:55.472Z | 2022-01-31T00:00:00.000 | {
"year": 2022,
"sha1": "c10cd64ebe6eea2394723ef99008d186a6d553c4",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1233118/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5ab7bd81e6714b107bfe1a959bc72d65492e6296",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
267488198 | pes2o/s2orc | v3-fos-license | Molecular Basis of XRN2-Deficient Cancer Cell Sensitivity to Poly(ADP-ribose) Polymerase Inhibition
Simple Summary
Many cancers exhibit compromised 5′-3′-exoribonuclease 2 (XRN2) expression. XRN2 is a major regulator of RNA polymerase II (RNAPII) at the transcription termination sites of protein-coding genes. Deregulated transcription termination facilitates the formation of triple-stranded nucleic acid structures known as R-loops (RNA–DNA hybrids with displaced single-strand DNA). Elevated levels of unscheduled R-loops promote genomic instability. In the absence of XRN2, R-loop levels increase and promote DNA damage that activates DNA damage surveillance protein poly(ADP-ribose) polymerase 1 (PARP1). Previously, we discovered that the simultaneous absence of XRN2 and PARP1 compromises the survival of non-cancer and cancer cells; however, the underlying cellular stress response remained unknown. Here, we aimed to uncover the molecular consequences of concurrent XRN2 depletion and PARP1 inhibition. Our findings provide a mechanistic understanding of why cancer cells rely on PARP1 when XRN2 is absent and strengthen the translational aspect of targeting XRN2 cancer vulnerabilities using PARP inhibitors.
Abstract
R-loops (RNA–DNA hybrids with displaced single-stranded DNA) have emerged as a potent source of DNA damage and genomic instability. The termination of defective RNA polymerase II (RNAPII) is one of the major sources of R-loop formation. 5′-3′-exoribonuclease 2 (XRN2) promotes genome-wide efficient RNAPII termination, and XRN2-deficient cells exhibit increased DNA damage emanating from elevated R-loops. Recently, we showed that DNA damage instigated by XRN2 depletion in human fibroblast cells resulted in enhanced poly(ADP-ribose) polymerase 1 (PARP1) activity. Additionally, we established a synthetic lethal relationship between XRN2 and PARP1. However, the underlying cellular stress response promoting this synthetic lethality remains elusive. Here, we delineate the molecular consequences leading to the synthetic lethality of XRN2-deficient cancer cells induced by PARP inhibition. We found that XRN2-deficient lung and breast cancer cells display sensitivity to two clinically relevant PARP inhibitors, Rucaparib and Olaparib. At a mechanistic level, PARP inhibition combined with XRN2 deficiency exacerbates R-loop and DNA double-strand break formation in cancer cells. Consistent with our previous findings using several different siRNAs, we also show that XRN2 deficiency in cancer cells hyperactivates PARP1. Furthermore, we observed enhanced replication stress in XRN2-deficient cancer cells treated with PARP inhibitors. Finally, the enhanced stress response instigated by compromised PARP1 catalytic function in XRN2-deficient cells activates caspase-3 to initiate cell death. Collectively, these findings provide mechanistic insights into the sensitivity of XRN2-deficient cancer cells to PARP inhibition and strengthen the underlying translational implications for targeted therapy.
Previously, XRN2 deficiency has been shown to create a complex phenotype that includes elevated R-loop formation and subsequently increased DNA double-strand breaks (DSBs), delayed DSB repair kinetics, chromosomal aberrations, increased sensitivity to DNA damaging agents, and replication stress [25]. Recently, we characterized the XRN2 interactome using a strategy that combined proteomics (tandem affinity purification-mass spectrometry, TAP-MS), bioinformatics, genetics, biochemical, and biological approaches and revealed molecular links that connect XRN2 to novel biological processes and pathways [20]. We found that XRN2 associates with several proteins involved in cellular processes separate from RNA metabolism, and novel major pathways related to XRN2 include cell cycle control of chromosomal replication and DSB repair by NHEJ [20]. We and others have also found that cellular XRN2 deficiency created by several different si/shRNA constructs results in elevated poly(ADP-ribose) polymerase 1 (PARP1) catalytic activity and a synthetic lethal relationship of these cells with PARP1 depletion/inhibition [20,26]. Another recent study also reported that post-translational modification of XRN2 is important for its ability to prevent R-loop-induced genomic instability in cancer cells [27]. Collectively, these findings highlight the translational implications of targeting XRN2 cancer vulnerabilities with PARP inhibitors (PARPi). However, the mechanistic basis of the synthetic lethal relationship between XRN2 and PARP1 remains elusive. Moreover, establishing the broader applicability of clinically relevant PARP inhibitors on XRN2-deficient cancer cells is warranted.
In the current study, we investigated the molecular consequences of XRN2 deficiency in lung and breast cancer cells in conjunction with PARP inhibition via Rucaparib and Olaparib to gain mechanistic insights into the synthetic lethality of XRN2 and PARP1.First, we evaluated the sensitivity of XRN2-deficient lung and breast cancer cells against these clinically relevant PARP inhibitors and then delineated the cellular consequences of simultaneous XRN2 deficiency and loss of PARP1 catalytic function.Collectively, our study provides mechanistic insights into why XRN2-deficient cells display sensitivity to PARPi and strengthens the notion of targeting XRN2 vulnerabilities in cancer via PARP inhibition.
Chemicals and Reagents
Rucaparib and Olaparib were purchased from Selleck Chemicals LLC (Houston, TX, USA). Camptothecin (CPT) was purchased from Alfa Aesar (Haverhill, MA, USA). Hoechst 33258 dye was purchased from Sigma-Aldrich (St. Louis, MO, USA). RNase H was purchased from New England BioLabs (Ipswich, MA, USA). ProLong Gold antifade mounting medium containing DAPI was purchased from Invitrogen (Eugene, OR, USA).
Tissue Culture
Human A549 (lung carcinoma) and MDA-MB-231 (mammary adenocarcinoma) cells were obtained from ATCC. A549 cells were maintained in DMEM media (Lonza, Walkersville, MD, USA) supplemented with L-glutamine and 10% FBS (standard DMEM media). MDA-MB-231 cells were maintained in RPMI media (Lonza, Walkersville, MD, USA) supplemented with 2 mM L-glutamine and 10% FBS. All cells were kept at 37 °C with 5% CO2. Cells were routinely monitored to confirm the absence of mycoplasma contamination.
RNAi and Transfection
Non-target control siRNAs (siSCR) were purchased from Sigma-Aldrich (St. Louis, MO, USA).siXRN2 and siPARP1 were purchased from Santa Cruz Biotechnology (Dallas, TX, USA).Note that several different siRNAs against XRN2, including those that were used in this study, were validated previously by our laboratory and others [20,25,26].
The general procedure for transient knockdown experiments was described previously [20]. Briefly, for a typical transient transfection, cells were plated in 100 mm dishes (1 × 10^6 cells/dish) and allowed to adhere overnight. OptiMEM, Lipofectamine 2000 RNAiMax, and the indicated siRNAs were used, with siRNAs at 1 nM concentrations (or 2 nM siSCR in the case of a double knockdown). For immunofluorescence studies, the knockdowns were performed in 6-well plates after the cells had adhered to the glass coverslips. All experiments were performed within a 72 h knockdown window following transfection.
DNA Assay
A modified cell survival assay measuring DNA content over a 4-day period was utilized [28]. Following a 24 h transient siRNA transfection (1 nM siSCR or siXRN2), cells were seeded at 4000 cells/well in 96-well plates in 100 µL of media. The next day, media were aspirated and replaced with 100 µL of fresh media containing the indicated concentrations of Rucaparib (µM) or Olaparib (µM). The cells were exposed to PARPi for 24 h, and the media were again aspirated and replaced with fresh media (without PARPi). The cells were then allowed to grow until the control samples became confluent. The cells were then lysed in 50 µL of water, freeze-thawed, treated with 100 µL of 1X TNE buffer containing Hoechst 33258 fluorescent dye for 3 h at room temperature, and the DNA content was determined by measuring the fluorescence signal (355 nm/460 nm, 0.1 s) using a Victor X5 plate reader (PerkinElmer, Waltham, MA, USA). Fluorescence values of treated samples were normalized to the control DMSO samples and plotted as means ± SEM for treated over control (T/C) samples. The reported values are the result of n ≥ 4 biological repeats.
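The treated-over-control normalization can be illustrated with a short MATLAB sketch; all numbers below are invented placeholders, not measurements from this study.

fluorCtrl = [9800; 10100; 9900; 10200];          % DMSO control wells, one per repeat
fluor     = [9500 7200 5100;                      % treated wells: rows = repeats,
             9700 7000 4800;                      % columns = three PARPi doses
             9600 6900 5000;
             9900 7300 5200];
TC  = fluor ./ repmat(fluorCtrl, 1, size(fluor, 2));  % treated over control per repeat
mTC = mean(TC);                                   % mean T/C per dose
sem = std(TC) ./ sqrt(size(TC, 1));               % SEM per dose
errorbar(1:size(fluor, 2), mTC, sem); ylabel('T/C (fraction of DMSO)');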
Colony-Forming Assay
A549 or MDA-MB-231 cells were seeded on 6-well plates at 250, 100, or 50 cells per well following transient siSCR or siXRN2 (1 nM) transfections for 24 h.The next day, cells were treated with 10 µM Rucaparib or 10 µM Olaparib for 24 h.Following treatment, the media were replaced with fresh media, and the cells were allowed to grow for 10 days.Then, the media was removed, and colonies were stained with crystal violet solution (1X PBS, 1% formaldehyde, 1% methanol, and 0.05% w/v crystal violet) for 20 min, thoroughly rinsed in water, and allowed to dry.Plates were imaged on an Azure c600 (Azure Biosystems, Dublin, CA, USA), and the colonies were counted using ImageJ.The counted colonies were normalized to the control, and data (means ± S.D.) were expressed as treated/control (T/C) samples.The reported values are the result of n ≥ 3 biological repeats.
Western Blot
The standard Western blotting protocol was followed as previously described, with the indicated changes [20]. For a typical Western blotting experiment, 1 × 10^6 cells were plated in 35 mm or 100 mm dishes and allowed to adhere overnight. The next day, cells were knocked down with the specified siRNA, followed by the treatments and times as indicated. Cells were then lysed in ice-cold RIPA buffer (Alfa Aesar, Haverhill, MA, USA) containing 1X protease and 1X phosphatase inhibitors (Thermo Fisher Scientific, Waltham, MA, USA). Whole-cell protein extracts were then sonicated and centrifuged at 14,500 rpm at 4 °C, and the supernatants were obtained. Protein concentrations were determined via the BCA assay (Thermo Fisher Scientific). Next, 15 µg of protein were separated using SDS-PAGE gels and transferred to nitrocellulose or PVDF membranes. The membranes were blocked in either 1X casein blocking buffer (Sigma-Aldrich) or 5% skim milk-TBST for 1 h and incubated with primary antibodies overnight at 4 °C. Following washing, blots were incubated with appropriate HRP-conjugated secondary antibodies for 1 h at room temperature. Unless otherwise stated, primary antibodies were diluted at 1:1000 in blocking buffer, α-tubulin was diluted at 1:5000, and all secondary IgG-HRP antibodies were diluted at 1:5000. Protein bands were detected using SuperSignal West Pico PLUS Chemiluminescent Substrate (Thermo Fisher Scientific) and imaged on an Azure c600 (Azure Biosystems). For quantification of Western blot images, protein band intensities were analyzed using ImageJ software (NIH; version 1.53c, http://imagej.net; accessed on 20 July 2020), and bands were normalized to the loading control. The reported relative intensities are the results of n ≥ 3.
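As a worked example of the band quantification, a minimal MATLAB sketch follows; the intensities are invented, chosen only to mirror the ~95% knockdown efficiency reported in the Results.

band    = [1520 75];      % XRN2 band intensity: [siSCR  siXRN2] (placeholders)
tubulin = [2050 2010];    % α-tubulin loading control intensity (placeholders)
rel = (band ./ tubulin) / (band(1) / tubulin(1));   % siSCR lane set to 1
fprintf('Relative XRN2: siSCR %.2f, siXRN2 %.2f\n', rel(1), rel(2));
% With these placeholder numbers the residual XRN2 is ~0.05, i.e. ~95% knockdown.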
Immunofluorescence
Immunofluorescence confocal microscopy was performed as previously described [29]. For a typical immunofluorescence experiment, cells were seeded on 6-well plates (75,000 cells/well) containing glass slides and allowed to adhere overnight. The next day, the cells were knocked down with 1 nM siRNA for 24 h. Cells were then treated with 10 µM Rucaparib for 12, 24, or 48 h or with DMSO vehicle control. Depending on the experiment, either 25 µM Camptothecin (CPT) for 2 h or 1 mM H2O2 in 1X PBS for 15 min was used as a positive control. Following treatments, cells were washed with 1X PBS and fixed using ice-cold methanol/acetic acid (3:1, v/v) overnight at −20 °C. Fixed cells were then rehydrated in 1X PBS at room temperature (3x, 5 min each). Next, cells were blocked in 1X PBS containing 5% normal goat serum for 1 h at room temperature. The cells were then incubated with the primary antibody in 1X PBS containing 5% normal goat serum for 3 h at room temperature. Cells were washed (3x, 5 min each, in 1X PBS containing 0.05% Tween-20) and then incubated with the appropriate fluorescently tagged secondary antibody in 1X PBS containing 5% normal goat serum for 1 h at room temperature. The following antibody dilutions were used: anti-S9.6 (1:500), anti-nucleolin (1:2000), anti-53BP1 (1:750), anti-PAR (1:1000), anti-pRPA32 (1:1000), anti-cleaved caspase-3 (1:1500), and Alexa Fluor 488 or Alexa Fluor 594 (1:1000 or 1:1500). Finally, cells were washed (3x, 5 min each, in 1X PBS containing 0.05% Tween-20), and the glass slides were mounted with ProLong Gold antifade mounting medium containing DAPI and sealed with nail polish.
For enzymatic treatment involving RNase H, the general procedure was adopted from a previous study with the indicated changes [30]. Slides were rehydrated in PBS as stated above and subjected to blocking for 30 min in 5% goat serum. Next, slides were treated with 2.5 U of RNase H1 (New England BioLabs, Ipswich, MA, USA) supplemented with 3 mM magnesium chloride in 5% goat serum for 1 h at room temperature. The slides were then rinsed in blocking buffer for 5 min, followed by the standard IF protocol described above.
Images were acquired using the Olympus FV10i confocal laser scanning microscope (Olympus, Golden, CO, USA) with a 60x oil immersion objective using a 2.0 aperture.Laser and intensity settings were kept constant based on the positive control.The background was reduced using Olympus FluoView version 4.2b.Raw images were imported into ImageJ, and the nuclear foci were determined using the BioVoxxel plugin v2.5.1.For PAR and cleaved-caspase-3 IF, nuclear or cellular fluorescence intensities were then quantified and normalized to the siSCR DMSO negative control.The reported values are representative of n ≥ 3 biological repeats.
Dot Blot Assay
For R-loop detection via dot blot assay, a previously described method was employed with the indicated changes [31]. Cells were first lysed in TE buffer containing 0.5% SDS with 200 µg/mL of RNase A (Thermo Fisher Scientific) for 3 h at 37 °C, followed by the addition of 160 µg/mL proteinase K (New England BioLabs) and incubation at 37 °C overnight; the lysates were phase-separated using phenol/chloroform/isoamyl alcohol (25:24:1), ethanol precipitated overnight by adding 2 volumes of 100% ethanol and 100 µL of 7.5 M ammonium acetate, washed 3 times with 70% ethanol, air dried, and resuspended in TE buffer. Genomic DNA was then fragmented by sonication and quantified. Genomic DNA samples were spotted on a nitrocellulose membrane, crosslinked with UV light (254 nm, 5 min), blocked with 1X casein buffer at 4 °C for 1 h, and incubated with mouse S9.6 antibody (1:1000) overnight at 4 °C. After washing with TBS-Tween (0.1%), the membrane was incubated with HRP-conjugated anti-mouse secondary antibody (1:2000) at 4 °C for 3 h, followed by washing, and then developed. For the RNase H-treated control, genomic DNA was pre-incubated with 2.5 U of RNase H (New England BioLabs) for 3 h at 37 °C. For the loading control, the membrane was stained using freshly made methylene blue staining solution (0.4 M sodium acetate, 0.4 M glacial acetic acid, 0.2% methylene blue) for 10 min and then briefly rinsed and imaged. For quantification, dot blot intensities were analyzed using ImageJ software (NIH; version 1.53c, http://imagej.net; accessed on 20 July 2020) and normalized to the siSCR or siSCR + RNase H negative control. The reported relative intensities are the results of n ≥ 3.
Comet Assay
The neutral comet assay was performed as described previously using the Comet Assay Kit (Trevigen, Gaithersburg, MD, USA) with the indicated changes [29]. A549 or MDA-MB-231 cells were plated on 6-well plates and adhered overnight. The next day, cells were knocked down with siSCR or siXRN2 (1 nM) for 24 h. Following transfection, the cells were treated with vehicle control (0.1% DMSO) or 10 µM Rucaparib for 48 h. H2O2 (1 mM in 1X PBS for 30 min) was used as a positive control. Next, cells were trypsinized, washed, and resuspended in 1X PBS at a concentration of 2.5 × 10^5 cells/mL, added to 37 °C LMAgarose (Trevigen) at a ratio of 1:10, and spread on a comet slide. The agarose was allowed to adhere to the slide at 4 °C for 30 min, followed by overnight lysis at 4 °C. The next day, slides were submerged in 1X Neutral Electrophoresis Buffer for 30 min at 4 °C, followed by electrophoresis at 20 V for 60 min at 4 °C in 1X Neutral Electrophoresis Buffer. Slides were placed in DNA precipitation solution (1 M ammonium acetate in 95% EtOH) for 30 min at room temperature and then immersed in 70% EtOH for 30 min at room temperature. Next, the slides were dried at 37 °C for 10 min and stained with a 1:10,000 dilution of SYBR Green in TE buffer (10 mM Tris-HCl pH 7.5 with 1 mM EDTA) for 30 min at room temperature. Slides were rinsed in distilled water and dried before imaging. The images were obtained using an Olympus FV10i confocal laser scanning microscope with a 10x objective. The comets were analyzed using the OpenComet v1.3 (www.biocomet.org; accessed on 10 November 2020) plug-in for ImageJ (version 1.53c, http://imagej.net; accessed on 20 July 2020). The minimum biological replicate size was n = 3.
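For orientation, the tail-moment readout produced by OpenComet is, in essence, the fraction of DNA in the tail multiplied by a tail-length measure; a minimal MATLAB sketch with invented per-comet values follows.

tailDNApct = [5 8 62 55];        % percent DNA in the tail, four example comets
tailLen    = [12 10 85 90];      % tail length (pixels)
tailMoment = (tailDNApct / 100) .* tailLen;   % tail moment per comet
fprintf('Mean tail moment: %.1f\n', mean(tailMoment));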
Statistical Analyses
Unless otherwise stated, the graphed data represent mean ± SEM. For the DNA content assays, two-tailed Student's t-tests were performed using the Holm-Sidak method to correct for multiple comparisons. For all other data, an ordinary one-way ANOVA with Dunnett's multiple comparisons test was used to compare treated samples to the control. The minimum biological replicate size was n = 3. Alpha was set to 0.05. GraphPad Prism 8 was used to perform the statistical analyses. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001.
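Because the Holm-Sidak correction is a simple step-down procedure, it can be sketched explicitly; the p-values in the MATLAB sketch below are illustrative, not values from this study.

p = [0.030 0.004 0.200 0.041];          % raw two-tailed p-values (illustrative)
m = numel(p);
[ps, order] = sort(p);                   % step-down: smallest p first
padj = 1 - (1 - ps) .^ (m - (1:m) + 1);  % Sidak adjustment at each step
padj = cummax(padj);                     % enforce monotone non-decreasing order
adjusted = zeros(1, m);
adjusted(order) = min(padj, 1);          % map back to the original order
disp(adjusted);                          % compare each to alpha = 0.05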
XRN2 Deficiency Sensitizes Cancer Cells to PARP Inhibition
Efficient transcription termination facilitated by XRN2 aids in R-loop resolution and when it is deficient in cells, R-loops have been shown to accumulate [25,26,32].As a consequence of defective termination promoting R-loops, XRN2-deficient cells show elevated DNA damage and PARP1 hyperactivation [20,25,26].In our recent study, a genetic approach led us to demonstrate the synthetic lethal relationship between XRN2 and PARP1 depletion, and we also found that XRN2 depletion in immortalized human fibroblast cells elicits sensitivity to PARP inhibitor (BMN 673 (Talazoparib)) treatment [25].To ensure the broad applicability of PARP inhibitors (PARPi) against XRN2 vulnerabilities, here, we evaluated the effects that other FDA-approved PARPi, Rucaparib, and Olaparib have in XRN2-depleted A549 (lung carcinoma) and MDA-MB-231 (mammary adenocarcinoma) cells (Figure 1).Transient depletion of XRN2 in the current study was achieved using siRNAs that were validated in our previous study, effectively eliminating the issues related to off-target effects of siRNAs [20,25].The concentrations of Rucaparib and Olaparib tested were based on reported IC 50 values against a panel of NCI60 cell lines and other previously reported values [33][34][35].Cell survival was evaluated via the DNA assay and clonogenic survival assay, as described in detail under Section 2, "Materials and Methods".Rucaparib significantly decreased the cell survival of A549 XRN2 knockdown cells (siXRN2) at multiple concentrations compared to control knockdown (siSCR) cells measured via the DNA content assay (Figure 1A).Additionally, A549 siXRN2 cells treated with Rucaparib (10 µM, 24 h) showed significantly reduced clonogenic survival compared to siSCR cells (Figure 1B).Successful XRN2 knockdown within the treatment window pertinent to Figure 1A,B was validated through Western blot (Figure 1C).Similar cell survival trends were observed in A549 siXRN2 cells treated with Olaparib when compared to the siSCR control (Figure 1D-F).Importantly, we then assessed the effect XRN2-depletion has in combination with PARPi in the triple-negative breast cancer cell line, MDA-MB-231 (Figure 1G-L).Similar to the A549 lung cancer cells, we observed a significant decrease in cell survival in MDA-MB-231 siXRN2 cells treated with PARPi when compared to the siSCR control for Rucaparib (Figure 1G-I) and Olaparib (Figure 1J-L).We treated these cells with up to 20 µM PARPi based on previously reported IC 50 values [35].Of note, despite ~95% knockdown efficiency at 1 nM siRNA, XRN2 depletion alone did not compromise cell survival, as highlighted in the siXRN2 DMSO control in both cell lines.Collectively, these data clearly show the potential of various FDA-approved PARPi to sensitize XRN2-depleted cancer cells.cells with up to 20 µM PARPi based on previously reported IC 50 values [35].Of note, despite ~95% knockdown efficiency at 1 nM siRNA, XRN2 depletion alone did not compromise cell survival, as highlighted in the siXRN2 DMSO control in both cell lines.Collectively, these data clearly show the potential of various FDA-approved PARPi to sensitize XRN2-depleted cancer cells.
Simultaneous XRN2 Depletion and PARP Inhibition Enhance R-loop Formation
Similar to XRN2 depletion, PARP1 knockdown also results in elevated levels of R-loop formation [36].It is conceivable that PARP1 inhibition may exert a similar effect as PARP1 knockdown in enhancing R-loop formation.Thus, it is important to evaluate the effect of XRN2 depletion in combination with PARP1 inhibition on R-loop formation to delineate the mechanistic basis of their synthetic lethal relationship.To address this, we utilized the effective conditions of PARP1 inhibition by Rucaparib defined in Figure 1 for all the studies described below.Nuclear R-loop foci were evaluated using the S9.6 antibody via immunofluorescence confocal microscopy as described in detail under Section 2. As a positive control, cells were treated with 25 µM camptothecin (CPT) for 2 h.Importantly, nonspecific binding of S9.6 antibody is recognized as a major limitation of detecting genuine R-loop via immunofluorescence-based method [30].To address this issue and to avoid S9.6 artifacts, several measures were considered in the present study that include quantifying nuclear R-loop foci, excluding S9.6 signal overlapping with nucleolin, and demonstrating RNase H sensitivity of the nuclear S9.6 signal.A549 cells depleted in XRN2 and treated with 10 µM Rucaparib displayed significantly higher R-loop formation when compared to the siSCR DMSO vehicle control, siSCR cells treated with 10 µM Rucaparib, and siXRN2 cells treated with DMSO (Figure 2A,B).Representative images shown in Figure 2 are whole-cell images with the nuclei outlined.Note that nuclear R-loop foci presented here are not overlapping with nucleolin and show clear sensitivity to RNase H treatment, indicating the genuine detection of R-loops in all of our samples.To further strengthen these findings, we performed dot blot analyses to detect R-loops in A549 cells using similar treatment conditions as described above.Dot blot analyses also clearly demonstrate that XRN2-deficient A549 cells treated with Rucaparib accumulate considerably higher RNase H-sensitive R-loops compared to the control (siSCR + DMSO) and individual (siSCR + Rucaparib or siXRN2 + DMSO) treatments (Figure 2C).Representative Western blot image along with the quantification, show that A549 cells used for Figure 2A-C were indeed depleted in XRN2 (Figure 2D).Consistent with A549 cells, MDA-MB-231 cells depleted in XRN2 and treated with 10 µM Rucaparib also exhibited significantly higher R-loop formation when compared to the siSCR cells with DMSO, siSCR cells treated with 10 µM Rucaparib, and XRN2-depleted cells with DMSO (Figure 2E-G).Together, these data suggest that the simultaneous depletion of XRN2 and pharmacological inhibition of PARP1 significantly elevate R-loop formation and likely contribute to the higher cellular sensitivity of the combination treatment.
Concurrent XRN2 Depletion and PARP Inhibition Exacerbate DSB Formation and Downstream Signaling
The increase in R-loop formation in cells simultaneously depleted in XRN2 and treated with PARP inhibitors prompted us to evaluate DNA double-strand break (DSB) formation as a contributing factor in enhancing cellular stress and promoting cell death (Figures 3 and 4).To evaluate DSBs, we first utilized the neutral comet assay, as described in Section 2. A549 cells depleted in XRN2 and treated with Rucaparib showed significantly higher mean comet tail moments than cells treated with siSCR DMSO control, siSCR with 10 µM Rucaparib, or siXRN2 DMSO (Figure 3A-C).Similarly, MDA-MB-231 cells depleted in XRN2 and treated with 10 µM Rucaparib exhibited a significantly higher mean comet tail moment than cells treated with the siSCR DMSO control, siSCR with 10 µM Rucaparib, or siXRN2 DMSO (Figure 3D-F).To further support the elevated levels of DSBs instigated by XRN2 deficiency in conjunction with PARPi, we utilized immunofluorescence confocal microscopy to monitor 53BP1 foci as a marker of DSB signaling (Figure 4).Consistent with the comet assay data, A549 cells depleted in XRN2 and treated with PARPi demonstrated increased 53BP1 foci when compared to the siSCR DMSO control, siSCR with PARPi, and siXRN2 DMSO at both the 48 and 72 h time points (Figure 4A-C).Analogous to A549 cells, MDA-MB-231 cells depleted in XRN2 and treated with PARPi showed elevated 53BP1 foci compared to the siSCR DMSO control, siSCR with PARPi, and siXRN2 DMSO at both the 48 and 72 h time points (Figure 4E-G).CPT (25 µM for 2 h) was used as a positive control for both A549 and MDA-MB-231 experiments and displayed an expected increase in 53BP1 foci (Figure 4A-C,E-G, respectively).XRN2 depletion for the treatment window is confirmed for both A549 and MDA-MB-231 cells (Figure 4D,H, respectively).Taken together, data presented in Figures 3 and 4 for A549 and MDA-MB-231 cancer cells clearly demonstrate that the concurrent depletion of XRN2 and PARPi exacerbate DSB formation at higher levels than each individual treatment alone and amplify the cellular stress response.
XRN2 Deficiency Enhances PARP1 Activity in Cancer Cells
We recently showed that immortalized human fibroblast cells deficient in XRN2 displayed increased PARP1 activity to counteract the DNA damage response and promote cell survival [20].To further substantiate these findings, we sought to evaluate the effect that XRN2 depletion has on PARP1 activation in A549 and MDA-MB-231 cancer cells.We investigated PARP1 activity by measuring PAR (poly(ADP-ribose)) levels via immunofluorescence confocal microscopy and Western blot analysis as described in Section 2. A549 cells showed significantly higher PAR formation after XRN2 knockdown compared to the basal PAR levels of siSCR DMSO control at both the 48 and 72 h time points (Figure 5A-C).Moreover, Rucaparib treatment reduced the PAR levels of XRN2-deficient cells to that of siSCR control cells at 48 h (Figure 5B) and a significant reduction at 72 h (Figure 5C).H 2 O 2 -treated cells were used as positive control and showed strong PAR staining (Figure 5A-C).To further support PARP1 hyperactivation prompted by XRN2 deficiency, we utilized Western blot analysis to detect the increase in PAR formation in A549 cells.As XRN2 is depleted in A549 cells over time (24,48, and 72 h), there is a significant increase in PAR formation when compared to the siSCR control knockdown (Figure 5D).Additionally, to emphasize the engagement of PARP1 in the absence of XRN2, we utilized a genetic approach and evaluated the effect of a double knockdown of XRN2 and PARP1 on PAR formation.A549 cells depleted in XRN2 hyperactivate PARP1, which is ameliorated with PARP1 depletion (Figure 5E).These findings are consistent with our immunofluorescence studies conducted with pharmacological inhibition of PARP1.Similar to A549 cells, MDA-MB-231 cells depleted in XRN2 showed a strong increase in PARP1 catalytic activity at both 48 and 72 h time points after knockdown and Rucaparib significantly blocked the formation of PAR (Figure 5F-I).Collectively, these data support the notion that cellular stress in the form of R-loops and DSBs induced by XRN2 depletion consequently hyperactivates PARP1.
XRN2 Deficiency Combined with PARP Inhibition Results in Enhanced Replication Stress
Replication stress is one of the major contributors inducing genomic instability, and conflicts between transcription and replication are a significant culprit [37].Unresolved co-transcriptional R-loops have been shown to stall replication fork progression, replication fork collapse, and lead to toxic DSBs [38,39].Thus, it is conceivable that enhanced replication stress driven by XRN2 deficiency combined with impaired PARP1 function could be a contributing factor in the synthetic lethality of XRN2 knockdown cells with PARPi.To test this notion, we sought to investigate the replication stress in XRN2-depleted cells treated with PARP1 inhibition by measuring phospho-RPA32(S4/S8) levels using immunofluorescence confocal microscopy as described in Section 2. A549 cells deficient in XRN2 and treated with Rucaparib demonstrated a significant increase in pRPA32(S4/S8) foci compared to the siSCR DMSO control, siSCR Rucaparib, and siXRN2 DMSO at both the 48 and 72 h time points (Figure 6A-D).Similar trends in pRPA32(S4/S8) foci formation were observed in MDA-MB-231 cells depleted in XRN2 and treated with 10 µM Rucaparib compared to the siSCR DMSO control at both the 48 and 72 h time points (Figure 6E-H).CPT-treated cells (25 µM for 2 h) were used as a positive control (Figure 6A-C,E-G).Collectively, these data demonstrate that enhanced replication stress is also an underlying cause of the synthetic lethality of XRN2 knockdown cells treated with PARPi.
Combined XRN2 Depletion and PARP Inhibition Activate Caspase-3
After evaluating the synthetic lethality and underlying cellular stress responses in cancer cells with XRN2 depletion and PARP1 inhibition (Figures 1-6), we investigated the cell death pathway employed under these conditions.We measured activated caspase-3 (cleaved caspase-3) levels via immunofluorescence confocal microscopy as described in Section 2. A549 cells deficient in XRN2 and treated with 10 µM Rucaparib exhibited a strong increase in activated caspase-3 levels compared to the siSCR DMSO control, siSCR with Rucaparib treatment, and siXRN2 (Figure 7A-C).XRN2-deficient MDA-MB-231 cells treated with 10 µM Rucaparib also showed significantly increased levels of cleaved caspase-3 compared to the siSCR DMSO control, siSCR with Rucaparib treatment, and siXRN2 (Figure 7D-F).CPT-treated cells (10 µM for 48 h) were used as a positive control (Figure 7A,B,D,E).Importantly, siXRN2 with DMSO alone did not cause a significant increase in the activated caspase-3 levels in either cell line.Together, these data indicate that the concurrent depletion of XRN2 and PARP inhibition activate caspase-3 to initiate cell death.
The importance of PARP1 activity in counteracting the cellular stress response instigated by XRN2 deficiency and promoting cell survival is becoming evident. Both lung and breast cancer cells depleted in XRN2 using previously validated siRNAs show sensitivity to the clinically relevant PARP inhibitors Rucaparib and Olaparib over a range of concentrations (Figure 1). These data are consistent with the sensitivity of XRN2-deficient non-cancer fibroblast cells toward Talazoparib [20] and of LN229 glioblastoma cells toward Niraparib [26]. Likewise, deficiency of another transcription termination factor, Kub5-Hera/RPRD1B, which interacts with XRN2 and promotes its recruitment to termination sites [20,44,47], shows similar sensitivity to PARP inhibition [33]. Our findings highlight the crosstalk between RNA and DNA metabolism mediated by XRN2 and strengthen the underlying translational implications.
The enhanced PARPi sensitivity of XRN2-deficient cells is driven by an aggravated cellular stress response that involves significantly higher R-loop formation (Figure 2), when compared to the individual deficiency or inhibition of XRN2 and PARP1.We utilized the S9.6 antibody to evaluate R-loops in the current study.However, the non-specific binding of the S9.6 antibody presents several limitations in evaluating the R-loops [30,[48][49][50].Notably, to ensure that the S9.6 foci are specific to the RNA-DNA hybrids of R-loops (Figure 2), we employed several measures in the present study including focusing on nuclear R-loop foci, excluding S9.6 signal overlapping with nucleolin and demonstrating RNase H-sensitivity of nuclear S9.6 signal in our experimental and control samples used in the immunofluorescence analyses.Also, we utilized dot blot analyses as another independent method to evaluate R-loops.Moreover, elevated R-loops in XRN2-deficient cells have been reported by others using S9.6-based ChIP-and DRIP-seq approaches [26,32].Additionally, we and others have shown previously that XRN2-deficient cells accumulate R-loops during active transcription that can be removed by the overexpression of RNase H [25,26]. Essentially, XRN2 deficiency promoting R-loop formation presented here is consistent with the findings from other studies [25,26,32].Recently, PARP1 was reported to interact with RNA-DNA hybrids and implicated in the R-loop biology [36].PARP1 inhibition promoting R-loop formation observed here is consistent with recent reports demonstrating enhanced RNase H-sensitive RNA-DNA hybrid formation after siRNA-mediated depletion or pharmacological inhibition of PARP1 [36,51].Also, we recently defined the XRN2 interactome and showed that it physically interacts with PARP1 [20].Collectively, the emerging interplay of XRN2 and PARP1 in R-loop metabolism is intriguing and further investigation is ongoing in our laboratory.
R-loops have emerged as a potent source of genomic instability, especially DSB formation [52,53]. Previously, we showed that both PCNA-positive and PCNA-negative XRN2-deficient cells display significantly higher levels of DSB formation compared to control cells, indicating that XRN2 deficiency instigates DNA damage in both replicating and non-replicating cells [25]. These observations were further validated through subsequent studies [20,26,32]. Consistent with these studies, here, we observed that XRN2 deficiency alone in lung and breast cancer cells instigates elevated DSB formation (Figures 3 and 4). It is conceivable that, in XRN2-deficient cells, PARP1 plays an important role in coordinating the resolution of elevated R-loops and/or sensing the consequent DNA damage. Thus, compromised PARP1 function in XRN2-deficient cells could lead to dire consequences. This notion is supported by the data presented here, which indicate that XRN2 deficiency combined with PARPi renders cells prone to accumulate R-loops (Figure 2), consequently creating elevated DSBs (Figures 3 and 4) and ultimately activating caspase-3 to initiate cell death (Figure 7). Moreover, similar to XRN2 deficiency, PARP1 inhibition results in elevated DSB formation in both replicating and non-replicating cells [51]. We previously showed that XRN2 deficiency not only elevates basal levels of DSBs but also compromises the repair of these breaks due to the loss of classical non-homologous end joining (cNHEJ) [25]. Recently, another study reported that compromised cNHEJ in XRN2-deficient cells likely arises from abrogated Ku70 binding at the sites of DSBs [26]. The authors also described that XRN2 loss causes an extended association of EXO1 with chromatin, resulting in extensive DNA end resection and inhibition of the homologous recombination (HR) pathway of DSB repair [26]. Enhanced PAR (poly(ADP-ribose)) formation in cancer cells presented here (Figure 5) is a direct consequence of increased R-loop formation and the subsequent DNA damage instigated by XRN2 depletion. This notion is supported by the following observations: (i) XRN2 deficiency leads to RNase H-sensitive R-loops (Figure 2); (ii) XRN2-deficient cells show R-loop-dependent DSB formation [25,26]; (iii) PARP1's physical interaction with R-loops results in its catalytic activation [51]. Taken together, these findings further support the PARPi sensitivity of XRN2-deficient cancer cells presented in this study.
Earlier, we reported that XRN2-deficient cells show increased replication stress, including elevated levels of phosphorylated Chk1 Ser317 and phosphorylated RPA32 Ser4/8 [25], while others showed that depletion of XRN2 does not alter cell cycle distribution or cell growth [26,54]. Moreover, PARP1 inhibition alone has also been shown to increase replication stress [55] and is known to cause G2/M arrest. That individual depletion of XRN2 or inhibition of PARP1 creates replication stress, as we observed here (Figure 6), is consistent with our previous study [25]. A potential caveat is that defective Okazaki fragment processing in XRN2-deficient cells could also lead to replication stress and PARP1 activation independent of R-loop formation [56]. This possibility is further complicated by the S9.6 antibody's capacity to bind RNA-DNA hybrids longer than six base pairs and potentially recognize unprocessed Okazaki fragments. However, the direct role of XRN2 in DNA replication has not been explored yet, and evidence of XRN2 deficiency promoting Okazaki fragment processing defects is currently lacking. Regardless of this possibility, mechanistically, the augmented replication stress in XRN2-deficient cells treated with PARP1 inhibitors (Figure 6) is attributed, at least partially, to elevated R-loop formation, and it is imperative to their synthetic lethal relationship, since XRN2 deficiency or PARP1 inhibition alone causes R-loop formation and R-loops are a potent source of replication stress [52,53].
Figure 1.
Figure 1. XRN2 knockdown sensitizes A549 and MDA-MB-231 cells to PARP1 inhibition. A549 (A-F) and MDA-MB-231 (G-L) cells treated with FDA-approved PARP1 inhibitors. (A) DNA assay of siSCR control (black bars) or siXRN2 (white bars) knockdown A549 cells treated with varying concentrations of Rucaparib for 24 h. (B) Colony-forming assay of A549 cells ±siXRN2 treated with 10 µM of Rucaparib or DMSO vehicle control. (C) Representative Western blot image of the siXRN2 knockdown confirmation and quantification with α-tubulin as a loading control. (D) DNA assay of A549 cells ±siXRN2 treated with varying concentrations of Olaparib for 24 h. (E) Colony-forming assay of A549 cells ±siXRN2 treated with 10 µM of Olaparib or DMSO vehicle control. (F) Representative Western blot image of the siXRN2 knockdown confirmation and quantification.
H2O2 (1 mM for 30 min) served as a positive control for induction of DNA damage and displayed a strong comet tail moment for both A549 and MDA-MB-231 experiments (Figure 3A,B,D,E, respectively).
Figure 2.
Figure 2. XRN2 depletion with simultaneous PARP1 inhibition further enhances R-loop formation in cancer cells.(A) Representative confocal immunofluorescent (IF) microscopy images of nuclei stained with DAPI (blue), anti-S9.6 (green), and nucleolin (red) in A549 cells treated with ±siXRN2, ±10 µM Rucaparib, and ± 2.5 U RNase H. Cells treated with 25 µM CPT for 2 h served as a positive control.The scale bar is 10 µm.(B) Following subtraction of the nucleolin signal using ImageJ (version 1.53c, http://imagej.net;accessed on 20 July 2020), nuclear S9.6 foci were determined via the BioVoxxel ImageJ plugin v2.5.1 (www.biovoxxel.de;accessed on 21 January 2024).The quantification plot is representative of 3 biological replicates and indicates the number of nuclear foci in 250 individual cells.The red bar on each dataset represents the mean.(C) Representative dot blot image (top) and quantification (bottom) of A549 cells treated with ±siXRN2, ±10 µM Rucaparib, and ± 2.5 U RNase H. Cells treated with 25 µM CPT for 2 h served as a positive control.Dot blot stained with methylene blue (MB) was used as a loading control.The quantification plot is representative of 3 biological replicates.(D) Representative Western blot image and quantification of the siXRN2 knockdown confirmation with α-tubulin as a loading control.(E) Representative confocal immuno-
Figure 3.
Figure 3. XRN2 depletion with simultaneous PARP1 inhibition elevates DSB formation in cancer cells. (A) Representative images of comets stained with SYBR Green in ±siXRN2 ±10 µM Rucaparib A549 cells. Cells treated with 1 mM H2O2 for 30 min served as a positive control. The scale bars are 100 µm. (B) Quantification of the comet tail moments from images processed in ImageJ (version 1.53c, http://imagej.net; accessed on 20 July 2020) utilizing the plugin OpenComet v1.3 (www.biocomet.org; accessed on 21 January 2024). The graph represents three biological repeats and describes the tail moments of 200 individual comets. The red bar on each dataset represents the mean. (C) Representative Western blot image and quantification indicating the successful knockdown of XRN2, with α-tubulin as a loading control.
Figure 4.
Figure 4. XRN2 depletion with simultaneous PARP1 inhibition increases DSB signaling in cancer cells. (A) Representative confocal immunofluorescence microscopy images of nuclei stained with DAPI (blue) and 53BP1 (red) in ±siXRN2 ±10 µM Rucaparib A549 cells. Cells treated with 25 µM CPT for 2 h served as a positive control. The scale bar is 10 µm. (B,C) Quantification of 53BP1 foci from images obtained at 48 h (B) and 72 h (C), processed in ImageJ. The graph is representative of 3 biological repeats and indicates the nuclear foci of 250 individual cells. The red bar on each dataset represents the mean. (D) Representative Western blot image and quantification indicating the successful knockdown of XRN2 in A549 cells with α-tubulin as a loading control. (E) Representative confocal immunofluorescence microscopy images of nuclei stained with DAPI (blue) and 53BP1 (red) in ±siXRN2 ±10 µM Rucaparib MDA-MB-231 cells. Cells treated with 25 µM CPT for 2 h served as a positive control. The scale bar is 10 µm. (F,G) Quantification of 53BP1 foci from images obtained at 48 h (F) and 72 h (G), processed in ImageJ.
Figure 6.
Figure 6. XRN2 depletion with simultaneous PARP1 inhibition increases replication stress in cancer cells. (A) Representative confocal immunofluorescence microscopy images of nuclei stained with DAPI (blue) and pRPA32 (red) in ±siXRN2 ±10 µM Rucaparib A549 cells. Cells treated with 25 µM CPT for 2 h served as a positive control. The scale bar is 10 µm. (B,C) Quantification of the nuclear foci from images obtained at 48 h (B) and 72 h (C), processed in ImageJ. The graph is representative of 3 biological repeats and indicates the nuclear pRPA32 foci in 250 cells. The red bar on each dataset represents the mean. (D) Representative Western blot image and quantification indicating the successful knockdown of XRN2 in A549 cells with α-tubulin as a loading control. (E) Representative confocal immunofluorescence microscopy images of nuclei stained with DAPI (blue) and pRPA32 (red) in ±siXRN2 ±10 µM Rucaparib MDA-MB-231 cells. Cells treated with 25 µM CPT for 2 h served as a positive control. The scale bar is 10 µm. (F,G) Quantification of the nuclear foci from images obtained at 48 h (F) and 72 h (G), processed in ImageJ. The graph is representative of 3 biological repeats and indicates the nuclear pRPA32 foci in 250 cells. (H) Representative Western blot image and quantification indicating the successful knockdown of XRN2 in MDA-MB-231 cells with α-tubulin as a loading control. White squares indicate the highlighted single cell. p-values were obtained via an ordinary one-way ANOVA using Dunnett's multiple comparisons test. ****, p < 0.0001, comparing treatments to the control (siSCR + DMSO) or as indicated. The uncropped blots are shown in Supplementary Figure S6.
Figure 7.
Figure 7. XRN2 depletion with PARP1 inhibition activates caspase-3 in cancer cells. (A) Representative confocal microscopy images evaluating the activation of caspase-3 using the cleaved caspase-3 antibody (green) and nuclei stained with DAPI (blue) in ±siXRN2 ±10 µM Rucaparib A549 cells. Cells treated with 10 µM CPT for 48 h were used as a positive control. The scale bar is 10 µm. (B) Quantification of the cleaved caspase-3 fluorescence signal from images processed in ImageJ. The graph represents three biological repeats and indicates the fluorescence intensities of 500 individual cells normalized to siSCR + DMSO. The red bar on each dataset represents the mean. (C) Representative Western blot image and quantification indicating the successful knockdown of XRN2 in A549 cells with α-tubulin as a loading control. | 2024-02-06T18:27:47.266Z | 2024-01-30T00:00:00.000 | {
"year": 2024,
"sha1": "18b33e3aac02bbec7a3f6680d5adebc1467b38a1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/16/3/595/pdf?version=1706628773",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "48c2d68e4f4b438ec69818fc3d67039e397175db",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
AT1 inhibition mediated neuroprotection after experimental traumatic brain injury is dependent on neutrophils in male mice
After traumatic brain injury (TBI), cerebral inflammation with invasion of neutrophils and lymphocytes is a crucial factor in the process of secondary brain damage. In TBI, the intrinsic renin-angiotensin system is an important mediator of cerebral inflammation, as inhibition of the angiotensin II receptor type 1 (AT1) reduces secondary brain damage and the invasion of neutrophil granulocytes into injured cerebral tissue. The current study explored the involvement of immune cells in the neuroprotection mediated by AT1 inhibition following experimental TBI. Four cohorts of male mice were examined, investigating the effects of neutropenia (anti-Ly6G antibody-mediated neutrophil depletion; C57BL/6), lymphopenia (RAG1 deficiency, RAG1−/−), and their combination with candesartan-mediated AT1 inhibition. Reduction of neutrophils and lymphocytes, as well as AT1 inhibition in wild type and RAG1−/− mice, reduced brain damage and neuroinflammation after TBI; in neutropenic mice, however, candesartan had no effect. The findings suggest that AT1 inhibition may exert neuroprotection by reducing the inflammation caused by neutrophils, ultimately leading to a decrease in their invasion into cerebral tissue.
Treatment. Application of antibodies for neutrophil granulocyte depletion and control antibodies.
For the depletion of neutrophils in WT mice (studies A and C), the Ly6G-specific antibody (anti-mouse, clone 1A8) was used. In the control antibody group, we used the isotype control antibody immunoglobulin IgG2a (rat, clone 2A3). Both antibodies, anti-Ly6G (1A8) and IgG2a (2A3) (BXCell; West Lebanon, USA), were diluted in PBS to a final concentration of 2.5 mg/mL. We injected 0.2 mL (0.5 mg) of anti-Ly6G antibody (ND) and the same volume of the control IgG2a antibody (Ctrl) intraperitoneally (i.p.) 24 h before (studies A and C) and 24 h after experimental TBI (study C).
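For orientation, the dosing arithmetic can be checked in a few lines. This is a minimal sketch: the 25 g body weight used below is a representative value (the study reports mice of roughly 22-26 g), not a figure stated in this paragraph.

```python
# Antibody dose check for the anti-Ly6G / IgG2a injections described above.
concentration_mg_per_ml = 2.5   # final dilution in PBS
injection_volume_ml = 0.2       # volume injected i.p.

dose_mg = concentration_mg_per_ml * injection_volume_ml
print(f"dose per injection: {dose_mg:.2f} mg")   # 0.50 mg = 500 ug

# Per-gram dose for a representative 25 g mouse (assumed body weight):
body_weight_g = 25.0
print(f"approx. {dose_mg * 1000 / body_weight_g:.0f} ug/g")  # ~20 ug/g
```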
Study C: effect of AT1 inhibition in neutrophil depleted mice 3 days after TBI. Mice were randomized to treatment (24 h before and repeated 24 h after TBI) with either anti-Ly6G (ND) or IgG2a control antibody (Ctrl). They were subjected to CCI and then randomly assigned to additional treatment with candesartan (Cand) or vehicle solution (Veh), administered 30 min after TBI and then repeated daily, 24 and 48 h after TBI (Fig. 1). The animals were thus randomly allocated to four treatment groups: Ctrl-Cand, Ctrl-Veh, ND-Cand and ND-Veh (n = 12/group). After the 72-h observation period, brains were removed for quantification of lesion volume, cytokine expression and activated microglia. Blood samples were withdrawn for hematological quantification of white blood cells (WBC), lymphocytes and neutrophils. For comparison we used naïve (non-operated) WT mice (n = 6; Fig. 1).

Figure 1. Experimental timeline. Study C: effect of AT1 inhibition in neutrophil depleted mice 3 days after TBI: C57BL/6 mice were randomized to treatment (24 h before and 24 h after TBI) with either anti-Ly6G or IgG2a control antibody. They were subjected to controlled cortical impact injury (CCI) and then randomly assigned to additional treatment with candesartan or vehicle solution, performed 30 min after TBI and then repeated daily, 24 and 48 h after TBI. Animals were randomly allocated to four treatment groups (n = 12/group). To enhance the comparability of the effects on brain tissue infiltration dynamics between neutropenic and lymphopenic mice, we selected an observation time of 72 h after TBI. 72 h after CCI, brains were removed for quantification of lesion volume, cytokine expression and activated microglia (histology, PCR). Study D: effect of AT1 inhibition in lymphopenic RAG1-deficient mice 3 days after TBI: RAG1-deficient mice (RAG1 −/−) were randomly assigned to candesartan or vehicle solution treatment (n = 12/group) at 30 min, 24 and 48 h after CCI. 72 h after CCI, lesion volume, cytokine expression and activated microglia were quantified (histology, PCR). CCI, controlled cortical impact; N, neurological assessment; BW, body weight; BP, blood pressure; H, hematology (blood cell count); histology, lesion volume, activated microglia/macrophages; PCR, normalized (PPIA) gene expression (mRNA) of MPO, TNFα, TGFβ, IL1β, IL6 and iNOS.

Study D: effect of AT1 inhibition in lymphopenic RAG1-deficient mice 3 days after TBI. RAG1-deficient mice were randomly assigned to candesartan or vehicle solution treatment (RAG1 −/− -Cand, RAG1 −/− -Veh; n = 12/group) at 30 min, 24 and 48 h after TBI (Fig. 1). As in study C, 72 h after TBI, lesion volume, cytokine expression and activated microglia were quantified and hematologic assessment was performed. Additionally, we used naïve RAG1 −/− mice (n = 6; Fig. 1).
Measurement of physiological parameters. Body weight of each mouse was monitored before and after experimental TBI. Blood pressure was measured 5 min before and after CCI under general anesthesia at the tail using a modified NIBP system (RTBP 2000, Kent Scientific, Torrington, USA; A/D converter: PCI 9112, Adlink Technology, Taiwan; software: Dasylab 5.0, measX, Germany; Flexpro 6.0, Weisang, Germany) as previously described 20. Additionally, blood pressure values were determined in awake animals daily for 8 days before (training phase) and for 2 days after CCI. Perioperative body temperature was measured with a rectal temperature probe (Physitemp; Clifton, NJ, USA).
Assessment of functional outcome. In studies A, C and D, neurological outcome was assessed using the rotarod performance test (Heidolph Instruments GmbH & Co.; Schwabach, Germany) as previously described [22][23][24][25]. After a pre-training phase (mice remained on a rotating rod for 20 s at 4 rpm) two days before TBI, the time to fall from the accelerating rod in the 2-min test period was recorded. This test assesses coordination and motor function and was performed 1 day before and 24 and 72 h after CCI. In study B, functional outcome was determined by the Neurological Severity Score 26. In addition to the rotarod test, in studies C and D, functional outcome was also determined by the modified neurological severity score (mNSS; modified after Tsenter et al. 26) 1 day before and 24 and 72 h after CCI 4. To calculate the mNSS, general behavior, alertness, motor ability and balance were rated with 6 different tasks. Each task was scored from 0 (normal) up to 3 (failed task). The mNSS ranges from 0 (healthy) to 16 (severely impaired) points 27 (Table 1). All neurological tests were performed by investigators blinded to the experimental group allocations.

Table 1. Modified neurological severity score (mNSS). The modified Neurological Severity Score (mNSS) was designed on the basis of the Neurological Severity Score introduced by Tsenter et al. 26. The mNSS focuses on motor function and behavioral deficits and was performed 1 day before CCI and on posttraumatic days 1 and 3 (day 5 in study C) after experimental TBI.
Flow cytometry and blood cell count. At the end of the observation period, great care was taken to perform an accurate routine differential blood cell count. Under deep anesthesia, EDTA anti-coagulated blood samples were taken from the retro-orbital veins as previously described 28. The differential blood cell count was obtained via the ADVIA 2120i Hematology system by a medical technician specialized in murine blood analyses and blinded to experimental group allocation. The ADVIA 2120i is a Ly6G-independent, fully automated veterinary flow cytometry analyzer, validated for murine blood analyses. The analyses were performed following the standardized protocol of the Institute of Clinical Chemistry and Laboratory Medicine of the University Medical Center of Mainz (ADVIA 2120i Hematology System; Siemens Healthcare, Erlangen, Germany; https://www.siemenshealthineers.com/en-us/hematology/systems/advia-2120-hematology-system-with-autoslide). The hematology analyzer ADVIA 2120i is a flow cytometry-based system that uses laser light scatter to differentiate and count WBC in two different ways: the peroxidase method and the lobularity/nuclear density method 29. The peroxidase method uses myeloperoxidase (MPO) to detect, differentiate and quantify the WBC when they pass through the flow cell. With the help of an optical system all WBC are counted, and peroxidase reagents are used to distinguish between MPO-positive cells, such as neutrophils, eosinophils, and monocytes, and peroxidase-negative cells, which include lymphocytes and basophils 29. The cells absorb light in proportion to the amount of peroxidase stain present, and this peroxidase activity parameter is represented on the x-axis of the peroxidase cytogram (Fig. 2a). Cells scatter light in proportion to their size, and this cell size parameter is represented on the y-axis of the cytogram (Fig. 2a). When the light-scatter and absorption data are plotted, distinct populations or clusters are formed, and cluster analysis is applied to identify different cell populations 29. In the lobularity/nuclear density channel, surfactant and phthalic acid are used to lyse red blood cells and platelets, and to strip away the cytoplasmic membrane from all leukocytes except basophils. Cells are then counted and classified according to size, lobularity, and nuclear density 29. By the cluster analysis, the polymorphonuclear cells (neutrophils), the mononuclear leukocytes as well as the basophils are quantified.
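The gating logic of the peroxidase channel can be illustrated with a toy example. This is only a conceptual sketch: the thresholds, distributions, and cell numbers below are illustrative assumptions, not ADVIA 2120i parameters.

```python
import numpy as np

# Toy sketch of peroxidase-channel gating: each event has a peroxidase
# absorption value (x) and a light-scatter size value (y), both in
# arbitrary units. All values and cutoffs are illustrative only.
rng = np.random.default_rng(0)
events = np.column_stack([
    np.concatenate([rng.normal(0.8, 0.10, 50),    # MPO-positive (e.g. neutrophils)
                    rng.normal(0.1, 0.05, 50)]),  # MPO-negative (e.g. lymphocytes)
    np.concatenate([rng.normal(0.7, 0.10, 50),
                    rng.normal(0.3, 0.05, 50)]),
])

def classify(peroxidase, size, mpo_cut=0.5, size_cut=0.5):
    # Simple rectangular gates standing in for the instrument's cluster analysis.
    if peroxidase > mpo_cut:
        return "MPO-positive (neutrophil/eosinophil/monocyte)"
    return "MPO-negative (lymphocyte/basophil)" if size < size_cut else "large unstained"

labels = [classify(x, y) for x, y in events]
print({lab: labels.count(lab) for lab in set(labels)})
```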
Histologic and immunohistochemical evaluation. According to our previous protocol 4, brains were removed under deep anesthesia. For tissue evaluation, the brains were frozen in powdered dry ice and stored at −20 °C. They were then cut in the coronal plane with a cryostat (HM 560 Cryo-Star, Thermo Fisher Scientific, Walldorf, Germany) as previously described in detail 8. The first slide was defined according to the first section corresponding to bregma +3.14 mm in the Mouse Brain Library (www.mbl.org). 16 sections (12 and 20 µm) were collected at 500 µm intervals and placed on Superfrost Plus slides (Thermo Fisher Scientific, Germany). In cresyl violet (Merck, Darmstadt, Germany) stained sections (12 µm), the total area of both hemispheres and the injured brain tissue area were determined for each section and animal using a computerized image analysis system (Delta Pix Insight; Maalov, Denmark) by an investigator blinded to the group allocation. The total hemispheric brain volumes and the lesion volumes were calculated by summing the respective areas over all sections and multiplying by the 0.5 mm sampling interval (volume = Σ area × 0.5 mm) 4,8. Immunohistochemical staining was performed as described before 27. Briefly, cryosections (20 µm) were fixed in 4% paraformaldehyde in phosphate buffered saline (PBS) and incubated with blocking solution (5% goat serum, 1% bovine serum albumin, and 0.1% TX-100 in PBS) for 1 h at room temperature. Primary antibodies specific for ionized calcium-binding adapter molecule-1 (Iba-1; rabbit anti-mouse, anti-Iba-1 antibody; Wako Chemicals GmbH, Neuss, Germany) or Gr1 (rat anti-Ly6g + Ly6c, clone RB6-8C5, Abcam, UK) were applied in blocking solution overnight at 4 °C. The sections were washed, incubated with secondary biotin-conjugated antibodies (goat anti-rabbit IgG; Merck; Darmstadt, Germany) and processed according to the manufacturer's instructions using the Vectastain Elite ABC Kit (Vector Laboratories, Burlingame, USA), or with fluorophore-conjugated secondary antibodies (goat anti-rabbit IgG, biotinylated; Merck; Darmstadt, Germany, or goat anti-rat IgG Alexa Fluor 488, Thermo Fisher). Images of anti-Iba-1 immunostaining were taken at ×20 magnification (Axiovert, Zeiss, Germany), and of anti-Gr-1 immunostaining at ×10 magnification (Keyence, BZ-X800). The total number of Iba-1-positive cells was counted at bregma −1.28 mm in a region of interest (ROI) of 0.52 × 0.65 mm² in the cortical tissue adjacent to the lesion by an investigator blinded to randomization, using ImageJ software (National Institutes of Health, USA). Iba-1-immunolabeled cells with appropriate morphology and appearance 30 were identified as activated microglia/macrophages and assessed in this ROI. The rationale for counting Iba-1-positive cells in an area adjacent to the lesion rather than within the lesioned area was that inside the lesion, where the tissue is essentially destroyed, microglia/macrophages are almost absent; in the perilesional area, in contrast, there is robust activation of microglia/macrophages. Results are presented as the number of activated Iba-1-positive cells/mm². Unfortunately, technical difficulties prevented us from obtaining an adequate quantitative assessment of Gr1-positive cells. Therefore, we present qualitative images with Gr1 staining (Fig. 2b).
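The section-based volumetry described above reduces to a Cavalieri-type estimate: sum the measured areas and multiply by the 0.5 mm sampling interval. The sketch below shows this arithmetic together with the ROI density calculation; the area values and cell count are hypothetical.

```python
# Cavalieri-type volume estimate from serial coronal sections sampled
# every 0.5 mm: each measured area represents a 0.5 mm-thick slab.
SECTION_INTERVAL_MM = 0.5

def volume_mm3(areas_mm2, interval_mm=SECTION_INTERVAL_MM):
    return sum(areas_mm2) * interval_mm

lesion_areas = [0.0, 1.2, 2.5, 3.1, 2.4, 1.0, 0.2]  # mm^2 per section (hypothetical)
print(f"lesion volume: {volume_mm3(lesion_areas):.2f} mm^3")

# Iba-1 density in the perilesional ROI (0.52 x 0.65 mm^2):
roi_area_mm2 = 0.52 * 0.65
cells_counted = 85  # hypothetical count
print(f"activated microglia/macrophages: {cells_counted / roi_area_mm2:.0f} cells/mm^2")
```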
Gene expression analysis. Brain tissue samples from the lesion and perilesional area of 500 µm coronal cryostat sections between the histologic slice intervals were collected, snap frozen in liquid nitrogen and stored at −80 °C. As described previously in detail 4,31, after tissue sampling, mRNA extraction, cDNA synthesis and qPCR were performed (lysis: Qiazol reagent, Qiagen, Hilden, Germany; homogenization: MM300 mill mixer, Retsch, Haan, Germany; RNA isolation: RNeasy Lipid Tissue Kit, Qiagen, Hilden, Germany; RNA concentration determined by spectrometer: NanoVue System, GE Healthcare Europe, Munich, Germany; RNA-to-cDNA reverse transcription: Verso cDNA Kit, ABgene, Hamburg, Germany; cDNA amplification: real-time LightCycler 480 PCR System, Roche). PCR fragments of all applied genes were generated by PCR on an Eppendorf Thermocycler gradient (Eppendorf, Hamburg, Germany). The PCR products were purified with the QIAquick PCR Purification Kit (Qiagen) according to the manufacturer's instructions, and the DNA concentration was determined using NanoVue. A standard curve for absolute quantification was generated with PCR DNA for each PCR product (10^1-10^7 DNA copies/µL), showing similar and good efficiency (90-110%; LightCycler Software, Roche) and linearity. Equal amounts of cDNA (1 µL) of each sample were analyzed in duplicates and amplified by the real-time LightCycler 480 PCR System (Roche). Real-time RT-PCR kits were used according to the manufacturer's instructions. All assays were conducted by an investigator blinded to group allocation. Using mouse-specific primers and probes (Table 2) and optimized temperature conditions for qPCR, absolute copy numbers of the target genes, tumor necrosis factor α (TNFα), transforming growth factor β (TGFβ), interleukin 1β (IL1β), interleukin 6 (IL6), inducible nitric oxide synthase (iNOS) and myeloperoxidase (MPO), were calculated and then normalized against the absolute copy numbers of cyclophilin A (PPIA) 4, chosen as the single normalizer 33 based on recent findings in our housekeeping gene study 32. In order to improve comparability of the mRNA expression data between different treatment groups, and to eliminate qPCR kit dependent differences and limitations, qPCR data were normalized to PPIA and then related to the normalized target gene expression from naïve tissue samples of the corresponding brain region 34. Therefore, normalized target gene expression values are expressed as % naïve expression 4.
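The "% naïve expression" normalization reduces to a ratio of ratios. A minimal sketch follows; the naïve MPO/PPIA ratio is taken from the Results (5.48 × 10⁻⁶), while the sample copy numbers are hypothetical.

```python
# Normalized gene expression as "% naive", following the scheme above:
# absolute target copies are divided by PPIA copies, then related to the
# same ratio in naive tissue of the corresponding brain region.
NAIVE_MPO_RATIO = 5.48e-6  # normalized naive MPO expression (mRNA/PPIA), from Results

def pct_naive(target_copies, ppia_copies, naive_ratio=NAIVE_MPO_RATIO):
    return 100.0 * (target_copies / ppia_copies) / naive_ratio

# Hypothetical sample: 8.7 MPO copies per 1e6 PPIA copies -> ~159% naive
print(f"{pct_naive(8.7, 1.0e6):.0f}% naive expression")
```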
Statistical analysis.
All experiments were randomized and performed by investigators blinded to the treatment groups (computer-based randomization: www.pubmed.de/tools/zufallsgenerator). In order to determine the required sample size, an a priori power analysis using G*Power 35 was performed with the main variable, the primary endpoint, lesion volume, based on data from previously published studies 4,8. Based upon the data of these studies, the a priori power analysis for an effect size of d = 1.75, a statistical power (1−β) of 0.95 and a significance level (α) of 0.05 yielded a sample size of n = 7 per group. In order to have sufficient power, we decided to use larger sample sizes (n = 8-12 per group) 36. Statistical analysis was performed using GraphPad Prism 8 Statistical Software (GraphPad Software Inc., La Jolla, CA, USA). Data distribution was tested by the Shapiro-Wilk test. Comparisons of parametric and nonparametric data between two independent groups were done using the Welch t-test and the Wilcoxon rank sum test, respectively. For the statistical analysis of mNSS we performed ANOVA on ranks with the Kruskal-Wallis test, corrected for multiple comparisons using Dunn's test. In this multi-arm parallel group randomized trial, for comparison of multiple independent groups, if the Shapiro-Wilk normality test was passed, one-way analysis of variance (one-way ANOVA) with post-hoc Holm-Šidák comparisons (comparisons between all groups) was employed. In experimental groups where two separate treatment factors (neutrophil depletion and AT1 inhibition) were present, a two-way analysis of variance (two-way ANOVA) was performed. Physiologic data, blood cell counts, lesion volumes, numbers of activated microglia and mRNA expression data were compared between experimental groups with two-way ANOVA and post hoc all-pairwise multiple comparison procedures (Holm-Šidák method). To evaluate group differences in repeated measurements from the same animals (body weight, systolic blood pressure), repeated measures (RM) two-way ANOVA (two-factor repetition) was applied (factors: treatment and time), followed by Šidák's multiple comparisons test. Whenever there were missing values in the repeated measures dataset and a two-way ANOVA was not possible, repeated measures data (mNSS, rotarod) were analyzed with a mixed effect model using the restricted maximum likelihood (REML) method with Holm-Šidák's multiple comparison test. The p values were adjusted for multiple comparisons. Values of p < 0.05 were considered significant. To identify outliers in our dataset, we employed a combination of the

Figure 2. Depletion of neutrophils reduces brain damage after TBI (Study A). Male C57BL/6 mice were randomized to i.p. injection with specific anti-Ly6G (1A8) for neutrophil depletion (ND) or control antibody (Ctrl; IgG2a (2A3)) 24 h before TBI. With respect to the maximum brain tissue infiltration of neutrophils, lesion volume and cerebral inflammation were determined 4 h (ND-4h, Ctrl-4h; n = 6/group) and 24 h after TBI (ND-24h, Ctrl-24h; n = 8/group; p.i. = post injury). In the diagrams, control antibody IgG2a treated groups are depicted in white (Ctrl); neutrophil granulocyte depleted, anti-Ly6G treated groups are shown in light grey (ND). (a) Neutrophils in the WBC count: (a) top: representative cytograms of mice treated with control antibody IgG2a (Ctrl), and of mice treated with anti-Ly6G (ND) for neutrophil depletion. The cytograms, obtained with the ADVIA 2120i Hematology System, contain the two channels used for accurate quantification: peroxidase and basophil (lobularity/nuclear density).
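The a priori power analysis described in the Statistics section can be reproduced approximately with statsmodels in place of G*Power. Whether the original calculation was one- or two-sided is not stated, so both are shown here as an assumption.

```python
# Sample-size calculation analogous to the G*Power analysis described above
# (two-sample t-test; d = 1.75, power = 0.95, alpha = 0.05).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_one_sided = analysis.solve_power(effect_size=1.75, alpha=0.05, power=0.95,
                                   alternative="larger")
n_two_sided = analysis.solve_power(effect_size=1.75, alpha=0.05, power=0.95,
                                   alternative="two-sided")
print(f"n per group (one-sided): {n_one_sided:.1f}")  # ~7-8, close to the reported n = 7
print(f"n per group (two-sided): {n_two_sided:.1f}")  # ~9-10
```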
Results
Perioperative physiological parameters were stable in all groups. Peri- and intraoperative body temperature and systolic blood pressure were within the physiological range in all mice, without considerable differences between groups (Table 3). As published earlier, in our standardized anesthesia and operation setting, values were stable and within physiological limits 6.
Low-dose candesartan treatment did not influence blood pressure after CCI. As the specific AT1 antagonist candesartan is used for the treatment of arterial hypertension, we determined its influence on arterial blood pressure. In the present study, low dose (0.1 mg/kg) candesartan treatment did not alter blood pressure, as shown in previous studies 4,8. During the observation period, blood pressure was within the physiological range in all groups (Table 3).
After TBI, bodyweight was not affected by anti-Ly6G, RAG1-deficiency or AT1 inhibition. In naïve mice, bodyweight was 24.7 ± 2.2 g. Before TBI, initial bodyweight was 25.6 ± 0.9 g in Ctrl-4h mice and 25.5 ± 1.1 g in ND-4h mice. After 24 h, bodyweight was reduced in both Ctrl and ND, without any difference (before: 25.8 ± 0.7 and 26.0 ± 1.2 g; 24 h after TBI: 24.1 ± 1.2* and 24.3 ± 1.3* g, for Ctrl and ND, respectively; *p < 0.001 vs. before TBI). The body weights of RAG1 +/+ and RAG1 −/− mice were similarly reduced after CCI (24-h investigation: before: 22.7 ± 0.8 and 24.2 ± 1.9 g; 24 h after TBI: 20.3 ± 1.2* and 21.5 ± 1.7* g, for RAG1 +/+ and RAG1 −/−, respectively; *p < 0.001 vs. before TBI). In the 5-day investigation, after a body weight loss at 1 day after CCI, RAG1 +/+ and RAG1 −/− animals regained weight by 5 days after CCI in both groups (before TBI: 22.4 ± 1.1 and 22.2 ± 1.2 g; 1 day after TBI: 20.6 ± 1.2* # and 20.4 ± 0.5* # g; 5 days after TBI: 22.3 ± 0.9 and 21.8 ± 1.5 g, for RAG1 +/+ and RAG1 −/−, respectively; *p < 0.001 vs. before and # p < 0.001 1 vs. 5 days after TBI). In both investigations, RAG1 deficiency did not affect posttraumatic body weight loss. In all groups, there was a significant posttraumatic decrease of bodyweight, with a minimum on day 2 and an increase on day 3. However, neither neutrophil depletion, nor RAG1 deficiency, nor AT1 inhibition affected posttraumatic body weight loss, compared to control antibody, wild type or vehicle solution treated mice (Table 4).

Table 3. Systolic blood pressure after TBI. Systolic blood pressure [mmHg] during the 36 posttraumatic hours was within the physiologic range and not affected by low dose candesartan treatment. Time point 0 represents the intraoperative measurement immediately after CCI induction under general anesthesia (in italic characters). At the following time points, measurements were performed in awake animals (normal characters). At certain time points there were significant differences, between groups (# p < 0.05 ND-Cand vs. Ctrl-Cand) and within groups (ND-Veh: § p < 0.05 vs. 0 (CCI); ND-Cand: $ p < 0.05 vs. 6 h).

Neutropenia was achieved with anti-Ly6G in WT mice; lymphopenia was present in RAG1 −/− mice. Differential blood cell counts were performed (Table 5). In ND mice, the WBC reduction was more distinct and sustained (ND-4h and ND-24h), with a lasting, relatively low WBC count up to day 3 after TBI (ND-Veh and ND-Cand). In non-neutrophil-depleted mice, as a response to TBI, there was a shift from a lymphocyte-dominated (84%) WBC count to an elevation of the neutrophil fraction from 11% (naïve) to 43%, followed by a continuous decrease (by 33%) to 29% at 24 h (p < 0.05; Fig. 2a) and to 16% at 72 h in Ctrl-treated mice after TBI. In ND mice, in contrast, there was no initial elevation of neutrophils after TBI. Moreover, at 24 and 72 h after TBI there was a significant neutropenia in ND mice alongside elevated lymphocyte counts. Candesartan did not affect neutrophil counts (Table 5). Naïve RAG1 −/− mice are leukopenic; reduced lymphocyte counts (40%) are compensated by elevated neutrophil numbers (34%). After TBI, there was a decrease of WBC in vehicle treated RAG1 −/− mice, whereas in candesartan treated mice the decrease was not significant. In RAG1 −/− mice there was also a decrease of lymphocytes (25%) and an increase of neutrophils to 50% after TBI, which were not affected by AT1 inhibition (Table 5). There was a sustained posttraumatic decrease of monocytes in all groups.
In candesartan treated ND mice, however, there was a normalization of the monocyte count 3 days after TBI. At 4 h after TBI there was a transient elevation of hemoglobin and hematocrit, with normalization of these parameters at 24 and 72 h after TBI. Platelets were within the physiological range in all groups at all time points and were not affected by the treatment (Table 5).
Study A: effect of neutrophil granulocyte depletion. Neutrophil depletion reduced tissue infiltration of neutrophils only at 4 h after TBI.
To detect the neutrophil infiltration into injured brain tissue, we analyzed the gene expression of MPO, considered a neutrophil marker 37, by qPCR (normalized naïve expression: 0.00000548 ± 0.00000017 mRNA/PPIA). In the first 4 h after TBI, the gene expression of MPO was elevated to 159 ± 60% naïve in Ctrl mice. Neutrophil depletion by anti-Ly6G reduced MPO expression within the injured brain tissue by 56% (70 ± 24% naïve; p < 0.05), compared to Ctrl mice, at 4 h after CCI (Fig. 2b). However, at 24 h after TBI (i.e., 48 h after application of anti-Ly6G), MPO gene expression of ND mice increased to the level of Ctrl mice (p < 0.05; Fig. 2b), despite reduced neutrophils in the WBC count of ND mice (Fig. 2a).
Neutrophil granulocyte depletion did not affect neurological outcome. Neurological outcome was assessed before and 24 h after TBI by the rotarod performance test. One day after TBI there was a marked neurological impairment in both groups (from 89 ± 15 and 78 ± 12 s before TBI to 38 ± 14 and 38 ± 8 s at 24 h after TBI, for Ctrl and ND, respectively; p < 0.001). However, neutrophil depletion did not alter neurological outcome compared to the control group (Fig. 2f).
Study C: effect of neutrophil granulocyte depletion combined with AT1 inhibition. AT1 inhibition had no effect on the neutrophil blood cell count. To analyze an independent effect of candesartan treatment on the neutrophil blood cell count, we compared all treatment groups by two-way ANOVA. While a sustained neutrophil depletion was achieved by anti-Ly6G (p < 0.001; Fig. 4a), AT1 inhibition did not affect the neutrophil granulocyte count (Fig. 4a).
Neutrophil depletion and AT1 inhibition had no effect on neurological outcome. Neurological assessment was performed 1 day before and 24 and 72 h after TBI using the mNSS and the time spent on the rotarod. Compared to pre-trauma values, CCI induced a significant impairment in the mNSS in all experimental groups 24 h after TBI (p < 0.001). Time spent on the rotarod and mNSS improved over the 3 days after TBI, without differences between the treatment groups.
AT1 inhibition did not affect neurological outcome in RAG1-deficient mice. Neurological outcome was assessed by mNSS and rotarod on days 1 and 3 after TBI. There was a significant increase of neurological impairment (p < 0.001). However, the neurological deficit was not affected by candesartan treatment (Fig. 5d).
Discussion
To investigate the yet unexplained role of cellular immune response in the context of AT1 inhibition following TBI, we examined the impact of candesartan treatment in both neutrophil-depleted and lymphopenic RAG1 −/− mice. The current findings indicate that both neutropenia and lymphopenia independently contributed to a reduction in brain damage following TBI. AT1 inhibition after TBI resulted in a decrease in brain damage and neuroinflammation in Ctrl mice with normal neutrophil counts, as well as in lymphopenic RAG1 −/− mice. However, in neutrophil-depleted mice, AT1 inhibition had no effect on brain damage or neuroinflammation. Hence, the current findings indicate that the neuroprotective effects of AT1 inhibition are partially mediated by neutrophils.
To deplete neutrophils, we employed a specific antibody against Ly6G, which selectively targets neutrophils while leaving other cell types unaffected [40][41][42]. Numerous dosage concepts exist for administering anti-Ly6G (4-40 µg/g). We based our dosage decision (approx. 20 µg/g) on murine studies that used anti-Ly6G to achieve sustained neutrophil depletion 41,43,44. To ensure a significant reduction of neutrophils, which was monitored using the WBC count, we conducted preliminary pilot dose-finding studies. We found that the present dosage (500 µg anti-Ly6G) and the application interval (24 h before and after CCI) were effective in producing a sustained reduction of neutrophils in the WBC count for the entire 3-day observation period after TBI 41,43,45. Furthermore, this dosage did not affect other blood cell populations, animal survival, or body weight. In contrast, the widely used, less specific anti-Gr1 (clone RB6-8C5) not only reduces Ly6G-specific cells (neutrophils, Gr1+/Ly6G+), but also other lines of WBC with Ly6 receptors (dendritic cells and subpopulations of monocytes and CD8 T-lymphocytes) 40,46.
To examine the effects of lymphopenia in TBI, we employed RAG1-deficient mice 47,48. As RAG1 plays a key role in VDJ recombination and B and T cell differentiation, RAG1-deficient mice have small lymphatic organs without mature B and T lymphocytes [47][48][49][50]. Their lymphocyte differentiation is blocked at an immature stage 49. In the present study, RAG1 −/− mice showed lymphopenia. Although certain cell lines, such as NK cells, may be present in RAG1 −/− mice 51, the lymphocytes detected in the WBC count of RAG1 −/− mice are likely immature lymphocytes 49.
The selected observation periods for each study were based on the optimal timing for the maximum infiltration of the two immune cell types into the brain tissue. Neutrophils, the dominant blood cell population 24 h after TBI 52,53, infiltrate early, with parenchymal infiltration peaking at 1 day after TBI 18,19. From the third day after TBI onwards, lymphocytes invade the cerebral tissue 6,18. To enhance the comparability of the impact of candesartan between the neutropenic and lymphopenic mice, we opted to use an observation period of 72 h following TBI in studies C and D, respectively 18.
To ensure timely and accurate hematologic analyses and interpretation, and to minimize pre-analytic errors, mouse-appropriate hematologic instrumentation was used, with a consistent collection method from the retroorbital sinus and a validated automated veterinary analyzer according to our standardized protocol 54,55. The quality proved adequate and the results are consistent with recent data 54. Neutrophil depletion with anti-Ly6G may not completely eliminate circulating neutrophils that lack Ly6G expression on their surface, as previously noted by Boivin et al. 56. However, our flow cytometry analysis was performed independently of Ly6G, as we used the ADVIA 2120i Hematology system for the differential blood cell count 29. This is a well-standardized flow cytometry-based system that distinguishes and counts WBC through two methods: peroxidase, detecting MPO-positive cells, and lobularity/nuclear density, classifying cells by size, lobularity, and nuclear density. Both methods allow for accurate identification and quantification of WBC populations, including neutrophils. Therefore, we believe that the neutropenia achieved by anti-Ly6G treatment is accurately represented.
In the present study, we observed a transient leukopenia as part of the inflammatory reaction following TBI, which we attribute to trauma-associated neutrophil sequestration 57. In Ctrl-treated mice, the WBC count normalized one day after TBI. Anti-Ly6G treatment resulted in a sustained reduction of the WBC count after TBI, lasting up to 3 days (within the physiological range) 54. Neutrophils, the most common granulocytes, generally comprise 20-30% of the WBC count in naïve wild type mice (70-80% are lymphocytes) 54. In Ctrl mice, TBI caused a shift from a lymphocyte-dominated WBC count to a sustained and significant increase in the neutrophil count 52,53. ND mice experienced neutropenia and leukopenia in comparison to naïve mice, with an increase in the lymphocyte count from 24 h post-TBI onwards. RAG1 −/− mice appeared leukopenic, and lymphopenia was compensated by elevated neutrophils. AT1 inhibition had no effect on the posttraumatic blood cell count in wild type, neutropenic and RAG1 −/− mice.
In the present study, the selective AT1 blocker candesartan was chosen, as it crosses the BBB 16. Consistent with prior research 4,8,12,17, a low dose of candesartan (0.1 mg/kg) was administered to avoid any negative impact on blood pressure, which can exacerbate TBI outcomes 58,59. To achieve sustained AT1 inhibition, treatment was started 30 min after TBI and then repeated daily 8.
The peritraumatic body weight, a surrogate parameter of well-being and of food and water intake, was not affected by any treatment. Behavioral and neuromotor functions were impaired in this acute phase but showed quick recovery in all groups. However, we did not detect any significant treatment effects on neurologic outcome using the NSS, which was originally developed by Tsenter et al. 26 as a non-parametric assessment that focuses on motor function and behavior. Therefore, we tried to implement a more sensitive parameter for neurofunctional outcome, the rotarod test, in subsequent studies. In studies C and D, we chose to combine the modified NSS (mNSS, adapted from Tsenter et al. 2008 4,26) with the rotarod test in an attempt to detect any treatment effects on both motor function and behavior. Unfortunately, despite our efforts, we were not able to demonstrate any significant treatment effects on neurofunctional outcome. The current neurological assessments may lack the sensitivity to detect the effects under investigation, or the protective effects on brain tissue may not have been pronounced enough to improve neurological deficits.
During the first days after TBI, microglia/macrophages show high phagocytic activity to remove necrotic and apoptotic cells, whereas astrocytes start to form a scar-like barrier around the brain lesion 60 . This leads to a contraction of the lesion-surrounding tissue and decrease of lesion volume assessed with Nissl staining. This can be observed in the present study considering different lesion sizes at different post-TBI time points, with cavitation processes beginning later than 5 days after CCI in our model (for example, at 7 days after CCI) 27 .
Consistent with recent research, our study demonstrated that neutrophil depletion decreased lesion size at 24 h, and neuroinflammation, following TBI 61. However, neutrophil depletion did not affect lesion size at 72 h post-CCI in our study. In contrast, a recent study demonstrated a sustained reduction of brain damage up to 14 days after CCI using a less specific Gr1 antibody to achieve neutrophil depletion 61. We hypothesize that neutrophils may affect brain injury at a very early time point after injury, as supported by previous live-imaging observations in mice 62 and by our results on MPO mRNA expression. We examined the gene expression of MPO, which is widely recognized as a reliable neutrophil marker 37,63,64. MPO mRNA expression was reduced in the injured brain tissue of ND (compared to Ctrl) mice at 4 h after TBI, that is, 28 h after application of anti-Ly6G. However, at later time points (24 h after CCI, 48 h after anti-Ly6G application), MPO mRNA expression in ND mice increased to the level of Ctrl mice. Thus, no reduction of neutrophil brain tissue infiltration was detectable at this time, despite reduced neutrophils in the WBC count of ND mice. In the present study, anti-Ly6G was applied 24 h before CCI, and in study C also at 24 h after CCI. Based on our finding that MPO expression did not differ between ND and Ctrl mice at 48 h after the application of anti-Ly6G, we believe that brain tissue neutrophil infiltration is only reduced by anti-Ly6G in the first 24 h after application, and that at later time points antibody-mediated neutrophil depletion therefore has no effect on parenchymal neutrophil infiltration. This is supported by recent reports indicating that antibodies against neutrophil Ly6G do not inhibit neutrophil recruitment 42. Our findings suggest that reduced neutrophil infiltration into damaged brain tissue within the first few hours after TBI (rather than after 24 h) results in beneficial effects, including reduced TNFα expression at 4 h, reduced structural brain damage at 24 h, and decreased microglial activation at 72 h post-CCI.
While T cells are known to play diverse roles in adaptive immune responses and inflammation regulation, their specific role in TBI pathogenesis remains unclear 65. Recent findings suggest that cerebral infiltration of T cells exacerbates neuroinflammation without affecting lesion volume after TBI 66,67. A previous study on closed-head injury found no significant differences in pathological or neurological parameters between wild-type and RAG1 −/− mice up to 7 days post-injury 47. The authors concluded that adaptive immunity is not crucial for initiating and sustaining inflammatory neuropathology after closed-head injury 47. Another TBI study by Fee et al. showed that CD4 + T lymphocytes contribute to the severity of the acute phase of TBI and that brain injury is attenuated in RAG1 −/− mice compared to wild-type animals 48. Consistent with these findings, RAG1 deficiency in the present study led to reduced brain damage at 1 and 5 days post-TBI. However, at 24 h post-TBI, compared to wild type, neutrophil depletion was found to be more effective in reducing lesion volume (33%) than lymphopenia (17%). Despite this, the reduction of lesion volume by lymphopenia was more sustained (up to 5 days post-TBI) than that of neutropenia (only 24 h after TBI). The acute post-traumatic cerebral infiltration of neutrophils is more pronounced than that of lymphocytes 18. Hence, it is conceivable that reducing neutrophil infiltration may have a more potent anti-inflammatory effect in the initial 24 h post-TBI than reducing lymphocyte infiltration, possibly explaining why brain damage and inflammation are consistently reduced in studies involving neutrophil depletion 61, while results are inconsistent in studies involving RAG1 deficiency 47,48. Emerging evidence indicates that the entire cellular immune response is modulated by the RAS 68,69. AngII is a major mediator of cerebral inflammation and oxidative stress through AT1 7,70. AT1 is widely expressed in the mature central nervous system, primarily in neurons, endothelial and smooth muscle cells, astrocytes, and microglia, which are important regulators of neuroinflammation 9,71. AT1 is also expressed on migrating immune cells, such as neutrophils, macrophages, and T cells. AT1 activation triggers the production of chemokines, cytokines, and adhesion molecules, which promote the immigration of activated immune cells into the lesion site 7,[72][73][74][75]. This process induces inflammation and generates high levels of ROS via NADPH oxidase activation. AT1 signaling modulates NADPH oxidase complex activity and promotes the transcription of pro-inflammatory cytokines through activation of NF-κB dependent transcription 15,76. Subsequently, AT1 activation stimulates various kinases, which propagate inflammatory responses and apoptotic pathways 17,[77][78][79][80].
The present study demonstrated that repeated post-traumatic administration of candesartan in mice with a normal post-traumatic neutrophil count led to reduced histological brain damage and decreased microglial activation at 3 days after TBI 8,12. Candesartan treatment resulted in a 12% reduction in activated microglia/macrophages, while neutrophil depletion led to an 8% reduction, compared to vehicle and control antibody treatment. These findings are consistent with earlier studies in which candesartan treatment reduced the number of neutrophils and microglia/macrophages 3 days after TBI 4,17.
Recent studies have demonstrated that AT1 inhibition after TBI reduces cytokine expression 7,8,81. The pleiotropic cytokine TNFα is implicated in BBB dysfunction and the transmigration of WBC into brain tissue, and it induces neuronal loss via microglial activation 2,82-84. After an early upregulation in the first 8 h after TBI 83, TNFα decreases significantly thereafter 85. This kinetic could explain why, in the present study, TNFα was reduced only at 4 h after TBI in ND mice. Cerebral IL1β expression increases in the first hour after TBI and reaches its highest levels 12 and 24 h after experimental TBI 86. Reduced activity of IL1β has been associated with improved neurological outcomes and reduced infiltration of neutrophils 83,[87][88][89]. The absence of an effect on IL1β expression in the present study following neutropenia, RAG1 deficiency, or AT1 inhibition may be attributed to the low levels of cytokine mRNA at 3 days post-CCI 15. Numerous studies have reported upregulation of IL6 following TBI, which is associated with increased microglial activation and neurological impairment 2,90. Recent clinical studies suggest a correlation between elevated serum levels of IL6, increased ICP, and severity of TBI 91. IL6 has also been shown to regulate the migration of neutrophils during acute inflammation 92. In this study, AT1 inhibition led to a reduction in IL6 expression three days after TBI in Ctrl mice. However, candesartan did not affect IL6 expression in neutropenic mice.
Limitations of the present study are the short duration of reduced neutrophil infiltration into brain tissue, which was only detected at 4 h after CCI and not at later time points, as well as the absence of observed effects on neurological outcome. The fact that we were not able to demonstrate a direct effect of AT1 inhibition on neutrophil infiltration into the brain tissue is a further limitation. Another significant limitation of the current study is the exclusive focus on acute post-traumatic time points. Further investigation is needed to examine the long-term impact of AT1 inhibition on structural brain damage, chronic brain inflammation, and cognitive dysfunction.
In a recent study, the protective effect of post-traumatic AT1 inhibition in young adult and aged mice was attributed to a decrease in microglia activation and an increase in anti-inflammatory microglia polarization. One major finding of that study was a significant reduction in neutrophil infiltration 4. AT1 is expressed on both circulating neutrophils and lymphocytes 74,93. However, in the previous study, perilesional T-cell immigration was not affected by AT1 inhibition 4. The neuroprotective mechanisms of AT1 inhibition in the acute phase after TBI may thus not depend on the adaptive lymphocyte response. A recent study showed that the expression of CD62L on human neutrophils is modulated by AT1 receptors via pathways involving ERK1/2, MAPK, and calcineurin 94, leading to reduced transmigration of neutrophils. It has been shown that AT1 inhibition leads to downregulation of important recruitment proteins such as ICAM1 on endothelial cells and CD11b/CD18 on WBC, and reduces the posttraumatic increase of BBB permeability. Consequently, AT1 inhibition leads to a significant reduction in the infiltration of immune cells [95][96][97][98]. A recent murine cerebral transcriptomic analysis after TBI showed strong alterations of gene transcription, particularly of the innate immune response, by candesartan treatment 99. Therefore, AT1 inhibition may have a direct and modulating anti-inflammatory effect on invading neutrophils and resident activated microglia 17. AT1 inhibition appears to provide neuroprotection by reducing the inflammatory response of the innate immune system, as evidenced by reduced microglial activation and decreased infiltration of neutrophils. This suggests that the protective effect of AT1 inhibition is mediated by its anti-inflammatory properties 4.
Conclusion
The present study indicates that the reduction of immune cells in both the innate and the adaptive immune system, specifically neutropenia (ND) and lymphopenia (RAG1 −/−), independently leads to decreased brain damage after TBI. Moreover, the study demonstrates that posttraumatic AT1 inhibition decreases brain damage and neuroinflammation in mice with normal neutrophil counts and in lymphopenic mice. However, in neutrophil-depleted mice, AT1 blockade had no effect on brain damage and neuroinflammation. We conclude that the neuroprotective effects of AT1 inhibition are independent of lymphocytes but dependent on neutrophils. Thus, reduced neutrophil invasion into the injured brain tissue may mediate the neuroprotection by AT1 inhibition. In summary, the study highlights the importance of immune cells in mediating neuroinflammation after TBI and the potential of AT1 inhibition as a therapeutic strategy against exacerbated neuroinflammation following TBI.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Improved Estimator of Measure for Marginal Homogeneity using Marginal Odds in Square Contingency Tables
For square contingency tables, Iki, Tahata and Tomizawa (2011) considered a measure to represent the degree of departure from the marginal homogeneity model. Using the first-order term in the Taylor series expansion, the estimated measure with the cell probabilities replaced by the corresponding sample proportions is an approximately unbiased estimator when the sample size is large. The present paper proposes an improved approximately unbiased estimator of the measure, obtained by using the second-order term in the Taylor series expansion. Simulation studies also show that the improved estimator approaches the true measure faster than the original estimator as the sample size becomes larger.
Introduction
Consider an $R \times R$ square contingency table with the same row and column ordinal classifications. Let $p_{ij}$ denote the probability that an observation will fall in the $i$th row and $j$th column of the table ($i = 1, \ldots, R$; $j = 1, \ldots, R$), and let $X$ and $Y$ denote the row and column variables, respectively. The marginal homogeneity model is defined by
$$p_{i\cdot} = p_{\cdot i} \quad (i = 1, \ldots, R),$$
where $p_{i\cdot} = \sum_{t=1}^{R} p_{it}$ and $p_{\cdot i} = \sum_{s=1}^{R} p_{si}$; see Stuart (1955). This indicates that the row marginal distribution is identical to the column marginal distribution. This model is also expressed as
$$F^{X}_{i} = F^{Y}_{i} \quad (i = 1, \ldots, R-1),$$
where $F^{X}_{i} = \sum_{k=1}^{i} p_{k\cdot}$ and $F^{Y}_{i} = \sum_{k=1}^{i} p_{\cdot k}$. Using the marginal logits, this model can be expressed as
$$\log \frac{F^{X}_{i}}{1 - F^{X}_{i}} = \log \frac{F^{Y}_{i}}{1 - F^{Y}_{i}} \quad (i = 1, \ldots, R-1).$$
This states that the log odds that $X$ is $i$ or below instead of $i+1$ or above is equal to the log odds that $Y$ is $i$ or below instead of $i+1$ or above, for $i = 1, \ldots, R-1$. Further, the marginal homogeneity model is expressed as $H_{1(i)} = H_{2(i)}$ ($i = 1, \ldots, R-1$), where
$$H_{1(i)} = F^{X}_{i}\left(1 - F^{Y}_{i}\right), \qquad H_{2(i)} = \left(1 - F^{X}_{i}\right) F^{Y}_{i}.$$
This indicates that the probability that the row variable $X$, selected at random from the row marginal distribution, is in category $i$ or below and the column variable $Y$, selected independently at random from the column marginal distribution, is in category $i+1$ or above, is equal to the probability that such $X$ is in category $i+1$ or above and such $Y$ is in category $i$ or below.
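The quantities above are easy to compute directly. The sketch below implements the definitions as reconstructed from the prose ($H_{1(i)} = F^X_i (1 - F^Y_i)$, $H_{2(i)} = (1 - F^X_i) F^Y_i$); the 3 × 3 table is a hypothetical example.

```python
import numpy as np

# Marginal cumulative probabilities and the quantities H_1(i), H_2(i)
# defined above, for an arbitrary R x R probability table.
def mh_quantities(p):
    p = np.asarray(p, dtype=float)
    R = p.shape[0]
    Fx = np.cumsum(p.sum(axis=1))[: R - 1]  # F^X_i, i = 1, ..., R-1
    Fy = np.cumsum(p.sum(axis=0))[: R - 1]  # F^Y_i
    H1 = Fx * (1.0 - Fy)
    H2 = (1.0 - Fx) * Fy
    return Fx, Fy, H1, H2

# Toy 3x3 table (hypothetical probabilities summing to 1):
p = np.array([[0.10, 0.05, 0.05],
              [0.10, 0.20, 0.10],
              [0.05, 0.15, 0.20]])
Fx, Fy, H1, H2 = mh_quantities(p)
print("H1:", H1, "H2:", H2)  # marginal homogeneity would give H1 == H2
```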
Since the marginal homogeneity model indicates that the $\{H_{1(i)}\}$ are equal to the corresponding $\{H_{2(i)}\}$, when the marginal homogeneity model does not hold we are interested in a measure for seeing how far the probabilities $\{H_{1(i)}\}$ and $\{H_{2(i)}\}$ are from marginal homogeneity. Iki et al. (2011) considered the measure $\Phi^{(\lambda)}$ to represent the degree of departure from marginal homogeneity for ordinal data, expressed by using the power divergence (Read and Cressie, 1988, p. 15) or the Patil and Taillie (1982) diversity index, as a function of the conditional probabilities
$$H^{*}_{1(i)} = \frac{H_{1(i)}}{H_{1(i)} + H_{2(i)}}, \qquad H^{*}_{2(i)} = \frac{H_{2(i)}}{H_{1(i)} + H_{2(i)}} \quad (i = 1, \ldots, R-1).$$
For $\lambda > -1$, the measure of departure from the marginal homogeneity model considered by Iki et al. (2011) is defined in terms of the power divergence between $\{(H^{*}_{1(i)}, H^{*}_{2(i)})\}$ and the uniform distribution $(1/2, 1/2)$, and the value at $\lambda = 0$ is taken to be the limit as $\lambda \to 0$. The measure $\Phi^{(\lambda)}$ must lie between 0 and 1, and it would be useful for comparing the degrees of departure from marginal homogeneity toward the maximum departure in several tables. Using the first-order term in the Taylor series expansion, the estimated measure with the cell probabilities replaced by the corresponding sample proportions is an approximately unbiased estimator when the sample size is large. Using the second-order term, Tahata et al. (2014) proposed refined estimators of the measures for marginal homogeneity proposed by Tomizawa and Makii (2001) and Tomizawa et al. (2003). So we are now interested in proposing an improved approximately unbiased estimator of $\Phi^{(\lambda)}$.
The purpose of the present paper is to propose an improved approximately unbiased estimator of $\Phi^{(\lambda)}$. Section 2 gives such an estimator. Section 3 shows by simulation studies that the proposed estimator works well in many cases.
Improved Approximate Unbiased Estimator
Assume that the observed frequencies $\{n_{ij}\}$ have a multinomial distribution. Let $\boldsymbol{p}$ denote the $R^2 \times 1$ vector of the cell probabilities $\{p_{ij}\}$, where the superscript $t$ means transpose. Also let $\{\hat{p}_{ij}\}$ be the sample proportions, where $\hat{p}_{ij} = n_{ij}/n$ with $n = \sum\sum n_{ij}$, and let $\hat{\boldsymbol{p}}$ be the $R^2 \times 1$ vector defined in the same way. Let $g(\boldsymbol{p})$ be a smooth function of $\boldsymbol{p}$, such as $\Phi^{(\lambda)}$. We assume that $g$ has a nonzero differential at $\boldsymbol{p}$, i.e., that $g$ has the following expansion as $\hat{\boldsymbol{p}} \to \boldsymbol{p}$:
$$g(\hat{\boldsymbol{p}}) = g(\boldsymbol{p}) + (\hat{\boldsymbol{p}} - \boldsymbol{p})^{t}\,\frac{\partial g(\boldsymbol{p})}{\partial \boldsymbol{p}} + o\left(\lVert \hat{\boldsymbol{p}} - \boldsymbol{p} \rVert\right).$$
For details, see, e.g., Agresti (2013, p. 589) and Bishop et al. (1975, p. 486). For large $n$, we can see from the above equation that $g(\hat{\boldsymbol{p}})$ is an approximately unbiased estimator of $g(\boldsymbol{p})$ because the mean of $\hat{\boldsymbol{p}}$ equals $\boldsymbol{p}$. Similarly, the sample version of $\Phi^{(\lambda)}$, i.e., $\hat{\Phi}^{(\lambda)}$, given by $\Phi^{(\lambda)}$ with $\{p_{ij}\}$ replaced by $\{\hat{p}_{ij}\}$, is an asymptotically unbiased estimator of $\Phi^{(\lambda)}$ when the sample size $n$ is large.
Assuming that $g$ has a second differential at $\boldsymbol{p}$, $g(\hat{\boldsymbol{p}})$ has the following expansion as $\hat{\boldsymbol{p}} \to \boldsymbol{p}$:
$$g(\hat{\boldsymbol{p}}) = g(\boldsymbol{p}) + (\hat{\boldsymbol{p}} - \boldsymbol{p})^{t}\,\frac{\partial g(\boldsymbol{p})}{\partial \boldsymbol{p}} + \frac{1}{2}\,(\hat{\boldsymbol{p}} - \boldsymbol{p})^{t}\,\frac{\partial^{2} g(\boldsymbol{p})}{\partial \boldsymbol{p}\,\partial \boldsymbol{p}^{t}}\,(\hat{\boldsymbol{p}} - \boldsymbol{p}) + o\left(\lVert \hat{\boldsymbol{p}} - \boldsymbol{p} \rVert^{2}\right).$$
Therefore, when the sample size $n$ is large, the mean of $g(\hat{\boldsymbol{p}})$, i.e., $E(g(\hat{\boldsymbol{p}}))$, is approximately equal to
$$g(\boldsymbol{p}) + \frac{1}{2n}\operatorname{tr}\left[\frac{\partial^{2} g(\boldsymbol{p})}{\partial \boldsymbol{p}\,\partial \boldsymbol{p}^{t}}\left(\operatorname{diag}(\boldsymbol{p}) - \boldsymbol{p}\boldsymbol{p}^{t}\right)\right],$$
since $n \operatorname{Cov}(\hat{\boldsymbol{p}}) = \operatorname{diag}(\boldsymbol{p}) - \boldsymbol{p}\boldsymbol{p}^{t}$ for the multinomial distribution. Thus
$$g(\hat{\boldsymbol{p}}) - \frac{1}{2n}\operatorname{tr}\left[\frac{\partial^{2} g(\boldsymbol{p})}{\partial \boldsymbol{p}\,\partial \boldsymbol{p}^{t}}\left(\operatorname{diag}(\boldsymbol{p}) - \boldsymbol{p}\boldsymbol{p}^{t}\right)\right]$$
is approximately unbiased for $g(\boldsymbol{p})$, and it would approach $g(\boldsymbol{p})$ faster than $g(\hat{\boldsymbol{p}})$ as the sample size $n$ becomes larger. However, since the second term is unknown, the improved estimator of $g(\boldsymbol{p})$ is obtained by evaluating this correction term at $\hat{\boldsymbol{p}}$:
$$g^{*}(\hat{\boldsymbol{p}}) = g(\hat{\boldsymbol{p}}) - \frac{1}{2n}\operatorname{tr}\left[\frac{\partial^{2} g(\hat{\boldsymbol{p}})}{\partial \boldsymbol{p}\,\partial \boldsymbol{p}^{t}}\left(\operatorname{diag}(\hat{\boldsymbol{p}}) - \hat{\boldsymbol{p}}\hat{\boldsymbol{p}}^{t}\right)\right].$$
We now propose the improved estimator $\hat{\Phi}^{(\lambda)*}$ of the true measure $\Phi^{(\lambda)}$ by applying this correction with $g = \Phi^{(\lambda)}$ for $\lambda > -1$, where for $\lambda = 0$ the value is taken to be the limit as $\lambda \to 0$.
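The bias correction above can be checked numerically for any smooth functional of multinomial proportions. The sketch below is generic: the Hessian is obtained by central finite differences, and the functional $g(\boldsymbol{p}) = \sum p_{ij}^2$ is a simple placeholder, not the measure $\Phi^{(\lambda)}$ itself.

```python
import numpy as np

# Second-order (delta-method) bias correction for a smooth function g of
# multinomial proportions, as derived above. The Hessian is approximated
# by central finite differences.
def bias_corrected(g, p_hat, n, eps=1e-5):
    k = p_hat.size
    hess = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            e_a, e_b = np.eye(k)[a] * eps, np.eye(k)[b] * eps
            hess[a, b] = (g(p_hat + e_a + e_b) - g(p_hat + e_a - e_b)
                          - g(p_hat - e_a + e_b) + g(p_hat - e_a - e_b)) / (4 * eps**2)
    cov = (np.diag(p_hat) - np.outer(p_hat, p_hat)) / n  # multinomial covariance
    return g(p_hat) - 0.5 * np.trace(hess @ cov)

# Example with the placeholder functional g(p) = sum(p^2):
rng = np.random.default_rng(1)
p = np.array([0.2, 0.3, 0.5])
n = 50
p_hat = rng.multinomial(n, p) / n
print(bias_corrected(lambda q: np.sum(q**2), p_hat, n))
```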
Simulation Studies
By simulation studies, we calculate the values of the estimated measures $\hat{\Phi}^{(\lambda)}$ and $\hat{\Phi}^{(\lambda)*}$ from observed frequencies of sample sizes $n = 30, 40, 50, 100, 500$ and $1000$, which are obtained from the true probability distributions (see Tables 1a to 6a). We compare the means of the values of $\hat{\Phi}^{(\lambda)}$ and $\hat{\Phi}^{(\lambda)*}$ obtained by 1000 simulations for each sample size. The results of the simulations are given in Tables 1c to 6c. Tables 1a, 3a and 5a have the characteristic that the sum of the probabilities of the main-diagonal cells is very small ($p_{ii} = 0.020$ for $i = 1, 2, 3, 4$), and Tables 2a, 4a and 6a have the characteristic that the sum of the probabilities of the main-diagonal cells is large ($p_{ii} = 0.100$ for $i = 1, 2, 3, 4$). Also, the true values of the measures for Tables 1a and 2a are small, while those for Tables 3a and 4a are medium, and those for Tables 5a and 6a are large.
From Tables 1c to 6c we can see that the improved estimator $\hat{\Phi}^{(\lambda)*}$ approaches the true value $\Phi^{(\lambda)}$ faster than the original estimator $\hat{\Phi}^{(\lambda)}$ when $\lambda \geq 1$. The improvement is especially large when the sample size is small.
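A minimal simulation in the spirit of this section is sketched below: multinomial samples are drawn from a fixed table, and the means of a plain plug-in estimator and a bias-corrected estimator are compared to the true value. The functional used is a simple stand-in (not $\Phi^{(\lambda)}$), and the probability vector is hypothetical.

```python
import numpy as np

# Compare a plug-in estimator with its bias-corrected version over
# 1000 replications, for increasing sample sizes.
rng = np.random.default_rng(42)

def g(p_flat):
    return np.sum(p_flat ** 2)  # stand-in functional with known bias

p = np.array([0.10, 0.05, 0.05, 0.10, 0.20, 0.10, 0.05, 0.15, 0.20])
true_value = g(p)

for n in (30, 50, 100, 500):
    plain, corrected = [], []
    for _ in range(1000):
        p_hat = rng.multinomial(n, p) / n
        plain.append(g(p_hat))
        # Plug-in second-order correction: E[g(p_hat)] = g(p) + (1 - g(p))/n
        corrected.append(g(p_hat) - (1.0 - g(p_hat)) / n)
    print(f"n={n}: true={true_value:.4f} "
          f"plain={np.mean(plain):.4f} corrected={np.mean(corrected):.4f}")
```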
Concluding Remarks
The present paper has proposed the improved approximately unbiased estimator $\hat{\Phi}^{(\lambda)*}$ of the true measure $\Phi^{(\lambda)}$. From the simulation studies, we conclude that the improved estimator $\hat{\Phi}^{(\lambda)*}$ tends to approach the true value $\Phi^{(\lambda)}$ faster than the estimator $\hat{\Phi}^{(\lambda)}$ as the sample size $n$ becomes larger, when $\lambda \geq 1$.
When $\lambda < 1$, we can calculate the improved estimator $\hat{\Phi}^{(\lambda)*}$ only in the case of $H_{1(i)} > 0$ and $H_{2(i)} > 0$ for $i = 1, \ldots, R-1$, i.e., $p_{1\cdot} > 0$, $p_{R\cdot} > 0$, $p_{\cdot 1} > 0$ and $p_{\cdot R} > 0$. On the other hand, the original estimator $\hat{\Phi}^{(\lambda)}$ can be calculated in the case of $H_{1(i)} + H_{2(i)} > 0$ for $i = 1, \ldots, R-1$. In other words, the calculable conditions differ between the improved estimator and the original estimator. Thus, it seems difficult to evaluate by simulation study whether the improved estimator tends to approach the true value faster than the original estimator when $\lambda < 1$. Therefore, we recommend that the proposed estimator be used for the case of $\lambda \geq 1$; then this estimator works very well.